And the progressive complexity of the orthography suggests a progressive shift in literary style as well: from the simple and instructional to the more gnomic, abstruse, poetic. This would accompany the orthography, but would also simply answer the need for example text once there's little left to explain beyond the symbols themselves.
Another crucial help in establishing focus on the individual user and writing for the internet at large would be allowing custom domain names.
Playtesting has been informative. I'm happy to say that the proposed hand and deck arrangements provide for much livelier play.
I've also been experimenting with different scoring schemes; that's where the remaining work lies. One quality that I failed to fully appreciate was the simplicity of the original system—not just in terms of the arithmetic needed to sum up one's score (everything being 10, 50, 150, 250, etc.) but also in the mnemonic elegance of how the scores are represented by captured cards.
In the original system there are really only three scoring tokens, that is, captured cards used to keep score and add up at the end of the game: face-up cards are worth 10 points, face-down cards are worth 100 points, and 10♦ and the Jacks are worth 50 points. Every scoring unit (100 for a glove, 200 for a sock, 150 for a glove of Jacks, etc.) can be composed of those three items.
Whatever the relative fairness or proportionality of those specific amounts, the value of a system which only has three tokens to mix and match can't be overstated. In our playtesting, the adjusted-scoring games (notwithstanding whatever degree they need further adjustment, for reasons stated above) weren't nearly as fun or smooth as the ones with the classic scoring simply because they required so much more effort and bother.
It's clear, if I do want to adjust the scores for fairness, that a similar system—three scoring tokens at max—will have to stay.
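The economy of the classic system can be seen in how any scoring unit decomposes greedily into the three tokens. This little function is my own illustration, using the token values from the post:

```python
def tokens(score):
    """Greedy decomposition of a score into the three classic tokens.
    (The function is my own sketch; the token values are from the post.)"""
    out = {}
    for value, name in [(100, "face-down card"),
                        (50, "Jack or 10♦"),
                        (10, "face-up card")]:
        out[name], score = divmod(score, value)
    return out

print(tokens(150))  # 150 = one face-down card plus one Jack
```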
So it occurs to me that one of the interesting problems in personal productivity, to-do lists, and the like is programmatically determining free time. I will tell you my thought process here:
Another area that modern app design hasn't reached is the literary journal. The poets who publish on the web do so in one of two types of place:
Neither format is optimal. Literature is inescapably aesthetic and thus design, typography, the rest are integral to its function. But this is the modern age and I should be able to tweet my poems and subscribe to RSS feeds.
In the death of history we also have the infinitization of history. Now, though we are living without a past, everything is in the past; now everything is fixed into history as soon as it is uttered, just as everything fixed has been freed from all referents and all bodies in motion have been deranged from their arcs.
This enduring sensation, that we are living constantly in the stillborn present, is what allows all past things to stretch out before us. This field leaves out no stone, no potsherd.
That element of style which isn't aesthetics, or which is aesthetics but gives no pleasure, or gives pleasure but makes life no easier, or which makes life easier but doesn't support one in goodness—should be relinquished.
The Moby website includes a link to 175,000 entries fully coded in the International Phonetic Alphabet. Therein are two different text files, both of which have English words rendered in a uniform phonetic style. They would go a long way toward building a tree of all phonotactically valid English words.
I'll admit, my primary motivation for posting this was a hope that jtauber would have something to say about it.
I had a similar thought for a start, but not for English. Ideally you'd find a word list with the most phonemic spelling you could find. That's one of the reasons that I thought of Yiddish, whose Romanization (YIVO) is very regular and reflective of its phonology. You'd have to come up with some mildly sophisticated rules for a lexer that crawls the words and builds syllables—mostly for building diphthongs and consonant clusters—but nothing too hairy.
I have often wondered about what I'll call for lack of a better term 'phonotactic coverage'.
That is, for all the possible lemmas according to a language's phonotactics, how many of them actually exist in the language?
I suppose you could also call it phonotactic density.
My intuition has been that Yiddish is particularly dense along its phonotactics. Now studying Italian, I wonder if it is too.
It seems like there are two main approaches to systems design. In the first, when composing separate features, you abstract them all until you derive a conceptual lowest common denominator, then define a single unified hierarchy in which every feature is an instance of a generalized mechanism. In the second, each feature (or a given new feature) is developed independently, in the manner that requires the least overhead, conceptual or otherwise.
It seems like most people have a bias towards one approach or the other, and that skill in systems design (as opposed to talent) is largely a matter of knowing when a given situation demands one or the other.
Finally, it seems (maybe somewhat fancifully) that these two approaches have analogues, or at least namesakes, in the linguistic world. We might call the former synthetic and the latter analytic.
We must define relation as the intersection of two subjective experiences. Any communication technology is necessarily alienating, because it can only—at best—bring a proxy of someone else into my experience. But as long as there is a mediating layer between our two experiences, we can't be said to be interacting with each other. Only with representations of each other.
This fundamentally alienating quality of all communication media remains unaltered in the present age. The exponential growth of social media might have the appearance of bringing people into relation to each other, and certainly exposes the individual to far more numerous proxies of others than before, but they remain proxies. So the more people seem to be connected, the more alienated they are in fact, as more of their postures of relation are concerned with representations and proxies, rather than subjective beings.
For the majority of this week I've been working on building a toy framework, a Flask clone. Before I came to Hacker School I thought I should learn how to use web frameworks; I'm glad I didn't spend too much time on that, because building one yourself turns out to be a very good way to learn how they work. Of course mine is not fit for actual production, but I'm confident that by the end, the work of implementing my own will give me a much firmer grasp on how to pick up any framework.
All I really had to do to get started was work through the Flask tutorials to learn what it does, and then through this WSGI tutorial to learn how WSGI applications (of which Flask is one) are put together. Then it was off to the races. Now I can handle routes, serve pages, handle GET and POST, run templates with variable replacement and template extension, and interact with a MongoDB database. Next I want to build conditional logic and looping into my templates; then I'm going to try to build an ORM for the framework.
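The core of any such framework is surprisingly small. Here's a minimal sketch of a WSGI application with a route table, in the spirit of what's described above; all the names are my own stand-ins, not the actual framework's API:

```python
# Route table mapping paths to view functions.
routes = {}

def route(path):
    """Register a view function for a given path."""
    def register(view):
        routes[path] = view
        return view
    return register

@route("/hello")
def hello(environ):
    return "Hello, world!"

def application(environ, start_response):
    """The WSGI callable: dispatch on PATH_INFO and respond."""
    view = routes.get(environ.get("PATH_INFO", "/"))
    if view is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not Found"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [view(environ).encode("utf-8")]
```

Any WSGI server (the standard library's wsgiref.simple_server, for instance) can serve a callable like this directly.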
What they call 'mindfulness' is a boring miracle.
Meanwhile posical is coming along well. Implementing timedeltas and comparisons was a snap; I just downcast them into datetime dates, do the math there, and then make a new posical date with the result.
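The downcasting trick is easy to sketch. Everything below (the class name, the epoch choice, the method names) is my own toy rather than posical's actual code; the point is just the shape of the technique: convert to a datetime.date, do the arithmetic there, and convert back.

```python
import datetime

class PositivistDate:
    """Toy alternate-calendar date that delegates all math to datetime."""
    EPOCH = datetime.date(1789, 1, 1)  # stand-in epoch for illustration

    def __init__(self, days_since_epoch):
        self.days = days_since_epoch

    def to_date(self):
        # Downcast to a regular datetime.date.
        return self.EPOCH + datetime.timedelta(days=self.days)

    @classmethod
    def from_date(cls, d):
        # Lift a datetime.date back into the alternate calendar.
        return cls((d - cls.EPOCH).days)

    def __add__(self, delta):
        # Date math happens entirely in datetime-land.
        return self.from_date(self.to_date() + delta)

    def __lt__(self, other):
        return self.to_date() < other.to_date()

d = PositivistDate(0) + datetime.timedelta(days=30)
```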
During my first week I worked on an application called abba—the abbreviation engine.
abba began life as an adaptation of the New Abbreviations, the shorthand system that I've been working on since 2006. I wanted to write an application that could basically model the Abbreviations; the challenge being that many of the abbreviations I made up have no unicode equivalents, so you couldn't simply write a script to do unicode character replacement.
Instead, what abba does is insert references to abbreviation objects, which exist independently of their representation in any given medium. It does this by the rather clever method (originally suggested by Tom, I think) of passing through single high-plane unicode characters which can then be dereferenced at rendering time.
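The pass-through scheme can be sketched like this. The toy ruleset, the encode/render names, and the upper-casing realizer are all my own examples, not abba's real internals:

```python
import re

# Map each abbreviation to a single Private Use Area character that
# stands in for an abbreviation object in the encoded text.
PUA_START = 0xE000
abbreviations = ["the", "tion", "er"]          # toy ruleset
codepoint = {a: chr(PUA_START + i) for i, a in enumerate(abbreviations)}
lookup = {v: k for k, v in codepoint.items()}  # dereference table

def encode(text):
    """Replace matched sequences with their reference characters."""
    for abbr, char in codepoint.items():
        text = re.sub(abbr, char, text)
    return text

def render(encoded, realize):
    """Dereference at rendering time; realize() maps an abbreviation
    back to whatever representation the medium calls for."""
    return "".join(realize(lookup[c]) if c in lookup else c
                   for c in encoded)

coded = encode("the nation")
print(render(coded, lambda a: a.upper()))  # THE naTION
```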
This morning I built a small toy called posical, which models August Comte's Positivist Calendar in python. It seems to be a sort of spiritual little brother to abba. Next I'll want to make it understand timedeltas so you can do date math. But finally it seems like it would be fun to generalize it into an all-purpose tool for constructing alternate calendars; god knows there are enough of them floating around and it'd be fun to make a general model.
So here's a little sample of what abba looks like as of right now. It's very much in development but it's already pretty fun to look at. Allow me to run through the first couple posts in my stream about the New Abbreviations:
Foꝛ ð laſt ſix years oꝛ ſo, I'v̯ woꝛk̳ on a complet̯ ⁊ compꝛehenſiv̯ ſyſtem of tranſcription-bas̳ ʃoꝛðaͫ Ⱳc tak̯s its ɹſpiration pꝛimari᷏ from ðeabbꝛeviation̯s of ð medieval ſcribal tradition. It's tailoꝛ̳ pꝛimari᷏ foꝛ wꝛiti̫ ɹ E̫liʃ, ðouȝ it's uſeabl̯ ɹ any la̫uag̯ ðt uſes ð Latin ɖaraɥę ſet. It compꝛiſes about 150 ɹdividual ɖaraɥ੭s, ðouȝ a ſizeabl̯ ɖṵ of ðoſe ɖaraɥ੭s ar̯ diacritics ðt ç combin̯ WITH oðęs ŧ foꝛm a v੭y flexibl̯, denſe, ⁊ a̯sðetical᷏ pleaſi̫ ſyſtem. I'v̯ docum̯̳̃ ðs ſyſtem ɹ a ſliȝt᷏ l̯ſs-ðan-matuꝛe foꝛm; it's closę ŧ featuꝛe complet̯ now. Ev੭y ſo often I conſidę ð b̯ſt way ŧ publiʃ ð ſyſtem foꝛ widę conſumption.
Arguab᷏ ð v੭y firſt ði̫ ðt needs ŧ b̯ don̯ ǂ ŧ freez̯ ð featuꝛe ſet, as it w੭e, ⁊ defin̯ a v੭ſion 1.0 onc̯ I'm confid̯̃ it's got a‖ ð glyphs foꝛ woꝛds ⁊ ɖaraɥę combinations foꝛ a firſt paſs.
ðr ar̯ ſev੭al t̯ɖnical conſid੭ations aftę ðt. ð lett੭s n̯̳ ŧ b̯ wꝛitten by haͫ ⁊ ſcann̳.
But I alſo n̯̳ ŧ eſtabliʃ a way ŧ repꝛeſẽ ðem ɹ pꝛĩ̳ text; cuꝛr̯᷏̃ I hav̯ a larg̯ numbę of bitmaps ðt I mad̯ from editi̫ a fix̳-widð pixel fõ. ðs may oꝛ may not b̯ ð b̯ſt way. Obvious᷏ a fu‖ featuꝛ̳ typefac̯ would b̯ b̯ſt but ðt miȝt b̯ beyoͫ my abiliti̯s.
ſecoͫ ðr's ð qu̯ſtion of how ŧ compoſe ð docum̯̃s; wh̯ðę ŧ put ŧg̯ðę a PDF docum̯̃ oꝛ a navigabl̯ webſit̯.
On̯ of ð qu̯ſtions ŧ anſwę h੭e ǂ, Ⱳt ǂ ð puꝛpoſe of docum̯̃i̫ it ɹ ð firſt plac̯? Oꝛ moꝛe dir̯ɥ᷏—ǂ it exp̯ɥabl̯ ðt anybody who iſn't m̃ would
a) Hav̯ ɹt੭eſt ɹ learni̫ ðs ſyſtem, ⁊ b) B̯ abl̯ ŧ learn it ŧ ð degre̯ ðt I hav̯?
It requir̯s eßẽial᷏ no ðouȝt foꝛ m̃ ŧ wꝛit̯ ɹ ðs ſyſtem. But ðt muſt b̯ ɹ larg̯ part becauſe I cam̯ up WITH it; ⁊ becauſe I cam̯ up WITH ɹdividual ɖaraɥ੭s ovę ð couRSE of tim̯.
I hav̯ no exp੭ienc̯ WITH any ſoꝛt of beta t̯ſt੭s, ɹ oðę woꝛds.
And here's the same text with generator mode enabled:
Ɨ: 'THE'
ɲ: 'TO'
ʎ: 'OF'
ϗ: 'IN'
Ʒ: 'THAT'
ԡ: 'IT'
ϒ: 'AND'
ӓ: 'BE'
Ԙ: 'SYSTEM'
ȷ: 'FOR'
Ȟ: 'IS'
ϙ: 'WITH'
ʉ: 'HAVE'
Ҫ: 'IT'
ζ: 'CHARACTERS'
Ƒ: 'OR'
Ї: 'BEST'
Ρ: 'FIRST'
Σ: 'THIS'
ϸ: 'WAY'
ӕ: '_TH'
Ϻ: '_IN'
ϟ: '_AT'
ɑ: '_IT'
Ұ: '_TO'
Ӽ: '_TE'
Ѽ: '_AN'
Ӷ: '_ST'
ӄ: '_CO'
Ƣ: '_BE'
Ϯ: '_AR'
ƫ: '_IS'
Ϗ: '_ER'
ȵ: '_LE'
ɮ: '_RE'
ȷ Ɨ laӶ six yeϮs Ƒ so, I've worked on a ӄmpȵӼ ϒ ӄmpɮhensive Ԙ ʎ trѼscription-based shorӕѼd which takes ɑs Ϻspirϟion primϮily from ӕeabbɮviϟiones ʎ Ɨ medieval scribal tradɑion. ԡ's tailoɮd primϮily ȷ wrɑϺg ϗ Englƫh, ӕough ԡ's useabȵ ϗ Ѽy lѼguage Ʒ uses Ɨ LϟϺ chϮacӼr set. ԡ ӄmprƫes about 150 Ϻdividual ζ, ӕough a sizeabȵ chunk ʎ ӕose ζ Ϯe diacrɑics Ʒ cѼ ӄmbϺe ϙ oӕϏs ɲ form a vϏy fȵxibȵ, dense, ϒ aesӕetically pȵasϺg Ԙ. I've documenӼd Σ Ԙ ϗ a slightly ȵss-ӕѼ-mϟuɮ form; ԡ's closϏ ɲ feϟuɮ ӄmpȵӼ now. EvϏy so ofӼn I ӄnsidϏ Ɨ Ї ϸ ɲ publƫh Ɨ Ԙ ȷ widϏ ӄnsumption.
Ϯguably Ɨ vϏy Ρ ӕϺg Ʒ needs ɲ ӓ done Ȟ ɲ fɮeze Ɨ feϟuɮ set, as ԡ wϏe, ϒ defϺe a vϏsion 1.0 once I'm ӄnfident ԡ's got all Ɨ glyphs ȷ words ϒ chϮacӼr ӄmbϺϟions ȷ a Ρ pass.
ӕϏe Ϯe sevϏal Ӽchnical ӄnsidϏϟions afӼr Ʒ. Ɨ ȵtӼrs need ɲ ӓ wrɑӼn by hѼd ϒ scѼned.
But I also need ɲ eӶablƫh a ϸ ɲ ɮpɮsent ӕem ϗ prϺӼd Ӽxt; curɮntly I ʉ a lϮge numƢr ʎ bɑmaps Ʒ I made from edɑϺg a fixed-widӕ pixel font. Σ may Ƒ may not ӓ Ɨ Ї ϸ. Obviously a full feϟuɮd typeface would ӓ Ї but Ʒ might ӓ Ƣyond my abilɑies.
Seӄnd ӕϏe's Ɨ queӶion ʎ how ɲ ӄmpose Ɨ documents; wheӕϏ ɲ put ҰgeӕϏ a PDF document Ƒ a navigabȵ websɑe.
One ʎ Ɨ queӶions ɲ ѼswϏ hϏe Ȟ, whϟ Ȟ Ɨ purpose ʎ documentϺg ԡ ϗ Ɨ Ρ place? Ƒ moɮ diɮctly—Ȟ ԡ expectabȵ Ʒ Ѽybody who ƫn't me would
a) ʉ ϺӼɮӶ ϗ ȵϮnϺg Σ Ԙ, ϒ b) ӓ abȵ ɲ ȵϮn ԡ ɲ Ɨ degɮe Ʒ I ʉ?
ԡ ɮquiɮs essentially no ӕought ȷ me ɲ wrɑe ϗ Σ Ԙ. But Ʒ muӶ ӓ ϗ lϮge pϮt Ƣcause I came up ϙ ԡ; ϒ Ƣcause I came up ϙ Ϻdividual ζ ovϏ Ɨ ӄurse ʎ time.
I ʉ no expϏience ϙ Ѽy sort ʎ Ƣta ӼsӼrs, ϗ oӕϏ words.
One happy result of working on abba is that it's informed my understanding of the Abbreviations, too. One immediate and obvious consequence is that it will force me to very rigorously define positioning rules and such; any place where I've been allowing myself to fudge things a little when composing texts (because things will be immediately obvious from context) will be exposed, and the abba implementation of the New Abbreviations will evolve as a reference implementation of the Abbreviations themselves.
Something else that this has pointed out to me is that the unicode realization of any given abbreviation and my handwritten realization of it need not be very similar at all. Precisely because abbreviations are objects with multiple renderings before they are unicode characters, I can happily say that 'in', for instance, looks like 'ɹ' in unicode and like something similar, but distinct, when written by hand. And if I later want to write a bitmap outputter or something that replicates the glyphs as they're written by hand, I can.
In abba the specific abbreviations are abstracted away from the program itself. So, while the New Abbreviations will be the main data set that I write for myself, it's important that any user be able to write whatever set of abbreviations they would like to see.
Right now the abbreviation definitions are stored in a JSON file that describes a series of sets of regular-expression transforms. One exciting feature I'm working on right now is my own markup language for describing abbreviations, to make them easier to write.
Right now each abbreviation is stored as the regular expression it matches, its name, and its realization in a given renderer (unicode is the one I'm starting with), if it has one. But I intend for the user to be able to use a much simpler patterning system if they like: for instance, writing er.terminal.initial, which the system will compile into "er(?=\\b)|(?=\\b)er", and so on.
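A compiler for that mini-language might look something like this sketch. The function and the table of positional forms are my guesses, anchored only to the one example compilation given above:

```python
def compile_pattern(spec):
    """Compile a dotted spec like 'er.terminal.initial' into an
    alternation of positional regexes. (Hypothetical sketch; only the
    'terminal' and 'initial' forms are attested in the post.)"""
    seq, *positions = spec.split(".")
    forms = {
        "terminal": rf"{seq}(?=\b)",   # sequence at the end of a word
        "initial": rf"(?=\b){seq}",    # sequence at the start of a word
    }
    return "|".join(forms[p] for p in positions)

print(compile_pattern("er.terminal.initial"))  # er(?=\b)|(?=\b)er
```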
The lack of availability of a certain character is not the only good reason to divorce typeset and handwritten realizations, either. There's also just good typesetting practice. For instance, terminal er in the Abbreviations looks more or less like 'ɛ' with one more hook coming off the bottom, if that makes sense. The closest unicode analog is Ę, LATIN CAPITAL LETTER E WITH OGONEK. But it quickly became apparent that Ę looks very awkward at the end of a word. It's a capital letter and the eye has very specific expectations for where capital letters are to be found. So I made the unicode version of terminal er into ę, instead. That's a little awkward to unambiguously write by hand, but it makes perfect sense when reading typewritten text. And of course this relationship of very light digraphia is already very much the norm whenever one compares typeset text and handwritten text.
Finally, I'm building into abba the ability to actually generate its own abbreviation rules on the fly.
If the user so desires, abba can take any given text, perform some word and letter-sequence frequency analysis on it (very simple right now), and produce its own set of abbreviation rules, with randomly assigned abbreviation characters, to apply to the text in question. The ruleset is treated exactly the same as a ruleset read from a JSON file, so anything abba can do in one mode, it can do in the other. I think this is quite fun.
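A toy version of that generator mode might look like this; the word-only tokenization, the cutoff, and all the names are my own simplifications:

```python
import collections
import random
import re

def generate_ruleset(text, n=3, seed=0):
    """Find the n most frequent words in a text and assign each a
    random Private Use Area character. (Sketch only; real frequency
    analysis would also cover letter sequences.)"""
    words = re.findall(r"[a-z]+", text.lower())
    common = [w for w, _ in collections.Counter(words).most_common(n)]
    rng = random.Random(seed)
    chars = rng.sample(range(0xE000, 0xF8FF), n)
    return {w: chr(c) for w, c in zip(common, chars)}

rules = generate_ruleset("the cat and the dog and the bird", n=2)
```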
Part of this markup system is necessary simply to be able to refer to other abbreviations within the abbreviation set: a major bug is that, ordinarily, the regular expression library doesn't recognize the private use characters as word characters (understandably so), and thus has trouble combining them sensibly.
For my own sense of propriety and politesse I decided to programmatically assign codepoints from the Private Use Area, which is designated to remain unused by official encodings. I originally thought it might be nice to use Plane 15, the Supplementary Private Use Area-A, but that would involve surrogate pairs and complicate the issue. Later concerns ended up persuading me to move to Python 3 (where everything is a unicode string), which might have mooted the problem, but in any case it seems saner and more extensible to restrict myself to the standard Private Use Area, which still presents me with 6,400 codepoints to make use of.
In any case, I should implement something pretty soon that sanitizes the source data before the abbreviations are applied, escaping any characters in that range that are already in the document.
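Such a sanitizing pass might look like the following; the escape scheme here is my own invention, purely for illustration:

```python
def sanitize(text):
    """Escape any characters already in the Private Use Area
    (U+E000..U+F8FF) before abbreviation passes run, so stray
    codepoints can't collide with reference characters."""
    out = []
    for ch in text:
        if 0xE000 <= ord(ch) <= 0xF8FF:
            out.append("\\u{:04X}".format(ord(ch)))  # literal escape
        else:
            out.append(ch)
    return "".join(out)

print(sanitize("plain \ue000 text"))  # plain \uE000 text
```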
The original version of the app, which I wrote on Tuesday, treated strings as lists of chars and simply inserted the abbreviation objects directly into the lists in place of the character sequences to be replaced. This had the desired effect, but the bookkeeping was laborious: after only a single pass you ended up with heterogeneous lists that had to be traversed, broken up into two lists (one for chars and one for objects), worked on separately, and finally recombined.
On Wednesday I rewrote the program to use single unicode characters to reference abbreviation objects. This radically simplified the logic involved, because the resulting lists were homogeneous lists of chars (mutable strings, in other words), which meant they could be acted on in place, in a single pass. And regular expressions provide a very powerful framework for doing string substitutions.
Today was my first day at Hacker School.
I learned about decorators from Erik Taubeneck.
Decorators can be seen as a way to wrap arbitrary functions. Decorating the function foo with the decorator @bar is syntactically equivalent to foo = bar(foo); that is, redefining foo to always be passed through bar.
The reason you'd want to do this: you can abstract out a lot of boilerplate that might apply to many different functions. As a bonus that boilerplate doesn't have to be in one spot; it can be at the beginning and end of the functions you want to call. For instance, if you need to run authentication checks on lots of functions in your web app, you can decorate all those functions with an authentication decorator.
Then, when you define the decorator, it takes a function as an argument and returns a function according to whatever logic you like. So, for instance, it would take your user operation, check for authentication, return the operation if the user is authenticated (and the operation would then run normally), and return a redirect function if the user is not.
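That authentication pattern can be sketched like so; the user dict and the redirect string are stand-ins for a real session object and a real HTTP redirect:

```python
from functools import wraps

def require_auth(view):
    """Run the view only if the user is authenticated; otherwise
    return a redirect. (Toy sketch: the 'user' dict stands in for a
    real session, the string for a real redirect response.)"""
    @wraps(view)
    def wrapper(user, *args, **kwargs):
        if not user.get("authenticated"):
            return "redirect: /login"
        return view(user, *args, **kwargs)
    return wrapper

@require_auth
def dashboard(user):
    return f"welcome, {user['name']}"

print(dashboard({"name": "ada", "authenticated": True}))   # welcome, ada
print(dashboard({"name": "eve", "authenticated": False}))  # redirect: /login
```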
Transcendence is the fundamental delusion of the mystic, and their only sufficient motivation.
Teaching mindfulness often seems almost impossible. Here's the basic pitch: if you maintain a non-judging awareness of the flow of thoughts and emotions, they will arise and pass smoothly, without getting caught up in thinking and self-view. The problem is that if you present this, as one almost always does, in a therapeutic context (as a remedy for the pain of thoughts and feelings), then you're nearly guaranteed to create conditions where it's exceedingly difficult to put into action.
The moment someone tells you, 'pay non-judging attention to your thoughts and feelings, and they will arise and pass,' you're awfully likely to take up that practice with the expectation/hope/intention that the thoughts and feelings in question are going to pass. You've set up an aversive relationship, where you're doing the practice in order to escape or rid yourself of experiences you don't enjoy.
It's in the interests of bosses everywhere to believe that their workers are happy.
In the workplace, I might respect your skills while not respecting your talents. Different roles have different requirements, different balances of the two. But as you ascend the hierarchy you often find the balance tipping in favor of talent.
'I don't know any of this stuff; you're the expert,' the CEO tells the IT guy, as he waits for his printer to be set up. But there's an inversion of the Hegelian Knechtschaft, a righting of the traditional dynamic: there's the message, It is not worth my time or talents to know this stuff; better I should pay you to know it for me. The slave secretly pities his master for his master's ignorance and dependency; the master pities his slave for his skill—for the degree to which his life energy has been transformed into technical ability, the crude stuff of labor.
Talent, like art, always tends to uselessness. The more of a capacity for idleness, philosophy, reflection—the more of a capacity in the aristocrat or his modern reflex, the CEO, for uselessness, the more brilliant and worthy of his station he may consider himself.
The most perfect leader is the one who has perfectly abstracted himself, until he stops being a leader and becomes a philosopher.
A la Scriptogram and Calepin, a Dropbox-powered blogging engine. Currently in closed beta. Differentiating details are sparse for now, but they do seem to tout automatic syncing.
An exchange here and on twitter reminded me of something I've wanted from the web for a long time: a home on the social web for the humanities.
I find startups and new web apps and text editors to be fun, but I'm a Neal Stephenson-style geek, not a Paul Graham-style geek. What turns my crank is linguistics, poetry, philosophy.
And I know I'm not alone! But you have to go somewhere like Language Hat to get that kind of content—and that means you're in a poorly searchable, poorly commentable, poorly exportable world of individual posts and unsorted comments beneath them.
I would be so jazzed by something more like that—even something like Hacker News! With Markdown text or links and threaded social discussion.
Part of the issue here is cultural. Reddit has subreddits devoted to many of these topics, but frankly they're either underpopulated or well staffed with not-terribly-brilliant people.
In the most basic, obvious terms, it's a novel middle ground between tweeting and blogging: through formatting and post organization it tries to encourage a series of concise mini-posts—usually about the length of a single thought—which, unlike in Twitter, are all organized into individual streams, each on an individual topic.
This is an interesting and elegant enough choice that I think all the different purposes it can be put to remain very much to be seen. In my case it seems perfectly suited to the practice of thinking aloud (this stream itself being a notable exception), that is, sending a series of individual thoughts or steps of a thought process across the transom without necessarily having my entire argument reasoned out ahead of time; indeed the format seems to lend itself just as easily to realizing halfway through a stream that one's initial premise is redundant, or malformed, or not as interesting as one originally thought. Which is part of the fun.
On the other hand the format creates new niggles and demands in the realm of contextualization and personalization. Because each user does not have just one blog, and because any user's output is going to exist in a form that's not really familiar or immediately intelligible to a reader from offsite, the design can lead to the expectation, I think, that users are all writing for each other; that is, that it's a social network, and that we are in the process of accreting a culture and vocabulary that we all share, and talking primarily for each other. Which isn't right, I don't think; my intended audience is the internet at large, and that seems to be the case for most users here.
It's hard to tell how to assist that intention. Greater personalization and theming is one obvious way; like in Marquee above, and to a lesser extent in Twitter itself, custom backgrounds and fonts are an effective way to create a 'brand' and emphasize the individual user. On the other hand, any web designer will surely lament the day that they sign away their pleasing, homogeneous aesthetic to the disarrayed whims of the mob.
Simple familiarity might be the greatest aid here; I am trying to think back to the first couple of times that I clicked in to someone's Tumblr or Twitter page, and whether—font customization be damned—I simply needed to see more and more unconnected people using the service in order to know what to expect when I clicked a link, and to not have to waste any thought on contextualizing the feed; that is, to when I was able to simply read a Twitter feed as an individual's Twitter feed, rather than another instance of this service, Twitter.
Looking back at this list, it's clear that one of the areas in which developers have really progressed from the first generations of blogging software is posting methods; the whole conceit of many of these services lies in the clever and frictionless way a user can make a new post. I'm particularly attracted to Dropbox-based services, as I retain a slight pre-web suspicion of text input boxes located in the browser, and because they also completely solve the problem of data lock-in as a bonus.
So this is an obvious area of potential growth for ThoughtStreams. Right now one can only post in a text input box located in the browser. Email-to-post is an obvious direction, though it immediately becomes complicated by the novel format. Normally one emails one's blog, and the subject becomes the title, and the body becomes the text. For any given ThoughtStream user, the blog analog is an individual stream. Leave aside that individual posts don't have titles; that would imply that there would be a new email address for every stream created in the system, and the word 'profusion' quickly comes to mind.
The alternative is that blogs (and thus destination addresses) map instead to users, subjects map to streams, and bodies map to text. But the same problem as above sticks in here: users tend to have a lot of streams. They're encouraged to have a lot of streams. As such, it's a pretty lofty technical requirement that in order to post by email the user must a) remember and b) spell correctly the precise name of the stream they'd like to post in.
Posting by Dropbox is a little more appealing to me, though that's probably because my vision of its operation involves more handwaving, that is, more hard technical work on the part of somebody else. But one could imagine a system in which the entire structure of a user's stream bed (what's the name for all the streams that belong to a given user? Stream bed? I hope so) is reflected in their Dropbox folder hierarchy: there is an Apps folder in my Dropbox, and a ThoughtStreams folder in my Apps folder, and therein one folder for each stream. If I want to create a new stream, I create a new folder or subfolder. If I want to create a new entry in a stream, I create a new plaintext file in the folder of my choosing. This is appealing; it would doubtless also require significant refactoring.
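The folder-to-stream mapping imagined here is easy to sketch. Everything below (the paths, the .txt convention, the function name) is hypothetical:

```python
import os

def read_stream_bed(root):
    """Walk a folder tree, treating each directory as a stream and
    each plaintext file in it as an entry. (Hypothetical sketch of
    the Dropbox-syncing idea; no real service API is involved.)"""
    bed = {}
    for dirpath, dirnames, filenames in os.walk(root):
        stream = os.path.relpath(dirpath, root)
        if stream == ".":
            continue  # skip the root folder itself
        bed[stream] = sorted(f for f in filenames if f.endswith(".txt"))
    return bed

# e.g. read_stream_bed(os.path.expanduser("~/Dropbox/Apps/ThoughtStreams"))
```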