The Indigestible Missives From the Reality-Based World

Exposed: Ben Franklin. Also, Franklin’s real schedule
Mon, 12 Mar 2012 20:08:22 +0000 · Warren

You might have seen this before.

It’s supposed to be Ben Franklin’s daily schedule, and it’s used all the time by can-do types who want to make you feel inadequate, because look at all the stuff Ben Franklin was able to get done before you’ve even shaved that three-day growth or changed out of your PJs you underachieving no-good slacker slug.

Twaddle, says I, twaddle.

What all these motivational types lose sight of (one of the things they lose sight of) is that Ben Franklin was a famous womanizer. When he was made US Ambassador to France,* he managed to avail himself of the relaxed French sexual mores, and did consort lo most lustily with anyone within eyesight.

Well, I’ve uncovered Franklin’s real daily schedule. It should offer some insight into how he was able to accomplish so much in one day, and give us all something to strive toward.

Next time some busybody wants to motivate you to greater things, whip out your copy of Franklin’s timetable, and watch their fervor vanish. You’re welcome.


* He really was.

The medium is not the message
Tue, 31 Jan 2012 23:19:28 +0000 · Warren

I wonder what exactly Jonathan Franzen is thinking. I’ve read his The Corrections, and wasn’t that wowed by it, which doesn’t really mean much except that the one book of his I read didn’t resonate with me the way his books seem to resonate with many other readers.

But then he tells an audience at a festival that ebook readers are potentially damaging to society because of the impermanence of the words they display on the screen (via CSM):

“That kind of radical contingency is not compatible with a system of justice or responsible self-government.”

He seems to be suggesting that reading words which are not printed on paper somehow makes the experience of reading less real. Furthermore, it seems this unreality is so ephemeral in its nature that society itself will be destabilized as a result.

To conflate the behavior of an electronic device with the future viability of a society seems a little excessive, doesn’t it?

We’ve had the intertubes for better than 20 years now, and in that time we’ve seen (probably) petabytes of information produced on it — much of which any given reader would write off as noise. That’s easy to demonstrate: how much of the internet do you not spend time paying attention to? Most of it.

(That doesn’t mean that the stuff you ignore is being ignored by everyone else, of course; it just means that your areas of interest don’t intersect with everything that’s available to you. This is no more a problem than the fact that there are probably parts of your local library or bookstore whose shelves you’ve hardly visited, if ever. Time and attention are finite, and interests are subjective, after all.)

The point is that this ephemerality of information has not destabilized society just yet. Are things different today than they were thirty years ago? Of a certainty. Are things worse? Some are. Are things better? Some definitely are.

For instance, it’s now possible for young discontents to hook up with terrorist organizations, and become fully radicalized. That’s a problem. It’s also possible for gay kids to see messages from people who’ve struggled with the same things they’re facing, and gain encouragement from those messages. That’s a good thing.

Almost all of what you can get online is ephemeral, in the same way that a book you load into an e-reader is ephemeral. (Strictly speaking, printed books are ephemeral as well, though much more slowly.) What matters, though, is not the permanence of the medium; what matters is the impression that is made by the message contained in the medium.

Content, in other words, is more important than media.

What you read has much more of an effect on your mind than the thing you’re reading it from, be it e-ink, an OLED, phosphors, or pigment on parchment and its myriad analogues. A quality message will remain with the reader whether it’s been acquired from a Kindle or a sheet of pulp.

Here’s another objection he has:

“A screen always feels like we could delete that, change that, move it around. So for a literature-crazed person like me, it’s just not permanent enough.”

I wonder if people living in medieval times objected to movable type using similar arguments.

That Franzen seems to think media is at least as important as content suggests something, though I can’t tell whether it’s about his fear of his own ephemerality; or his fear that his work would vanish were it not for stacks of printed books with his name on the cover; or fear that he’s really not that great a writer in the first place, so his books are barely worth reading to begin with; or just a total failure to comprehend technology more modern than an Underwood.

Are we safe yet?
Tue, 31 Jan 2012 22:39:59 +0000 · Warren

This from the Beeb: a tourist named Leigh Van Bryan was barred from entry to the US because of something he’d posted on Twitter.

The 26-year-old bar manager wrote a message to a friend on the micro-blogging service, saying: “Free this week, for quick gossip/prep before I go and destroy America.”


In another tweet, Mr Bryan made reference to comedy show Family Guy saying that he would be in LA in three weeks, annoying people “and diggin’ Marilyn Monroe up”.

The TSA interrogated Leigh Van Bryan for five hours. They then responded (emphasis mine):

“Mr Bryan confirmed that he had posted on his Tweeter website account that he was coming to the United States to dig up the grave of Marilyn Monroe. Also on his tweeter account Mr Bryan posted he was coming to destroy America.”

Apparently it’s not just a sense of humor that the TSA lacks, but any sense of perspective or proportion as well.

Kill all programs before shutting down OSX 10.7 (Lion)
Fri, 26 Aug 2011 17:49:43 +0000 · Warren

If you’re like me — and I know you are — you sucked up a copy of OSX Lion as soon as it was on the App Store, and were immediately infuriated by the checkbox labelled “Reopen windows when logging back in”.

Why did you find it infuriating, as I do? Because you have to uncheck it every. bloody. damn. time you shut down or reboot. If you don’t, then whatever programs you had running when you shut down will “helpfully” be loaded right the hell back into RAM when you boot again.

Apparently, someone at Apple made the decision that we want our programs to reload every time we reboot, and to hell with what we think about it — because there is no way to override this checkbox setting.

There is no preference to change it.

There is no way to make it go away.

If you forget to click that checkbox on shutdown, your programs will all reload the next time you boot.

Those of us who use silicon pigs such as Adobe’s suite find this setting not merely irritating but positively infuriating, since it adds several minutes to every boot.

There have been several solutions offered to deal with this. I check periodically to see if there’s been progress made. The last time I looked, I stumbled across a series of AppleScripts written by Victor Andreoni that essentially send tell commands to the Finder, ordering a shutdown and clicking the checkbox for you.

In reading his discussion of his methods, I saw that he’d found a defaults key, TALLogoutSavesState, that apparently controls whether your programs reload on boot or not. Unfortunately, changing that setting to 0 is not persistent; it’s rewritten to 1 on each boot. What that means is that it’s a short-lived pleasure; next time you boot your system, yep, the goddamned programs load up again.

His AppleScript solution is suitable, I think — but there’s a principle in play here, and I’ll be damned if I let my Mac tell me what to do. So in Googling for more information, I learned a couple of other things, fired up Automator, and did this.

Description and a link to zipped files follow.

The first item in the Automator workflow is self-explanatory; you want to exit your programs gracefully before shutting down. You’re not out to kill your machine; you just want it to behave like it used to.

The second item is a pair of calls to a shell script. Basically, this is how you issue direct commands to your OSX install without having to use the GUI. The first call deletes (rm) a file in your home directory, under ~/Library/Preferences/ByHost. (By default, your Library folder is hidden under OS X 10.7, probably to keep you from touching the bare wiring.)

The file that’s being removed is a preference list which stores information on what programs are running at the time the command is issued. Each program has an index entry, as well as notations for whatever document windows might be loaded. The * is a shell wildcard (glob) character, necessary because between the words “” and “.plist” there is a hexadecimal string that varies from system to system, and probably from boot to boot. The * wildcard basically says “look for anything with this stuff at the beginning and end, and anything at all in between.”
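To see how that wildcard behaves, here’s a small, portable sketch you can run anywhere. It builds a scratch directory standing in for ~/Library/Preferences/ByHost and uses a made-up prefix (com.example.loginwindow — a placeholder, since the real file name varies and isn’t reproduced above), then deletes the stand-in files with the same kind of glob the workflow uses:

```shell
#!/bin/sh
# Portable demonstration of the glob-deletion step. The real target lives
# in ~/Library/Preferences/ByHost; here we use a scratch directory and a
# placeholder prefix (com.example.loginwindow) so this is safe to run.
SANDBOX="$(mktemp -d)"

# Two stand-ins for the saved-state plist, with different fake hex
# suffixes, mimicking the string that varies from system to system.
touch "$SANDBOX/com.example.loginwindow.0A1B2C.plist"
touch "$SANDBOX/com.example.loginwindow.3D4E5F.plist"

# The glob matches both files, whatever sits in the variable middle part.
rm -f "$SANDBOX"/com.example.loginwindow.*.plist

# Count what's left; tr strips the padding some wc builds emit.
REMAINING="$(ls -A "$SANDBOX" | wc -l | tr -d ' ')"
echo "files remaining: $REMAINING"

rmdir "$SANDBOX"
```

In the real workflow, the rm line points at the ByHost folder instead of a sandbox; the defaults call that accompanies it is macOS-specific and has no portable equivalent.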

Deleting this file will purge the system’s list of whatever might have been running when you run the workflow. It does not affect the login items you might have set for yourself, either by right-clicking their icons in the Dock and selecting Options > Open at Login, or by specifying them as login items under your Users & Groups control panel.

The second line tells the system to set the TALLogoutSavesState default to 0, effectively disabling the save-state behavior entirely before shutdown. I don’t know for certain that it’s necessary to do this, but I figure overkill is better than annoyance.

Finally, there’s an AppleScript that tells the Finder to shut down the system. This happens immediately after all the programs have exited, without that dialog box coming up to ask if you’re sure. It just shuts your system down, without further discussion. (Essentially, this is the same as how it used to be when you’d hold down the Option key and select Shut Down from the Finder menu.)
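For reference, that final step can also be driven from a plain shell via osascript. This sketch is mine, not the workflow’s exact script, and it only prints the command rather than running it, so it’s safe to execute anywhere; drop the echo to make it live (macOS only, and it will shut the machine down at once):

```shell
#!/bin/sh
# Sketch: the workflow's final AppleScript step, expressed as a shell
# call. We only print the command so this is harmless to run; executing
# it for real (macOS only) shuts the machine down with no confirmation.
SHUTDOWN_CMD="osascript -e 'tell application \"Finder\" to shut down'"
echo "$SHUTDOWN_CMD"
```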

I’ve saved the workflow as both a plain workflow script, so you can see what it does for yourself, and as an application if you decide you trust me. Put it on your Dock and use it to shut your Mac down until Apple comes to its senses and realizes that not all of us have SSDs, and that sometimes we have a damned good reason for not wanting all previously running programs to load the next time we boot.

Download the package here. It’s 280 K. Go in peace.

Congress declares victory over constituents
Tue, 02 Aug 2011 16:06:49 +0000 · Warren

In a bold step forward for decreased government regulation of government, Congress was able to declare victory over the vast majority of the American electorate Tuesday. “This is an important day for all legislators,” stated Speaker of the House John Boehner (R-OH). “For years we’ve been hampered by the demands of unreasonable taxpayers, but with this latest vote, I think we’ve been able to establish once and for all who’s really in charge here.” Rep. Boehner then paused to weep.

“I’m deeply satisfied with this outcome,” slurred Senate minority leader Mitch McConnell (R-KY). “In our vote to extend the debt ceiling and cut spending, we’ve been able to protect our real interests, and for the first time since I was elected, I know we don’t have to fear being voted out of office as a result.” Senator McConnell was referring to the refusal of the GOP to close the so-called Bush tax breaks on the wealthiest 10% of Americans, despite the support of an estimated 70% of the American public for doing so.

“We’ll just keep on getting re-elected now, since money is speech and protected,” he added. “Money talks, and the bullshit walks, so we don’t have to listen to any of y’all’s bullshit any more.”

Asked whether he was concerned about the effect that cutting benefit programs — the so-called “entitlements” of Social Security, Medicare, and such programs as unemployment assistance or food stamps — might have on the American people, McConnell said, “What American people? Oh, you mean the wage slaves and cannon fodder? No, I don’t lose any sleep over them at all.”

Boehner echoed McConnell’s lack of sentiment. “If the American people really mattered to any of us in the first place, do you think we’d have put up such a fight over Obamacare while we have free healthcare for life? Do you imagine that there’d be a nine-percent unemployment rate while we take home six-figure salaries, work three days a week, and have month-long recesses? No, we never really gave a shit about you, and now, we don’t even have to pretend any more.”

For many Republicans, the greatest moment of the budget standoff occurred when Rep. Gabrielle Giffords (D-AZ) arrived on the floor of the House in time to cast her vote in favor of the GOP-backed plan. Her show of support for the measure opened the way for other House Democrats to cast their votes in support as well.

Giffords was critically injured in January when gunman Jared Loughner shot her in the head while at a meeting with supporters in Tucson.

Asked whether she was conscious of any trace of irony in casting her vote after receiving millions of dollars in medical care with the taxpayers footing the bill — while hundreds of thousands of those same Americans would have been unable to afford the healthcare she was given, and thus survive similar injuries, had they received them — Giffords refused to comment.

“See?” Boehner said. “All it really takes to get anyone to vote for a GOP plan is traumatic brain injury.”

He then went out back to light a victory cigarette. “Someday I’ll be getting a lung transplant, and you assholes will be paying for it,” he said. “And there isn’t a goddamned thing anyone can do about it. Enjoy your ramen soup and Fox television broadcasts. Fuck y’all.”

President Obama, who is expected to sign the bill into law once it passes the Senate — in a final, crushing blow to the liberals and moderates who led him to victory in 2008 — could not be reached for comment.

Why we don’t let cartoonists write policy decisions
Thu, 21 Jul 2011 18:15:48 +0000 · Warren

About one in twenty cartoons by Michael Ramirez is actually worth a read; most of the time he produces far-right nitwittery that, rather than providing balance or nuance to my openly socialistic and lefty views, simply registers as cognitive noise. It’s unfortunate, too, because Ramirez is one hell of a skilled illustrator. He clearly puts a lot of time and effort into drawing his single-panel gibes, but the drawing seems to be where the effort ends.

Case in point is this simple fallacy. See if you can spot the problem (and in this case it has nothing to do with his politics):

What Ramirez seems to be missing here is that the Y2K bug, the H1N1 outbreak, and the I-405 closure didn’t turn into disasters precisely because there was a hell of a lot of work done to prevent them in the first place.

We’ll start with Y2K. Indubitably it was true that modern software was capable of handling a double-nought in the year field; however, quite a lot of embedded and entrenched systems had not been modernized for years prior to 2000. Those systems were, in fact, quite vulnerable to year-related errors, and it was only the work of a large team of engineers that prevented a widespread failure of these systems. That nothing bad happened is evidence of their success, not the unreality of the problem.

H1N1 has a similar story. There was in fact a threat to human populations from the 2009 swine-origin strain of influenza; again, it was a campaign of education and immunization which forestalled widespread outbreaks — not a Chicken Little (so to speak) overstatement of the threat.

Finally, the I-405 modifications … are recent enough history that I don’t think I need to go into what happened there. Public awareness campaigns obviously worked.

Why does this even matter? Because as soon as we begin to take as fact that averted threats were nothing we needed to worry about in the first place, we’re apt to become complacent, and downplay the reality of other perceived threats — possibly to our ruin.

If we want to mock possible threats, it’s probably safer to mock the idea of surgically implanted bombs in terrorists (or the sexual assaults performed daily by TSA) than to make fun of measured, sensible, nuanced responses to situations that are generally agreed to be valid problems.

Can you think of anything else that might be a problem which is being downplayed (by right-leaning interests, as it happens) as insignificant despite overwhelming evidence?

Who is I?
Wed, 01 Jun 2011 01:35:21 +0000 · Warren

Physics is a field that continues to surprise. In the early 1900s the belief was that it was effectively finished — apart from a few minor details, there wasn’t anything new left to discover. Those few minor details ended up being the set of insights Einstein had which revolutionized our understanding of energy, matter, space, and time.

Alongside relativity, Einstein’s early work on the quantum behavior of light turned up implications that didn’t make sense to him; in fact they so offended his sense of order that he chose to argue against them rather than embrace them. Later physicists, following up on that early quantum work, arrived at indeterminacy, which essentially means that we cannot simultaneously know a particle’s momentum and its position with arbitrary precision. The physics of Quantum Mechanics developed from there.

More recently, the LHC in Europe may have found traces of a subatomic particle which might or might not tie together current theories in physics; or it could be a statistical anomaly. And elsewhere, developments continue in teleportation.

Not the Star Trek version of it. So far it’s only subatomic particles that have been teleported, but it is happening. Essentially what happens is a particle’s state is analyzed, during which the particle is disassembled, after which it gets reassembled on the other side of the room. That it’s the same particle is confirmed by its quantum state — a sort of fingerprint. Eventually, we can imagine the same happening for larger items such as atoms, marshmallows, missing socks, and possibly even living entities such as goldfish or people.

So suppose you step into a teleporter one day, and zap yourself to the other side of the planet, where you spend some time shopping and eating interesting foods. When you’re finished you teleport yourself back home. As you step out of the booth, you’re accosted by a wild-eyed person who insists that you’re no longer you, that you’re actually dead.

On the face of it, that’s ludicrous. The flavor of your interesting meal still remains on your tongue and your stomach is still full; your arms are loaded with the goodies you bought on your long-distance journey; you’re upright, respiring, and capable of becoming irritated by strangers. So how can you be dead?

To understand this wild assertion, let’s take another look at teleportation. What’s happened to you as you activate the booth is that your entire material being is converted to energy, transmitted elsewhere, and then re-condensed from energy into matter. Well, converting something to energy is precisely the same as disintegrating it. In order for the teleporter to work, it has to actually take you apart on the subatomic level. That sounds pretty lethal, doesn’t it?

Yet, despite having been torn apart atom by atom, here you are, thinking, breathing, digesting, with a complete set of memories going back in time as far as your memories always have.

Suppose there’s a malfunction in the teleporter, and after you’re turned into the energy pattern at home, you step out at your destination — but a second version of you ends up being duplicated back home. That version of you expected to appear on the other side of the world, has not, and is furious. Meanwhile, you at your destination go off on your shopping trip.

By the time you get back home, the other you has had sufficiently different experiences that he’s separate, his own individual with his own recent memories — yet by any biometric measure you’re identical. Even your fingerprints are the same.

What happens next? How do you resolve having a copy of yourself? Which one of you steps into the teleporter to be re-absorbed — and is that even an option?

Suppose instead you’re killed in a tragic accident. As it happens, teleporters retain a copy of energy patterns they’ve processed, just in case something goes wrong on the receiving end, such as a blackout. The pattern can then be reintegrated at the departure point. Your grieving family remembers the trip you took last week, goes to the teleportation center, and asks that your pattern be retrieved from its computer’s storage. A moment later, a reassembled you appears in the teleportation booth with no memory of having been in storage, and with no memory of a fatal accident.

Is it you?

Does a teleporter kill you, or does it transmit your essence in some way, or does it make copies? Were you killed when you were disintegrated, or was the reassembly a kind of re-animation? If you were killed, how can there be a continuity of memory? Everything you know asserts its existence; you keep thinking, I am alive.

If this is so, what is the seat of this thing called I? Is it an entity, a process, something separate from your body? If it’s separate, how can it be teleported along with the rest?

These are goofy science fantasy scenarios, but thinking about them can lead to some interesting results. Those who believe in souls might reject the entire idea of teleportation. Or they might insist that duplication of people is impossible, paradoxical. Or that the duplicates would be nonviable, incapable of functioning because they lack the animus necessary to survive. Or that the duplicates are soulless monsters, possibly golems or even bodies inhabited by demons.

If those concerns aren’t yours, we still have the issue of duplication — accidental or otherwise — and what it might mean. And at the core of it all, we must return to the question of what happens to you — or your consciousness — when your body is taken completely apart.

My thought on this is that as long as we’re thinking of our bodies and minds as being monolithic, contiguous entities, we’ll find ourselves baffled by these questions. However, there are other ways of looking at ourselves, and not surprisingly Buddhism offers one of these alternate ways.

In Buddhist psychology, we’re not composed of a body/mind monad, nor a simple body-mind duality. We’re composed instead of five interacting aggregates, broken down into body, sensation, perception, conceptualization, and consciousness. Out of those five aggregates comes an emergent property, awareness — or mind.

These five things are called aggregates because they are not discrete, irreducible elements; each is itself composed of other things. To see how this may be so, consider your body. What it’s made of is rather simple, chemically — carbon, oxygen, hydrogen, iron, calcium, and a few other elements — derived from organic sources such as food, air, and water. Yet this aggregate of constantly-changing individual atoms nevertheless appears to retain integrity to a high degree. The same is so of the other aggregates.

Awareness (mind) is an emergent property resulting from the interaction of these aggregates, and appears to be dependent on them, since if you remove any one of those aggregates, awareness vanishes. Yet, if the aggregates are little more than processes that self-perpetuate from interaction with the world around them, and awareness is itself dependent on those aggregates for its existence, what does this suggest about awareness — and about the concept of I, which seems to be central to this awareness?

If the body, in other words, is both a body and a wave of atoms flowing through space, isn’t it valid to see the body as a pattern that continually remakes itself (almost like slow teleportation)? If awareness functions in the same way, how does the I actually function? Why is it not aware of this constant self-remaking? Does it have a blind spot, or does this suggest something else?

If it’s the responsibility of I to stitch together the variegated inputs from the five aggregates into a consistent, apparently seamless narrative, how would the I benefit from seeing where it’s discontiguous? How could it even be made aware of its discontiguous nature in the first place?

Finally, suppose the I actually is discontiguous, and becomes aware of this fact. What sort of effect would that have on the awareness which possesses the concept of I?

The next time you’re planning to teleport somewhere, this might be worth considering. Or, if teleportation isn’t in your agenda, consider instead general anesthesia or even deep sleep. Both are states that attenuate consciousness to such a degree that awareness vanishes — yet we’re able to pick up where we left off, just as soon as we waken. How is this possible, if what we perceive as I really functions as we imagine it does?

Not that you should lose any sleep over it.

Footprints of a gigantic lizard
Tue, 12 Apr 2011 02:40:09 +0000 · Warren

I can’t say with anything like certainty that I know what happens to us when we die. To some extent I think it might be a bit like the reverse of what happened at birth, only a bit more rapidly and drastically.

Of course, what happens at birth is itself an interesting question; after all, fetuses are viable before birth, capable of living without the womb. You have to go back a number of weeks to find a fetus that can’t survive on its own. What’s intriguing is that you don’t get signs of coherent awareness, of a stringing together of consciousness into the narrative that calls itself I, until well after the baby has come into the world.

Death, on the other hand, can be abrupt. It can just as easily be a gradual process, one that happens slowly enough for everyone to get used to the idea. I have a feeling that gradual deaths are easier for the loved ones to deal with.

From another perspective, though, we’re really dying all the time, in the sense that the person I was a minute ago — or an hour or a day ago — is not the same as the person that I am now. Even relatively minor events have changed my perspective, so it can be argued that the past me is dead in one sense. However, there is history, there is a continuity, there is that continuing motion of consciousness whose entire job is to join together discrete, disparate events and sensations into a beaded string of apparent wholeness.

There’s a reason for all this philosophy in this post.

If the emergence of personality is an ongoing process, logically it can be traced backward until you start unweaving the roots of that personality. I’ve done that with a couple of things in myself, specifically my intellectual curiosity and my sense of humor. There are two other things I can trace pretty readily. One is my interest in the German language and culture; the other is my love of Godzilla movies.

I went to Berlin (still West Berlin, back then) when I was five years old. I don’t recall a whole heck of a lot from that trip, as you might expect, though I was struck by the odd sounds of European sirens, the cramped bustle of the U-Bahn (Untergrundbahn, subway), the old world architecture, and the way that commercial products were different from their American counterparts. There’s a lot more, but it doesn’t bear repeating here. The point is that in later years these experiences and impressions would serve to fuel my interest in taking German language classes, since it is the language of my ancestry, at least on the distaff side. We went there to visit my maternal great-grandparents.

Godzilla movies, on the other hand, descended from my paternal lineage. They’re terrible, of course; not only are the dubbed voices famously awkward and hopelessly mismatched, but the effects are truly bad. The stories are thinly written morality plays cautioning humans against hubris, and usually have a dash of anti-radiation panic thrown in for seasoning. And the monsters … well. Just guys in rubber suits. But that’s the point. Imagine, just imagine, that you report for work each day, put on a goofy (if uncomfortable) costume, and spend the morning stomping the crap out of intricately detailed balsa wood miniature buildings. If that’s not a working definition of joy, I’m not sure what is.

It was my maternal grandmother who led me to Germany and a greater appreciation of my heritage there, and it was my paternal grandmother who first subjected me to Godzilla. (That was in a movie theater in Needles, California when I was eight or so; she sat beside me and cheered “Go get ’im, Godzilla”, laughing as loudly as I did, having a hell of a good time.) These discrete aspects of my personality did not exist before those two women brought them into my life, and those two women are dead now. They died within a month of each other, this year.

In neither case was it a surprise. There had been time for everyone to be aware of what was happening, and to prepare for it as best we could. In this regard, I believe we were fortunate. I don’t imagine I was better prepared than anyone else.

Buddhism has a couple of different ways of looking at death — at least, the canonical versions of Buddhism do. One view holds that there’s something remarkably akin to reincarnation. This view is prevalent with Tibetan Buddhist variants in particular. The other view uses the term rebirth instead of reincarnation. The difference is subtle; in rebirth, personality is not passed into another body, just tendencies or patterns of behavior. With reincarnation, something akin to a soul is passed along instead.

Both of these views owe their existence at least in part to where Siddhartha Gautama spent his life: the Ganges plain, a region steeped in Hinduism, which teaches the doctrine of reincarnation. However, another key teaching in Buddhism is impermanence, which logically must preclude the notion of a soul. So Buddhists might have to tread a very careful line, balancing the idea of rebirth or reincarnation against a philosophy that essentially undermines the very idea.

I don’t believe in either reincarnation or rebirth. I don’t see any way by which either could be possible. So from my perspective, my grandmothers are not merely dead; they aren’t even around in essence any more, being born into another body; nor are they sitting on a cloud, harps in hands, knitting their brows and tsk-tsking at how thoroughly wrong I am. The standing waves of their lives have troughed and ebbed, leaving behind wet sand and slowly fading footprints.

Nonetheless, they persist in other ways, through my DNA, and through the tendencies and preferences they passed to me by example and instruction. This is, ultimately, all that any of us can leave behind. The extemporaneous German language instruction from one, the love for gooey butter cake from the other — these are moments we’d shared, events that continue to ripple through the froth of time into the present through my consciousness, impressions of which I pass along to others here. Eventually my own standing wave will break on the shore of time as well, and still others will recall being affected by some event or other from my own existence. This is as close to rebirth as I know how to come.

Despite my nonbelief in gods or souls, though, I find myself at ease with these deaths, recent though they both are. That’s because I was fortunate enough to encounter a philosophy that allowed me to understand something important about life — or more accurately, something about the way I perceive it. That shift in perception, encapsulated in the second noble truth*, was left behind some 2500 years ago, and continues to alter the human sphere of experience today.

And rather than fill me with a sense of hopelessness or defeat, I find myself experiencing things more keenly and more vitally now, particularly when I ponder the big questions — such as how a million-Mark banknote came into my possession; or how I continue to be filled with glee every time Godzilla flattens another pagoda.

It is my wish that we may all truly understand how ephemeral human existence is, and comport ourselves appropriately in that understanding.

* Roughly, that dissatisfaction, unhappiness and suffering are rooted in the belief that anything is eternal, permanent, or unchanging.

I was primed for Buddhism — all unawares — by the same grandmother who gave me Godzilla, by the way. Spending part of my childhood summers visiting her and my grandfather, I would watch the old TV series Kung Fu. Many years later, watching the show again, I was struck by how completely its blatantly Buddhist evangelism had saturated my consciousness. The lessons from that series are still relevant, despite the way David Carradine broke his own wave on the shore.

]]> 0
Paper airplane template Fri, 08 Apr 2011 18:49:49 +0000 Warren Yes, that’s right. This is one of the things I actually do for a living.

The backstory is actually valid. Our occupational health department offers, among other things, FAA flight physicals. Many of our local pilots seem not to know this, though, and are going to other cities to get the physicals done.

So to promote our much-more-convenient local services, I decided to take the term “flyer” literally and create an advertising piece that can be folded into a working paper airplane.

In order to even do that, though, I had to work up a two-sided template that has the folds marked and the inner, outer, upper, and lower surfaces mapped.

Here are little thumbnails of the results…

The white areas are contiguous places where graphics and copy can be placed without affecting the finished, folded look of the plane. The purple areas are contiguous inner surfaces when the plane is folded; the grey areas are the upper wing surfaces; and the blue areas are the underside of the plane.

…And here’s a zip file (2.5 MB) containing the templates as PDF and Illustrator files.

I saved the Illustrator files back to version CS2. These are formatted to print full-page on US letter paper, 8.5 x 11 inches.

Have fun!

]]> 0
Inverting black and white colors in InDesign CS5 Tue, 05 Apr 2011 23:25:39 +0000 Warren One of the truisms about programmers is that they prefer not to have to do the same thing twice. That’s why the good ones tend to start keeping code libraries, and the really good ones start doing subtle and elegant things with objects.

Coming from a programming-for-clients background, I eventually ended up in graphics (which was what I’d started doing long before anything else), at least partly because when you’re working on something for a client, you’re never really finished. You’re always doing the same thing twice — and considerably more often than that as well.

So it is that I don’t particularly like having to do the same design twice, nor do I like sloppy or inelegant solutions.

Some time back I did a design for our local charity race. That design works pretty well on shirts and in smaller scales, but on a poster it’s basically a big field of white without a lot to bring the subjects into prominence. So what I’d done before (last year) was invert the colors in the art, transposing black with white and vice versa.

This is trivial to do in Illustrator, but since I’m using InDesign as my layout tool, the issue I start bumping into is having to keep two different versions of the art around — the standard black-over-white, and the inverted version.

This is not good, for a couple of reasons.

1. If I make changes in one, for whatever reason, they won’t necessarily get reflected in the other.

2. Having more than one copy of a working file is pretty damn sloppy.

Another problem is that InDesign doesn’t easily let you invert colors in placed art, unless it’s a TIFF. (I have no idea why, but it’s apparently been that way forever.) There’s no Invert option in the ink effects choices, and Difference and Exclusion are both inadequate for truly inverting black and white. All you end up with is something black sitting atop something else that’s charcoal grey. Hardly adequate contrast, and definitely not an inversion.

Thus presented with a challenge (remember, I didn’t want to have to invert the colors in Illustrator, though I could have in less than 60 seconds), I started messing about. This is the image I began with. Note that the figures are black on a white field, as originally created in Illustrator.

My little journey of discovery wasn’t as arduous as I’d feared. All I needed to do in the end was create two additional graphics — both InDesign shapes — and place one below the art, the other atop it. To make it obvious what I’ve done, I’ve added some layers to the image, labelled foreground — white, art, and background — black. The art is, as you’d expect, on the art layer.

On the background layer, I created a black rectangle with no ink effects. The art is placed on top of that in its own layer, with its ink effect set to Difference at 100% opacity. That gave me a black-on-charcoal effect, which obviously is not where I want to be. Click the screenshot here to see what I mean.

To rectify this, I placed a white (Paper) filled rectangle above the art, again on its own layer to make it clearer how things are stacked. This, too, is set to Difference ink at 100% opacity. Click the final screenshot here to see the results.

The black and white values in the art are inverted, without my having to alter the source file in any way.
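The arithmetic behind this stacking trick is easy to sketch. Assuming Difference works the way it does in most graphics tools — the absolute difference of the two values, per channel, with 0.0 as black and 1.0 as white — a black rectangle below the art passes the art through unchanged, and the white rectangle on top then flips it:

```python
# A minimal sketch of the blend math, assuming InDesign's Difference mode
# computes abs(top - bottom) per channel, with values normalized so that
# 0.0 = black ink and 1.0 = white (Paper).

def difference(top: float, bottom: float) -> float:
    """Difference blend mode: absolute difference of the two values."""
    return abs(top - bottom)

def invert_via_stack(art: float) -> float:
    """Black rectangle below the art, white rectangle above, both 100% opacity."""
    background = 0.0                      # black rectangle, no blend applied
    step1 = difference(art, background)   # art in Difference mode: |art - 0| = art
    step2 = difference(1.0, step1)        # Paper rectangle in Difference: |1 - art|
    return step2

for value in (0.0, 1.0, 0.25):
    print(value, "->", invert_via_stack(value))
# 0.0 -> 1.0  (black becomes white)
# 1.0 -> 0.0  (white becomes black)
# 0.25 -> 0.75 (greys swap around the midpoint)
```

The function names here are just for illustration; the point is that the bottom black rectangle gives the top Difference pass a known operand, which is exactly why the art alone over an arbitrary background came out charcoal instead of inverted.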

Direct export to PDF works with the final version, as does output to EPS and then distilling. The “inversion” to black is less than perfect — the black is actually a dark grey, though it looks better in a direct PDF export than in something passed through Distiller — but for many uses, I think this is a workable alternative to editing your source art to invert black and white in InDesign CS5.

]]> 0