We, the Despots

This morning, Monica Lewinsky’s name appeared in my Twitter feed. My first thought was to tweet a cheeky one-liner—referencing obvious (and tired) subject matter.

I then went on to read about Lewinsky. She recounted the experience of having her reputation destroyed on the internet. She also described the pain she felt as a result of the public harassment she endured. Since then, she’s made ending cyberbullying her mission.

I felt embarrassed.

Who was I to judge this person? What right do I have to mock another’s personal matters? Why would I trade my decency, just for a quick gag? More importantly: Am I the kind of person who’ll reduce another human being to a crass punch line?

If I’d acted on that dumb impulse, I wouldn’t have been alone. Tweets directed at her ran the gamut from, “Are you available for bachelor parties?” to, “I have a cigar with your name on it……” and “Let me tell u Moni, not all 21 year old suck the presidents dick.” (The sort of retorts one hopes will surface when these folks apply for jobs.)

A few hours passed, and Renee Zellweger’s name was trending. Turns out, the actress looks quite different than before—as a result of plastic surgery.

The public barbs that followed ranged from the cruel to the outright vicious. One compared the actress to Lord of the Rings’ Gollum. Another made an eCard that read, “May your Halloween costume be as shocking as Renee Zellweger’s new face.” And everyone from individuals to the “news” media was quick to take a kick at the actress.

A handful spoke to the insidious nature of Hollywood. They noted how it glorifies young actresses only to discard them upon reaching middle age (and how this effectively forces these artists to undergo cosmetic surgery). Fewer still acknowledged the systemic nature of sexism in media. I wonder if, instead of making jokes, we should ask ourselves some questions. For example, why do we think it’s acceptable to judge, critique, and mock certain women, just because they have public personas?

Perhaps these spiteful leanings were always with us—and it’s just our tools that have changed. The internet, for example, isn’t just a set of technologies; it’s an amplifier. It takes our culture’s bravest moments, and highlights them for everyone to see. It can galvanize the voices of a silenced few, and raise them up for all to hear.

The internet also amplifies our weaknesses, as a people. My examples from this morning are just a couple of the most recent ones. Daily, we’re fed niblets of gossip that we collectively ravage. We extract every last bit of amusement from these, no matter the pain we inflict. Our appetites debase our morals and sensibilities.

But—we can be the change.

You, me, every one of us: We can do better. We can think critically. We can challenge ideas. We can discuss issues. We can have heated debates, in which we say what we believe. We can speak with conviction, bias, or even ignorance. It’s in this discussion that we come to understand.

However, what we mustn’t do is allow our weakest instincts to take hold. While any notion should be open to debate, we must preserve the rights of the individual. Because when we take momentary pleasure in others’ misfortunes, we are all made lesser.

Apple Doesn’t Design for Yesterday

Last night, I installed OS X Yosemite. After the marathon-length download, I finally saw it in action. My initial reaction wasn’t unlike that of many others. I’ll sum it up with the phrase, “This got hit by the ugly stick.”

Now, before you go all fanboi on me, please allow me a moment to explain my reaction. First off, it’s OK if I’m not immediately wowed by the updated GUI. Change works this way. Within a few days I’ll likely grow accustomed to this very flat, very Helvetica environment. This was my experience when iOS was flattened. Although it seemed primitive at first, after a few weeks it felt fine—and its predecessors looked clumsy.

The biggest point of discomfort I have with the new OS X relates to type. Helvetica sets wide and isn’t always well-suited to screens. These shortcomings are glaring in Yosemite. I need to expand Finder window columns so they accommodate the girth of this type family; similarly, type in the menu bar looks crowded and soft. Admittedly, these are First World Problems. That said, I’m not complaining so much as I’m observing.

Apple’s decision to make a wholesale shift from Lucida Grande to Helvetica defies my expectations. Criticize the company as much as you’d like, but it treats user experience with reverence. So, this leaves me wondering: What possible reason is there for this shift? Why make a change that impedes legibility, requires more screen space, and makes the GUI appear fuzzy?

The answer: Tomorrow.

Before I elaborate on this point, though, let me discuss yesterday. Microsoft’s approach with Windows, and backward compatibility in general, is commendable. Users can install new versions of this OS on old machines, sometimes built on a mishmash of components, and still have it work well. This is a remarkable feat of engineering. It also comes with limitations—as it forces Microsoft to operate in the past.

The people at Apple don’t share this focus on interoperability or legacy. They restrict hardware options, so they can build around a smaller number of specs. Old hardware is often left behind (turn on a first-generation iPad, and witness the sluggishness). Meanwhile, dying conventions are proactively euthanized.

When Macs no longer shipped with floppy drives, many felt baffled. The same thing happened when an optical (CD/DVD) drive no longer came standard. I probably don’t need to remind you how weird it seemed for the iPhone to not have a physical keyboard. Apple continues to remove seemingly necessary items from its products and line-up.

In spite of all the grumbling, I don’t recall many such changes that we didn’t later come to see as the right choice. Floppy disks were too small. The cloud made physical media (CDs and DVDs) unnecessary. Better touch screens allowed a more efficient means of input, which made bulky keyboards redundant.

“What about this change to Helvetica?” you ask. It ties to the only significant point in yesterday’s iMac announcement: Retina displays. Just take a look at Helvetica on any high-fidelity screen, and you see a crisp, economical, and adaptable type system.

Sure, Helvetica looks crummy on your standard-resolution screen. But, the people at Apple are OK with this temporary trade-off. You’re living in Apple’s past, and, in time, you’ll move forward. When you do, you’ll find a system that works as intended: because Apple skates to where the puck is going to be.

The House Party

Imagine a friend with a big house, where everyone is welcome. Given how many are invited, there’s always something to see, hear, or discuss when you visit. You can write stuff on the walls, show funny videos to friends, and even play games. He’ll even let you make a room private—just for you and your friends.

There are some rules: He asks that you use your real name. He doesn’t want you to put naughty pictures on the walls. He is against the house being used for illicit purposes. (He also has a strange thing with public breastfeeding—but we all have our hang-ups.) Other than that, he’s pretty chill.

After he’d been hosting the party for a while, Budweiser and a few other companies got in touch with him—and asked to take part. So, they paid him to put up a few of their banners. Reps from these companies visit and listen to what folks are saying. You find this weird, but it covers his bills—and you’d rather look at a few posters than pay to attend the party.

Sometimes, he wonders if rearranging the furniture would make the group discussions better. So, he tests varying arrangements and then switches to the layout that works best. This also upsets you—both that you are part of a test, and that he’s so determined to make the space better. Perhaps you just don’t like change.

That’s an understatement. You aren’t just mildly perturbed by these improvements. In fact, the last time he tested a new room approach, you thought he was manipulating you. So you wrote nasty things on the walls—altogether forgetting that you are a guest in someone else’s house. You SHOUTED IN RAGE when he moved the couch—but forgot all about it a week later. You felt violated when he “infringed your privacy,” even though you put your photos in a place where everyone could see them.

You don’t seem to be angry about climate change, wanton consumerism, or the NSA violating your privacy. Instead, you pout about how shitty your friend’s house is—and how, one day, you’ll leave. Strangely, you never do. He isn’t forcing you to stay, or putting you under duress. Come if you want, or leave if you’d like—no big deal.

I’m not implying that Facebook’s house is perfect, but it is sort of remarkable. It’s improved how we communicate, its service is consistent, and it mostly runs without a hitch. When management oversteps, they tend to step back and correct course. And—I repeat—you are always free to leave, should you be unhappy, or get bored with this particular party.

I wonder if our frustrations with Facebook are more about who we are than what it is. This network seems to activate our more needy characteristics. We crave a sense of connection and the adulation of our peers. As we witness the pseudo-lives others cultivate and project, we feel inferior. So, we respond, in kind, using Instagram to doctor ordinary moments, posting our most clever quips, and unintentionally bragging about ourselves.

Facebook might prove the greatest example of the medium being the message. The Status Update isn’t an invitation to dialogue; it’s a pleasure lever. Each time we post, we draw attention to ourselves, and beg for tiny nuggets of digital applause. This mechanism is powerful because it rewards our most self-serving behaviors.

So, instead of hating the house, or the host, perhaps we need to be more concerned with the ugly reflection we find in all its mirrors.

A Better Way to Fail Fast

Certain notions are so insightful that they spread quickly. As this happens, many such ideas are sapped of their meaning, and turned into hollow platitudes. “Fail fast” is such a phrase, and it’s commonly repeated among those who work in, or with, startups.

The logic behind this phrase is sound. It proposes that making the right choices on undefined projects can be difficult; therefore, you shouldn’t dwell on setbacks. Instead, you should learn what you can from your missteps and move on—quickly.

This idea is powerful because it encourages innovators to look forward. Or, as Thomas Edison said, “I didn’t fail. I just found 2,000 ways not to make a lightbulb; I only needed to find one way to make it work.”

Without such a mindset, one can get hung up on one’s stumbles. Worse yet, the resulting fear can keep the innovator from taking new approaches and entertaining somewhat divergent possibilities.

However, this phrase’s meaning has become perverted, affording many an excuse to give up and switch to something completely different. They use the words “fail fast” to avoid the hard tactical questions that need to be addressed in order to advance. Instead of biting down and dealing with the obstacles at hand, they quit and move on—because they’re “failing fast.”

Let’s bring this discussion back to Edison. What if he had given up on the lightbulb and moved on to another challenge, every time he hit an obstacle? Sure, he could claim that he’d “failed fast,” but he also wouldn’t have achieved his breakthrough.

I’m all for failing fast. For this principle to work properly, though, I choose to fail fast on smaller points, while keeping my long-term goal consistent. As such, I propose a simple rework of the phrase. Let’s stop saying “fail fast,” and instead try: “fail on the small stuff fast.”

Discuss this post in #startupchat on Chapp.

May You Design In Interesting Times

Lately, I’ve repeated the same phrase to a number of smashLAB clients: designing for the web has changed more in the last three years than in the ten before it. I’m never sure whether they believe me or not. I fear that these statements sound a little like a sales pitch. That said, I’m convinced others in this industry have experienced the exact same upheaval.

The Web is Always Changing

The web has always been a tricky medium. Those of us who’ve been at it for a while remember building for 640 x 480 pixel displays, and setting type in graphic formats—to gain some typographic control. We hacked tables and wonky framesets to push immature mark-up to do as we needed. We tested liquid layouts, knowing something interesting was there, even if not quite yet; and, we integrated primitive CMSs and learned to deal with their idiosyncrasies. Some of you might even look back with amusement at tedious tasks like manually dithering a meagre palette of 216 web-safe colors, to expand the reach of that limited set.
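For anyone who never worked within that constraint, here is where the number 216 comes from: the “web-safe” palette allowed just six values per RGB channel, and six cubed is 216. The short TypeScript sketch below is purely illustrative (nothing of the sort appeared in those old projects); it simply enumerates that palette.

    // Illustrative only: enumerate the classic "web-safe" palette.
    // Each RGB channel is limited to six values, so the palette holds
    // 6 * 6 * 6 = 216 colors.
    const steps = [0x00, 0x33, 0x66, 0x99, 0xcc, 0xff];

    const webSafePalette: string[] = [];
    for (const r of steps) {
      for (const g of steps) {
        for (const b of steps) {
          // Format each channel as a two-digit hex value, e.g. "#33ccff".
          const hex = [r, g, b]
            .map((channel) => channel.toString(16).padStart(2, "0"))
            .join("");
          webSafePalette.push("#" + hex);
        }
      }
    }

    console.log(webSafePalette.length); // 216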

And we got pretty good at it. In fact, as recently as four years ago @shelkie and I bragged about how you could overlay wireframes and comps from a smashLAB project on the finished website, and typically see pixel-perfect accuracy. Some didn’t care about that level of detail, but we did—and we were even snobby about that capability and commitment to craft. That said, pixel-perfection (for the most part) doesn’t matter any longer. Looked at coldly, one might even say it was a carry-over, or relic, from the print era.

I’m not implying that the web of 2010 was all about pixel-perfect execution; instead, I reference it as an example of our focus. We also implicitly understood web conventions, and had established solid processes for building for the web. I also think we were often ahead of the game, as we tested new technologies and (sometimes successfully) pushed new approaches. For example, we were playing with asynchronous scripting in IE6 and screwing around with in-browser content editing before most others were. And we were noodling with mobile as early as 2002, but felt that technologies like WAP were just too flimsy to become a standard.

Prognostication is a troublesome practice, almost always proven flawed as time passes. A notion that works in principle is no match for what people actually do. As such, Twitter has, to my surprise, proven a remarkable platform, whereas other good technologies like the MS Surface don’t seem like they’ll ever take off. So, we might have debated whether native apps or the mobile web would win the war (and other such matters), but those discussions meant little until we had lived with these technologies for a while.

After a few years in a multi-screen environment, most would likely agree that the notion of adaptable design is increasingly essential. Yes, there’s Responsive Web Design, which is suitable in a vast number of instances; however, there’s also the need for the conventions we apply in a browser context to adapt to native applications—in order to establish UX/brand consistency for visitors. (E.g., users expect Facebook to act and operate in principally the same way within the browser and in its native iOS/Android applications.) As you’ve likely noticed, I haven’t even touched upon new backend databases, languages, and frameworks, like NoSQL, Node.js, and Meteor, as those aren’t my area of expertise; nevertheless, it’s all in a state of flux.
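To make “adaptable design” a little more concrete, here is a minimal sketch in TypeScript. It is my own illustration, with a made-up 600px breakpoint and invented class names rather than anything from a smashLAB project; it shows the browser-side half of the idea: the same page reshapes itself to suit the viewport it finds itself in, instead of assuming one fixed canvas.

    // Illustrative sketch: adapt a page's layout to the current viewport.
    // The breakpoint and class names below are hypothetical.
    const narrowViewport = window.matchMedia("(max-width: 600px)");

    function applyLayout(isNarrow: boolean): void {
      // Toggle between a stacked (single-column) and a multi-column layout.
      document.body.classList.toggle("layout-stacked", isNarrow);
      document.body.classList.toggle("layout-columns", !isNarrow);
    }

    // Apply the right layout now, and again whenever the viewport crosses
    // the breakpoint (a rotation, a window resize, a different device).
    applyLayout(narrowViewport.matches);
    narrowViewport.addEventListener("change", (event) => {
      applyLayout(event.matches);
    });

In practice most of this lives in CSS media queries; the point is simply that the design system, not the canvas, carries the rules—and those same rules can then be mirrored in a native application to maintain the consistency described above.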

Left Behind by New Models

Establishing design systems that work well across a number of different implementations is no small challenge. Even seemingly simple questions like “does our app need a back button?” are tricky and have broad implications. For some, it’s too much. Recently, I had a conversation (that started on Twitter and moved to Chapp) with a designer, Stephen, who’d largely given up on designing for the web. He explained that although he knew—and had promoted—the approaches championed in web design today, he found them difficult to implement. He even felt that the demands of the digital context had invalidated the fundamental design rules he had learned during his training and career.

Over the course of our discussion, I encouraged Stephen to not become overwhelmed by the changing nature of design. I noted that back in 2007, I was ready to write off social media, but knew that if I did, I would effectively relegate myself to the waste-bin of design: I’d be left behind as a model that was unable to adapt to change. With that concern in mind, I instead jumped in with both feet. For the next six months, I toyed with every social network I could, in order to simply understand. As I did, I was reminded that no one is restricted from growth, unless they get frustrated and claim defeat—which everyone feels like doing at some point.

With that said, I feel lucky to have started my design career on the web, and acknowledge the advantage that comes from having done so. Those of us born around 1970 were the first to have computers around us for our entire lives, and with this came a great deal of change, and a kind of acceptance of lifelong learning. At no point in my career have I felt as though I knew all there was to know about my vocation. Actually, I’ve typically felt more of a chronic sense of inferiority, and a fear that I will never know enough. Perhaps this is why I write so many articles in which I question the nature of design, and the role of the designer: the gig changes so much that I no longer look upon this work as I once did.

In the mid-90s, I saw a copy of “Ray Gun” magazine, designed by David Carson, and it blew my mind. He’d brought an irreverence to a stale-feeling medium. In doing so, he seemed to knock down so many barriers: after that, everything seemed like fair game. Carson wasn’t the only one; designers ranging from Margo Chase to Neville Brody and Carlos Segura all brought a sense of personality to design that I also wanted a part of. I was working as an artist at the time, and these designers had effectively brought art to design. Little did I realize that this was perhaps the swan song of personality-driven graphic design. (Although designers still produce personal and visually delightful work, such efforts seem less relevant in today’s landscape.)

Although aesthetics will always be important, a digital context simply involves a greater number of considerations than what something looks like, or how clever the designer behind it was. In fact, if there’s one characteristic that seems to hold back older designers, it’s an obsession with surface: an inability to appreciate that some design solutions might benefit from plain, or even unoriginal, implementation.

This sounds like a relatively obvious observation, which might best be highlighted by the popularity of flat design. Although I don’t think such approaches are always appropriate, I think it’s fair to say that flat design is in part a reaction to an overly indulgent focus on surface. I don’t need visuals that represent torn bits of paper and leather surfaces in order to check the date in my Calendar application. Although there’s nothing wrong with a beautiful presentation, if the visuals in no way improve the user experience, perhaps we need to ask whether such treatments deserve to be there in the first place.

Preventing Extinction

This isn’t a new idea, but it’s one that many designers who came of age during the print era are surprisingly resistant to. After releasing The Design Method, we sent copies to a number of designers, asking them to (candidly) review the book on either Amazon or their blogs. The feedback was overwhelmingly positive. Younger designers in particular dug into the content and found it useful—even if a little dense in areas.

The only negative feedback seemed to come from designers who were in their 50s and 60s. A couple of these individuals came back to me, asking why I hadn’t made the book more beautiful. I explained that such an approach seemed patently wrong, given the nature of the content presented in the book. When I asked for their thoughts on the content itself, I learned that not one of them had even read the book. They were looking at The Design Method as an object, as opposed to a delivery device for information.

It’s hard to be frank in such interactions, for fear of seeming reactionary or emotional; however, I was somewhat taken aback by their responses. Couldn’t they see that the things we design are about more than just surface? Didn’t they understand that my challenge wasn’t about creating a beautiful artifact, but rather, taking a complex matter and codifying it into an applicable manual? Moreover, what kind of a designer believes playful visuals should take precedence over clarity, readability, and order? Eventually, I realized that this argument probably wasn’t worth feeding into. But, I had to wonder if this small handful of responses was symptomatic of a hurdle that will hold back some designers—those unable to adapt to changing ideas about what design should do.

Regardless, these discussions strengthened my desire to evolve as a designer (and as a human). I am not a slave to past conventions any more than I need to be resistant to new possibilities. My brain isn’t incompatible with emerging approaches, unless I choose to shelter myself from such possibilities. This leaves me silently repeating the following mantra: I will not allow myself to become a designosaur.

There are No Experts

Going back to my conversation with Stephen, though, the question for many mid-to-late-career designers becomes whether to stick with design—and, if so, how to adapt to all of the change underway. Or, “if everything I know about grids doesn’t matter any more, what do I do now?” Well, first of all, there’s plenty of work that isn’t digital in nature that still needs to be done. Second, all that you know about grids (as well as the other design principles you’ve learned) is as important as it ever was. However, you can also learn a whole bunch of other really neat stuff.

In this article, I’ve already used the phrase, “lifelong learning,” and I think this is an increasingly pivotal notion for designers to open themselves up to. There are so many great new approaches, technologies, and opportunities being unveiled every day. There’s no reason to keep yourself from learning these new things; similarly, there’s little possibility of you becoming an expert in all of them.

Perhaps that’s the most important point for people like us to get comfortable with: we don’t need to be experts in all of these areas; instead, we need to build toolkits for ourselves that bend to different circumstances. Today’s designers have to possess an adaptability that allows them to shift from one situation to the next. (Odds are that over the course of your career, you’ll work on many projects that have little in common.)

For older designers, such notions probably seem at odds with their whole means of operating. They were more likely hired because of their expert knowledge, and to complete defined tasks. For example, a soda company could go to a package design studio and say, “we need you to come up with a design for some Tetra Paks, and we expect you to know exactly what marks/items we need present on them.” However, I rarely ever get requests that specific. Instead, prospective clients tend to come to us with questions like, “We’re thinking about building an app to help change people’s dietary habits. How do we know if this is the right approach?” or, “We have this innovative new product people would love if they tried it, but everyone’s scared to give it a chance. What should we do?”

The designer of the past could tell a prospective client that they knew exactly how to complete the task at hand, and had most likely done so successfully many times before. As design challenges become more ambiguous and undefined, though, such assuredness is difficult to maintain. As such, I’m more inclined to say something to our new clients along the lines of, “I’m not sure, but I have some ways to find out and propose some potential approaches.” This might not be enough for some, but it’s a responsible and realistic response. It’s also one that designers will probably have to grow more accustomed to using.

Interesting Times

Since starting smashLAB, I’ve seen new approaches, technologies, and mindsets come and go (and sometimes come again). Some I was reluctant to accept; others I leapt into, head-first. Increasingly, I’m choosing the latter approach. Although I won’t casually take risky approaches on a client project, I have found that we can’t afford to get locked into any one way of working. (Most times, we test more cutting-edge approaches/technologies with our internal projects, so we can trial-run such things safely.)

As we test varying approaches, we see certain patterns emerge. Most notably, we’ve seen how often the right decision is also the simplest one. Regardless of medium, technology, or deliverable, when you remove your own ego—and cut away extraneous elements—you tend to arrive at something that is good. Similarly, I find that our conversations increasingly gravitate to systems over novel approaches. We’re less concerned with how interesting or new a solution might be, and more with how well it will work in different situations. So, I’ll choose the adaptable logo over the smart one; similarly, I feel like I’ve done my job well when a design approach requires fewer fonts, colors, treatments, instructions, and exceptions.

Looking at design from a systems standpoint requires a kind of big-picture focus on the part of the designer. In a print context, or when planning around fixed-size displays, a designer could be highly particular about accuracy of placement; however, the multiple-output scenarios we’re now more likely to design for don’t accommodate this level of specificity. Sure, we can establish relative rules/guidelines, but these don’t work when overly rigid.

I like to compare this challenge to having to make a one-size-fits-all t-shirt. Were you tasked with creating such a thing, you’d probably source a stretchy fabric, and cut the shirt to work for the greatest number of people. However, for the very smallest and largest people, this shirt probably wouldn’t fit very well. Similarly, an application like Todoist works surprisingly well in a number of different environments, but never feels quite as seamless as a native application.

So, yes, design today requires designers, clients, and users alike to accept some level of imperfection. In exchange, we gain the ability to serve a greater number of users, prototype rapidly, and iterate more frequently—leading to design solutions that can in time become excellent. Sure, new problems arise: nascent technologies are not always dependable and can be poorly documented and supported. Some of the approaches we implement aren’t great on older devices, meaning we’re effectively reinforcing barriers for the have-nots. Similarly, the rate of change is resulting in certain trends that aren’t always great for the end-user.

With all of that—the good and the bad—we find ourselves being nudged forward. Considered on the whole, this results in a net benefit, and we see good design elevate the human experience. For the designer prepared to face the challenge, this is an exciting time. The rate of change is forcing us to look past self-expression, and to see design for what it can do for others. Similarly, it’s pushing us to abandon out-of-date approaches that no longer have a place.

Some will say that this change is all too much, and decide that the time has come for them to do something altogether different. Me? I love it. At no other point in my career has design seemed as vital, relevant, and exciting.

It’s understandable that some feel a little trepidation when they learn that their known methods might no longer be the best ones. Similarly, I think we all understand feeling overwhelmed by the sheer volume of change and opportunity at hand. But, I assure you, there’s little you can’t learn if you are willing to give it a shot. You might not be perfect, but no one is. Besides, who wants to sit on the sidelines, just when the game starts picking up?

Discuss this post in #Design on Chapp.