Last week, Adobe unveiled a smart stylus (codename Mighty) and a diminutive digital ruler (Napoleon). If you’re interested in the future of interaction — if you’ve been paying attention to Google Glass, for example — you should be thinking about what Mighty and Napoleon have the potential to become.
Glass and Mighty/Napoleon aren’t products, yet. (Though Glass is getting close.) But Google and Adobe have lots of credibility. Both companies consistently ship great products. With Glass and Mighty/Napoleon, both companies are aiming to do more than just iteratively improve currently available technologies. In different ways, Google and Adobe are trying to shrink the gap between digital and analog experiences in everyday life.
The hardware design here is really lovely. The stylus has three sides, with softened edges and a twist along its length. The ruler is sized for use on an iPad, with tool buttons in the center and graspable areas on either end.
The software is innovative and impressive, too. Adobe’s been working on this stuff for a while. Elegant, pressure-sensitive drawing. Integration with the Adobe cloud. Lines and arcs that combine the physical input from the hardware with digital capabilities like snap-to guides and repetition of previous geometries. There’s noticeable lag when drawing, but that’s a limitation of the iPad. I’m hoping that efforts like Mighty will prove to manufacturers of tablets and phones that it’s worth working hard to lower input latency. (I’d also like to see some other improvements to core touch-screen technology, but that’s another post.)
As Michael says in the video, the magic is in the combination of hardware and software. More and more, that’s true for all our digital devices. It’s certainly true for Glass, which is an ambitious attempt to shrink and productize display, communications, and interface hardware, and which will also require an almost completely new software interface design.
In one important way, Glass and Mighty/Napoleon are mirror images. The Glass world is a virtual space – a display only you can see. Glass is an always-on, private, virtual overlay on top of the real world. Mighty and Napoleon are tactile, physical, and shared. They’re a real-world extension of and overlay on your tablet’s digital environment.
These two approaches are complementary and they’re two of the most important building blocks for the next generation of computing experiences. The promise of Glass is, I think, fairly obvious. The promise of the digital/physical hybrid approach that Mighty and Napoleon represent is not yet quite as well understood. But think about it this way: we’re very, very good at using our hands. We’re good at using tools. We’re good at manipulating, leveraging, reasoning about, sharing and showing physical objects. If all our interfaces are entirely virtual, we’re ignoring all those skills and affordances.
The iOS interface taught us to use our fingers to directly manipulate pixels. But as a species, we’re tool-users. Mighty and Napoleon give us tools that extend what our hands can do by themselves. You won’t always use a pen and a ruler, but when you need a pen and a ruler, your fingers aren’t good substitutes. Steve Jobs famously said, “if you see a stylus, they’re doing it wrong.” But I’d argue that if you see a stylus done wrong, they’re doing it wrong. Getting a stylus interface right – hardware and software, both – is hard (just like getting a multi-touch interface right was hard). But Adobe’s demos are a pretty good argument that they’re on the right track.
Work on “wearable computing” and physical, spatial interfaces goes back many years. As it happens, wearables and spatial interfaces were both pioneered at the MIT Media Lab, at around the same time.
John Underkoffler, for example, was showing off room-sized spatial interfaces, complete with multi-surface work environments, object tracking, and tools like Napoleon.
The important idea here is that these two approaches — the virtual and the physical, as well as augmenting reality and using real-world tools to extend our digital environments — are siblings. They’re two sides of the same coin. Two components of the future. They grew up together and now they’re about to go mainstream together. Fun stuff.
Labrouste was one of the great architects of the 19th century. He was
trained in the then-traditional mode, a combination grand tour and
apprenticeship in Italy. During the prolific final twenty years of his
career, he created a new and contemporary language for public
buildings. He pioneered the use of cast iron and combined structural
innovation with detailed attention to decoration and form.
The relationship between the stone outer perimeter wall, pierced by
large windows in its upper story, and the internal iron truss
structure was one of the great novelties of Labrouste’s library
design. They are at once independent and codependent. The fineness
of the iron arcades would be impossible without the stone box, which
stabilizes them, while the minimal thickness of the stone box is
possible because there is so little lateral thrust from the metal
frame. This enhances the luminosity of the interior. Other
architects were experimenting with mixtures of traditional and
innovative materials in rail stations and market halls, but
Labrouste was the first to endow iron with a new civic decorum and a
system of ornament that expresses the nature of the material and
follows its constructive logic. The iron arches are pierced only
where it is structurally possible to lighten the material, thus
producing a beautiful – and perfectly rational – lacy effect.
The above text from the exhibit perfectly captures one idea – one
ideal – of architecture. And more than architecture, of engineering
and design in general.
Computer programming is a relatively new vocation. Like practitioners
of all new vocations, programmers have liberally borrowed and repurposed
vocabulary as we’ve invented our profession. We sometimes call what we
do “software engineering.” And we refer to the functional
definitions of our software systems as “architecture.”
These are metaphors. There’s very little quantitative work in most
day-to-day professional programming. And our “architectures” are not
buildings and bridges; they are built from very different raw
materials.
Still, engineering and architecture are very good
metaphors. “Engineering” is about specifying and producing functional
artifacts. “Architecture” endeavors to create large-scale physical
artifacts that serve complex, multi-layered human needs. Both are,
fundamentally, the quest for deft definition and elegant management of
constraints. Form that follows, but is not dictated by, function.
I think of engineering and architecture as two flavors of a larger
pursuit: design. Design, in this sense, is about transcending the
zero-sum problem of working with multiple, overlapping, conflicting
constraints, about turning those constraints into opportunities, and
about creating beauty from these opportunities.
Labrouste examined cast iron as a structural material and figured out
how to mate iron with stone so that the strengths of the two materials
complement each other. He created a new structural aesthetic from this
pairing. And he designed non-structural ornamentation that extended
that aesthetic.
John Carmack, a Labroustian figure to digital architects, has a nice
description of what he thinks makes a good programmer. Carmack says
that programming requires managing design trade-offs at multiple
layers of abstraction. A good programmer does this multi-layered
design work well, iteratively, intuitively, and keeps a “map” of these
abstractions in her head.
These multiple layers of abstraction are the site grading, plumbing,
stonework, wooden beams, lighting, furniture, finishes and
ornamentation of the computer programmer’s working world: the
microprocessors, disks, memory, input devices, graphics hardware, the
low-level software drivers, the operating system, the various software
libraries, the various parts of the new application being developed.
For Labrouste, technical aspects of building and construction were
inseparable from ornamentation. He carefully planned for the reading
room’s heating and ventilation. He designed a hot-water heating
system for beneath the tables, at the readers' feet, and he
installed twenty-four radiators around the perimeter of the
room. These iron heating devices were given a sculptural treatment
that recalls Labrouste’s earlier designs for streetlights on the
Pont de la Concorde. Nor were the transverse beams in the book
stacks left unadorned; the ornamentation underscored the assembly of
bolted ironwork. Drawings such as these, at full scale, were made to
guide the manufacture of the building’s component parts.
MoMA explanatory text for an exhibited elevation and plan of a
radiator for the reading room of the Bibliothèque Nationale.
Labrouste’s construction diary for the Bibliothèque Sainte-Geneviève
has been made available online by that library and archive.org.
Want five monitors? Create five display rectangles in your virtual
desktop OS, arrange them how you see fit. Maybe you prefer one giant
main viewport, virtually placed about 10 feet in front of you,
pushing up out of a grassy hillock like an informational gazebo,
with an array of tiny widgets flittering at the edges of your
field of view.
Or maybe you don’t want any distraction. Like, none. All the finest
text editors have full-screen modes these days, the better to let
you get down to the business of writing or coding. But no
full-screen mode can block out the world outside the bezel of your
monitor. A virtual reality headset can.
This is compelling. And for content consumption as well as production.
There’s a strong counter-force, though, that pulls against immersion
in every-day computing technology: humans are social animals.
Asking VR equipment to provide a complete and satisfying multi-user
experience is a very, very high bar. The Holodeck and the
Metaverse are still a long way off.
Most of most people’s work and play activity is social. Over the last
thirty years we’ve gotten trained to look at little screens all the
time, which are inherently isolating. So we’ve developed lots of small
habits that are anti-social. But my read on the near-future is that
the increasing number of screens we all have all around us, in
increasingly diverse form factors, is shifting the balance back
toward shared, social computing.
I’m on record about this and it’s one of my driving passions. Our
self-assigned job at Oblong, where I work, is to make all the
screens in the world more interconnected and interactive.
In the workplace, a dominant contemporary pattern for “information
workers” is for each day to be a series of meetings, punctuated by
interstitial time for “actual work.” To do the actual work we go away
to our own desks and interact with our own individual computing
devices.
But that pattern is starting to be supplemented by a new mode: working
together, on multiple devices and larger screens, to get “work”
done. This new mode is enabled by the proliferation of devices and
screens in the typical workplace, by how much of our work is digital
and network-accessible, and by the availability of better and better
collaboration software.
And when I get home from work I rarely sit down to watch television or movies by myself. (If
the TV is on and I’m alone in a room, I’m usually doing something
else, too: eating, cooking, washing dishes, folding clothes, catching
up on email, stretching my creaky spine before I go to bed.) If I’m
watching TV with friends or family, my guess is that it’s actually an
important part of the architecture of that experience to be looking at
the same big screen as everyone else.
So while an immersive, exclusive user experience is definitely
something I want access to some of the time (I’m writing this on an
airplane), it’s also not what I want a lot of the time.
Augmented Reality and Virtual Reality are both useful and are both
important building blocks for our computing future. But the
old-fashioned pixel, on a wall or a desk or in my hand, is just as
important. Updating those old-fashioned pixels to be more numerous,
responsive, and flexible is complementary to creating new kinds of
pixels that take over or overlay on top of our view of the world.
This beautiful colloidal display work was my favorite thing at the
show. Like Persian miniatures floating in space: tiny, glowing,
floating, translucent pixel bubbles.
SIGGRAPH walks in its own shadow. (What, no parties on aircraft
carriers this year?) In the 1990s, everyone working in graphics came
to the conference. The academic computer science world had birthed a
new industry. Papers and poster sessions shared the program with trade
show blinkenlights and art exhibits.
But “computer graphics” is now mainstream. We have GPUs – really good
ones – in our cell phones. Programmers take graphics horsepower for
granted. And so do creative folks of all kinds. The “visual effects” industry isn’t
just effects any more; almost every piece of visual media we
watch is made with computers.
So SIGGRAPH isn’t as central as it once was. But its hybrid spirit
still serves it well. As the boundaries between computer graphics and
everything else we do with computers have become more and more porous,
shards of SIGGRAPH have become central to other events as diverse as CES and Eyeo.
And SIGGRAPH itself continues to be expansively inclusive. The exhibit show floor this year had lots of reassuringly familiar
booths (rendering software, motion capture), and several newer themes,
too (mapping, affordable scanning and 3d printing, markerless motion
capture).
There were a number of nice demos of markerless motion capture,
too. As compute power increases and sensors improve, we can do more
and more to extract meaning from the world around us. We’re not there yet,
though, for demanding applications like general mocap for visual effects and gestural interfaces.
It’s great to see 3D printing continually improve. There were at least
four manufacturers showing off printers on the exhibit floor, plus a
big community maker area in the Emerging Technologies zone. Every year
brings better resolution, support for more materials, and larger print
volumes.
The most important vector of progress, though, is affordability. A
MakerBot Replicator costs $1749. You can buy one from the web and
have it shipped directly to your house. The SIGGRAPH community has
seen this pattern before with graphics cards. Performance and features
matter, but cost reduction is what drives a new technology towards the
tipping point of mass adoption. And with mass adoption, everything
changes.
Here’s what I want for Christmas, Kwanzaa, Hanukkah, Eid, and
Bloomsday: all my files in the cloud, encrypted by me, accessible from
any program on any of my computing devices, cached locally as needed,
and served by providers that I choose and can freely migrate between.
I’ve been copying and synchronizing my home directory between machines
since the mid-1990s. I remember @nelson, in 1996, telling a story in
Sherry Turkle’s class about the dual effort of packing up both his
physical stuff and his digital stuff for a cross-country move.
In many ways, we’ve come a long way since 1996. Today I store music,
email and code in the cloud, thanks to iTunes, gmail and git. And I
use Dropbox and Google Docs and iCloud to share files with other
people and between my various devices.
But I still have a couple hundred gigabytes in my home directory that
I pull across to every new computer I set up. I need at least that
much storage on anything I buy to use as an every-day machine. And I
don’t have a good way to access the files in my home directory from
phones, tablets, or machines that I’m using temporarily.
This seems fixable. In fact, between them, iTunes Match and
Dropbox get enough things really right that, building on what
they’ve done, a “perfect” personal cloud infrastructure is fairly easy
to describe.
In iTunes, the basic action of playing a song is always the same, but
the program’s interface makes it easy to see whether the song file is
cached locally or whether it will have to be pulled across the
network. There are options to fetch groups of songs you expect to want
to play later. Timeouts and failures are handled reasonably
well. Mostly, things just work, which is a significant and laudable
achievement on the part of the iTunes developers.
Dropbox does a terrific job of integrating with the native filesystem
on my Macintosh. I can use Dropbox to keep a directory backed up in
the cloud and synchronized between machines. Files are
versioned. Uploads and downloads happen in the background and there’s
good feedback about when transfers are likely to complete and how much
data is moving around. I can set basic sharing permissions and
generate public download links. Again, things just work; Dropbox ships
great software.
But I don’t want to put my whole home directory in Dropbox, for a
couple of reasons.
First, I’m moderately paranoid about privacy and security. I have
files that I’m not willing to let someone else store for me unless I
encrypt them myself, first.
But it’s too much trouble to manually encrypt and decrypt everything I
put into my Dropbox folders. Emacs is the only program I use
regularly that understands how to encrypt and decrypt files
automatically, on demand. I use emacs for a lot of things but, sadly,
not for everything I do every day. So I need the encryption and
decryption to be built in at the filesystem layer, right between my
local storage and the network.
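To make that concrete, here’s a minimal sketch of encrypt-before-upload
in Python. The “provider” is faked with a local directory so the sketch
runs end-to-end; the function names, key handling, and fake provider are
all illustrative, not any real service’s API.

```python
# Sketch: encryption sits between local storage and the network.
# The provider is simulated by a local directory so this runs
# end-to-end; a real client would speak a storage protocol instead.
from pathlib import Path
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()  # in practice: derived from a passphrase only I know
FERNET = Fernet(KEY)
REMOTE = Path("/tmp/fake-provider")  # stand-in for the storage provider
REMOTE.mkdir(exist_ok=True)

def push_file(local_path: Path) -> None:
    """Encrypt locally; the provider only ever sees ciphertext."""
    ciphertext = FERNET.encrypt(local_path.read_bytes())
    (REMOTE / local_path.name).write_bytes(ciphertext)

def pull_file(name: str, dest: Path) -> None:
    """Fetch ciphertext and decrypt it with the locally held key."""
    dest.write_bytes(FERNET.decrypt((REMOTE / name).read_bytes()))
```

The details don’t matter much; what matters is that the boundary sits
below every application, so emacs, iTunes, and everything else get
encryption for free.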
Second, Dropbox doesn’t have iTunes-like selective local caching on
the Macintosh. My home directory these days is big enough that my
files don’t all fit on even a mid-range laptop hard drive. And they
certainly don’t all fit on my phone or tablet.
I want least-recently-used caching behavior, on by default and
taking up a configurable amount of local storage, plus easy
pre-download manual controls for files I know I’m going to need even
when I don’t have a network connection available. (Or when I might not
have a fast and cheap network connection available.)
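Here’s a rough Python sketch of the caching policy I have in mind:
least-recently-used eviction under a configurable byte budget, plus a
pinned set for files that must stay local. All the names are
illustrative.

```python
# Sketch of the caching policy: LRU eviction under a configurable
# byte budget, with a "pinned" set for files that must stay local
# even without a network connection.
from collections import OrderedDict

class FileCache:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.entries: OrderedDict[str, int] = OrderedDict()  # path -> size
        self.pinned: set[str] = set()

    def pin(self, path: str) -> None:
        """Mark a file as pre-downloaded and never evictable."""
        self.pinned.add(path)

    def touch(self, path: str, size: int) -> None:
        """Record an access, evicting cold unpinned files over budget."""
        if path in self.entries:
            self.used -= self.entries.pop(path)
        self.entries[path] = size  # re-insert at the warm end
        self.used += size
        for victim in list(self.entries):  # coldest first
            if self.used <= self.budget:
                break
            if victim not in self.pinned:
                self.used -= self.entries.pop(victim)
```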
Caching like this requires filesystem-level integration, too, to hide
the details from applications. This isn’t too hard to do on
linux. It’s probably fairly hard to do on the Macintosh, unless you
work at Apple.
These are the basics: encryption and caching.
They both imply that we’ll need to store all filesystem metadata
locally (at least in a naive implementation). Encryption implies one
thing more: we have to build on top of a standard protocol so that I
can audit the source code of the client programs I use.
I want to be sure that no unencrypted data ever leaks off my
machine. I might choose to trust a client provided by a reputable
and well-run company, like Dropbox. But I shouldn’t have to do so.
Defining a standard protocol is a good thing in other ways,
too. Standards open the door to using multiple service providers,
moving between providers, or running my personal cloud infrastructure
And standards mean that individual applications can be extended to
natively support cloud storage. Specific workflows can be decoupled
from the semantics of the underlying filesystem. (As iTunes does, sort
of, with music.)
In fact, we can use this opportunity to redefine our expectations for
our filesystem beyond the basic requirements of network storage.
I’m a curmudgeon, and a long-time unix hacker, so I do tend to think
in terms of directory hierarchies. But that’s not the only way I
think. I’d love to have time built into my filesystem in a deep way,
and keyword tagging, too. And versioning. We can stand on the
shoulders of projects like Lifestreams and WinFS.
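As a sketch of what “time built into my filesystem” could mean, here’s
an illustrative per-file metadata record, with tags and versions as
first-class fields and time as a query dimension. The schema is
something I made up for this post, not a proposed standard.

```python
# Illustrative per-file metadata: tags, versions, and time as
# first-class fields, stored locally so browsing works offline.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Version:
    content_hash: str   # names an (encrypted) blob held by the provider
    modified_at: float  # seconds since the epoch
    size: int

@dataclass
class FileRecord:
    path: str
    tags: set[str] = field(default_factory=set)
    versions: list[Version] = field(default_factory=list)  # oldest first

    def as_of(self, timestamp: float) -> Optional[Version]:
        """The version that was current at a given moment."""
        live = [v for v in self.versions if v.modified_at <= timestamp]
        return live[-1] if live else None
```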
And, of course, we need full-text search. Which requires plugins and
is complicated by the requirement for encryption.
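One plausible way to square those two requirements (again, just a
sketch): build the inverted index locally, in plaintext, and sync only
an encrypted serialization of it, so search runs entirely client-side.

```python
# Sketch: a local inverted index; only its encrypted form ever syncs.
import json
from collections import defaultdict
from cryptography.fernet import Fernet

def build_index(docs: dict[str, str]) -> dict[str, list[str]]:
    """Map each word to the paths of the documents containing it."""
    index: dict[str, list[str]] = defaultdict(list)
    for path, text in docs.items():
        for word in set(text.lower().split()):
            index[word].append(path)
    return dict(index)

fernet = Fernet(Fernet.generate_key())
index = build_index({"notes.txt": "personal cloud storage wish list"})
blob = fernet.encrypt(json.dumps(index).encode())  # this is what syncs
print(json.loads(fernet.decrypt(blob)).get("cloud"))  # ['notes.txt']
```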
Finally, I’d like to be able to give out a token that lets anyone,
from any device, securely share a file, a directory, or a tag. With
encryption built into our cloud storage standard, we can take the next
step and build a certificate-based capability system (with support
for timeouts and additional authentication layers).
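A sketch of the shape such a token might take, with an HMAC signature
standing in for the certificate machinery and expiry checked at
verification time. Everything here is illustrative.

```python
# Sketch: a bearer capability naming a scope (file, directory, or
# tag) plus an expiry. An HMAC stands in for real certificates.
import base64, hashlib, hmac, json, time
from typing import Optional

SECRET = b"owner-held signing key"  # illustrative

def make_token(scope: str, ttl_seconds: int) -> str:
    grant = json.dumps({"scope": scope, "expires": time.time() + ttl_seconds})
    sig = hmac.new(SECRET, grant.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{grant}|{sig}".encode()).decode()

def check_token(token: str) -> Optional[str]:
    """Return the granted scope if the token is genuine and unexpired."""
    grant, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
    expected = hmac.new(SECRET, grant.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    payload = json.loads(grant)
    return payload["scope"] if payload["expires"] > time.time() else None

print(check_token(make_token("tag:photos-2012", 3600)))  # -> tag:photos-2012
```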
So that’s my list for the personal cloud storage solution I’d love to
use every day: filesystem integration, encryption, caching, a standard
protocol, versioning, time as a first-class construct, tagging, secure
sharing, and full-text search.
According to the story, NFL attendance is dropping. If you stay home and watch the game on your couch, you have access to instant replay, lots of camera angles, and a good network connection (so you can see what people are saying on twitter). The league, and individual teams, are trying to figure out how to provide new media and digital experiences for fans who go, in person, to games.
Big sports venues are interesting on a number of levels: architecture and infrastructure; economics, urban planning and politics; the dynamics of crowds. The best stadiums are great social spaces, integrated into the physical and cultural fabric of their communities.
But sports arena design challenges are numerous and knotty. It’s not easy to decide where to put an arena or how to pay for it. It’s not easy to make such a big, inherently squat, special-purpose building look beautiful and work well. And there are no perfect solutions to tough trade-offs inherent in transportation infrastructure and the integration with surrounding public space.
These days, it makes sense to add “new technology” to this list of design challenges: displays, networking, media streaming, social connectivity, and interactive applications.
Baltimore’s Camden Yards, which opened in 1992, reshaped our expectations for contemporary stadiums. In a sharp break from standard practice, HOK, the architecture firm that designed Camden Yards, decided to preserve an existing 340-meter-long warehouse at the heart of the ballpark site. The warehouse, its brick exterior beautifully restored, served as the aesthetic centerpiece of the development.
Camden Yards was such a clear improvement over the multi-purpose, symmetric, concrete-box arenas of the sixties, seventies and eighties, that stadium construction in the United States shifted into a new mode. The stadiums built under the influence of Camden Yards are often described as “retro,” but it’s more accurate to think of them simply as buildings that are designed with specific attention to site, purpose and institutional memory.
I lived in Washington DC in 2005, when professional baseball returned to Washington after a 34-year absence. I was hoping the new team would be called the Grays, in honor of Washington’s great Negro League franchise, the Homestead Grays. The name turned out to be the Nationals, but I was plenty happy to have a team in the city, playing its home games in a stadium I could walk to from my house.
That stadium, RFK, was the first of the concrete donut multi-purpose stadiums, built in 1961 and newly refurbished for the Nats and the DC United soccer team. And it was a better place to watch a game than I had expected.
But a new stadium was being planned. Which got me thinking: what would a digital Camden Yards look like? How could you design a stadium with user-friendly display technology and network capabilities deeply embedded into the architecture, infrastructure and experience?
I wrote a short memo for friends involved with the team and the stadium planning that described some of the near-future experiences that cell phones and cheaper displays would enable. I went back today and read what I wrote in 2005.
Most of it holds up well. But a lot of the text reads pretty anachronistically. I was explaining a lot of things that we take for granted today, only six short years later.
We certainly didn’t have social-mobile-local as a buzzword yet in 2005. If I were writing this same memo today, I’d lean hard on the idea that a ballpark is the most social-mobile-local environment you can imagine. (It’s clear by my omissions, though, that I didn’t really understand the social graph at all – not the way we do now, with twitter, facebook and foursquare part of our everyday lives.)
The basic idea I was trying to explain was that we’d all have portable devices, with fast network access and good screens, with us all the time. And we would expect to use those screens as complements to our “in person” experiences.
Ballparks of the future will house vastly expanded opportunities for
interactivity, fan participation and merchandising. Wireless networks and a new generation of video screens and portable devices will deliver media feeds, music, games, stats and customized product offers to fans in their seats, in the concourse and in their cars on the way to and from games.
The next couple of paragraphs had the heavy-lifting job of painting this future more concretely. A “wide variety of personal communications devices,” indeed …
Wireless networks will blanket the stadium and be compatible with a
wide variety of personal communications devices. By the time the
Washington team plays its first game in its new park, most cell
phones will have built-in wireless networking and be capable of
playing streaming video. Many fans will carry PDAs and small laptops
with them to the game, just as they do in every-day life. And the
team will give high-dollar fans and skybox attendees special tablet
computers that enhance the stadium experience.
Fans will use their phones, PDAs and tablets to watch replays, to
order food and to find their friends. Parents will teach children
how to keep score using pixels rather than pen and paper, and
digital scoresheets will click directly through to libraries of
video clips, commentary and statistics. Die-hard fans will be in
baseball heaven, and casual fans will find that easy access to
expert commentary and video content increases their enjoyment of the
live game. Many people will play fantasy versions of the game that
they are attending, and play with and message other fans in real
time.
If you think I was dumbing this explanation down, or that my audience probably understood this stuff already, I respectfully suggest that you’ve forgotten how much our daily lives have changed in the last seven years.
I regularly went to parties in Washington with media and political professionals who said things like, “the web is okay, but we’ll never, ever stop delivering newsprint every day.” Or, if they were younger, “I’ll believe you that paper is going away when you convince me that I’ll read a web site while sitting on the toilet.”
Of course, the memo included my long-running obsession with the steady increase in pixels in our built environments.
Video screens will be found all over the ballpark, in thousands of
hands and integrated into the architecture in new ways. With the
advent of ubiquitous video, the fan experience will change
dramatically, and as screens and content delivery opportunities
multiply, so too do opportunities to sell advertising and
sponsorships.
As did some notes on super-local activities, crowd engagement, and flipping the relationship between music publishers and venues.
Fans who want to buy tickets for next week’s games will be able to
do so by visiting the section that they want to sit in and reserving
their preferred seats via the wireless network. Fans will vote on
music that is played in the stadium, and be offered the opportunity
to buy songs and download them directly to their phones and
PDAs. Major League Baseball clubs will stop paying royalties for the
use of popular songs and begin to be paid as marketers of and a
distribution channel for commercial music.
My favorite part of this imagined stadium future, though, is still self-assembling.
But, perhaps surprisingly, the number of cameras installed in the
new ballpark will amaze and delight fans even more than the number
of screens. Small, networked cameras will be everywhere – thousands
of cameras – some fixed to point at a single spot in the stadium or
on the field, some that can be controlled directly by fans. Do you
want your own zoomable view of the game from the other side of the
stadium, streamed to your phone’s screen? Do you want access to a
real-time highlight reel cut together automatically by computer? Do
you want to vote on clips that should appear on the scoreboard
screen for the entire stadium to see?
A tiny production and statistics crew, their efforts amplified by
powerful automation tools, will assemble game footage, commentary
and data into a rich, multi-layered production stream. Not only will
fans have access to various slices of this stream while sitting in
their seats, but as they leave the ballpark they will be able to
pick up a DVD with a highly customized version of the game as a
souvenir (or download it later to their TiVo at home). A typical DVD
will have a compressed play-by-play of the game, crowd and
atmosphere footage that focuses particularly on the section of the
ballpark where the fan who bought it was seated, canned color
features on the players who were instrumental to the game, and
background music selected by the fan. All of this – all of the
versions of the DVD – can be produced in real time using automated
scene-cutting and archiving. Human “editors” will give instructions
and make small adjustments to a system that largely runs itself. And
all of the video and data will be archived so that it can be reused
and resold via the web to fans at home.
I remember thinking that the “real” future wouldn’t have DVDs, but that a take-home DVD was the only way to describe what I was trying to explain. I also remember thinking that you’d actually be able to view video and still images from other people’s cell phone cameras, but that trying to explain that would trigger a dead-end conversation about privacy.
I still believe that we’ll have, at some point in the near future, more or less the stadium experience described above. But these days, I think that experience will be enabled largely by a collection of applications we choose ourselves. We don’t really need much help from the companies that own the teams and stadiums (though those companies are in a position to provide great on-site experiences, if they choose to). I’m imagining a combination of the evolving features of Pinterest and Instagram and Livestream, plus lots of stuff from new startups that are still operating under the radar in garages and dorm rooms.
There’s a loose general consensus that “design” is an important part
of tech product development.
But the relationship between “design” and “engineering” is widely
misunderstood.
Walter Isaacson’s biography of Steve Jobs is an example of this
misunderstanding. Isaacson portrays Jobs as interested mostly in
hardware and as a champion of aesthetics over mere functionality.
And it’s easy to see how you could look at Apple from the outside and
imagine that Apple is a hardware company and that Steve Jobs is an
industrial design fetishist. Because Apple’s hardware is great.
But, in fact, this view of Jobs is exactly backwards. Jobs was
obsessed with software. He loved the classic Alan Kay quote,
“people who are really serious about software should make their own
hardware.” Beautiful software, in this worldview, demands
extraordinary hardware. And Jobs thought of design and engineering as
two parts of a single endeavor - building great user experiences.
If you want to have lots of raw material for thinking about what
“good design” is in the context of technology development, you
can’t do any better than showing up at Eyeo and drinking in all the
great work presented here. (And just plain drinking, because there’s
no shortage of good beer.) Same thing for thinking about how
design works as a process. And for thinking about design as a
competitive advantage (or a necessity, depending on how you look at
it).
Eyeo is also an example of a terrific, terrific conference.
The work presented at Eyeo this year was strong, varied, beautiful and
innovative. The speakers showed their process and talked about their
“failed” experiments as well as their successes.
The community is amazingly supportive and broad-minded. Tools matter
to people, but the work they’re doing matters much more. I didn’t
hear a single negative conversation about any of the various and, in
some ways, similar and competing, frameworks that people at Eyeo care
passionately about. (I’m trying to learn from this, since I rarely
miss an opportunity to make critical comments about, say, vi.)
The geographic, background and gender representation is more balanced
than at most conferences I attend, too, which is really nice. My
professional world is shockingly undiverse. Eyeo is something of a
vacation from that.
But my favorite thing about Eyeo is how seamlessly the design and
engineering parts of everyone’s work blend together. People sometimes
talk about a spectrum, with pure artistic expression on one end and
pure programming on the other. But almost everyone works on - and
cares deeply about - both components.
And really the art and the programming aren’t separable. Computer code
is both a medium and a technical toolkit, like oils and brushes, or
bronze and wax casts. Technical facility enables artistic expression.
Which brings us back to the Jobsian worldview. And to how the
collision of design and engineering is changing our user experience
Design and engineering are complementary, and overlapping, approaches
to solving hard and interesting problems. At every level of a
technology stack there are constraints, complexities, ambiguities,
conflicting goals, multiple possible approaches, and opportunities for
beauty.