The Last Priesthood

Tuesday, December 16, 2008

The early days of computing were not a good time for lean processes to thrive. Cycle time was atrocious. It often wasn't possible to even run a program without manual intervention; you brought a stack of cards to the priests of the temple, who interceded with the computer god on your behalf and returned the answer.

Over time, as advances in computing brought steadily more power to developers and end users, such intercessionary roles in computing dwindled. Today, they are largely limited to the far end of the development cycle, where applications move out of dev / QA and into the data center. Hands-on installation and configuration are commonplace in bringing apps to production.

The advent of cloud computing will bring more than change; it is bringing disintermediation. In short order, the cloud model (whether outsourced or in-house) will eliminate a wide range of operations tasks, bringing an end to the last priesthood of software development. Automation will become pervasive in every area of the data center, and the power of the temple will be broken.

The Internet era is full of such examples of disintermediation -- Dell Computer with direct-mail customized PCs, Orbitz and Travelocity with agentless travel booking -- but my favorite analogy for the effects of the cloud is from a still earlier time, when a different sort of priesthood fell before an onslaught of infidel machinery.

I was at university in the early 1980s, and had a part-time job in one of the computer centers. Every two weeks I collected my paycheck, made the half-mile pilgrimage to the bank, and waited with dozens of other supplicants for the temple keepers to summon us, at the tolling of a bell and the gleam of a holy light above the teller window, to receive our cash.

Then a curious thing called an "automatic teller machine" opened not far from my dorm, and the "bank" was now just three blocks away. Then another opened indoors, right across the hall from the pay office. Suddenly, I didn’t have to arrange deposits and withdrawals around class schedules anymore. I was no longer acting at someone else's convenience. They were now acting for mine. I abandoned the faith without a shred of remorse, never to visit the Keepers of Deposit and Withdrawal again.

Tom Peters described this disintermediation in consumer banking nicely in his Innovation Revolution seminar. (This was around the time he was starting to go off the deep end with self-branding, and much of the material is dated now, but this one quote has stayed with me for years.)

"There could be nothing more humble than the two-foot-by-two-and-a-half-foot metal jaw of the ATM on the side of the bank or the grocery store. But assuming that ATM is powered by a state-of-the-art information system… it is a simple fact, ask your banker friends, that the ATM system of 1997 relative to the bank of 1970 is literally taking the place of three or four or five or six layers of management! What was a bank in 1970? It was exactly what you wanted it to be: a police station for your cash. Layer after layer of checkers stacked on top of checkers, keeping track of your bucks. And now they're gone. And now they're gone."

This is cloud computing in a nutshell. In the old world, project teams wait in the queue, applications in hand, for a limited staff to manually install and configure the machines (which may also be allocated to specific projects), manually install the apps, and manually create all the rules needed for monitoring and routing.

In the new, capacity (cash) is delivered in bulk to the data center, and product teams lease it at their convenience. Application package metadata describes the needs for capacity, monitoring, routing, bandwidth and various quality-of-service details, and the cloud management software arranges it largely without intervention.
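
To make this concrete, here is a speculative sketch in Scala, the language I've been exploring in other posts; every name and field is my own invention, not any particular vendor's API. It models the kind of package metadata a cloud fabric might consume when provisioning an application:

    // Hypothetical deployment descriptor -- a sketch, not a real cloud API.
    case class ServiceLevel(availabilityPct: Double, maxLatencyMillis: Int)

    case class AppDeployment(
      name: String,
      instances: Int,               // capacity leased from the shared pool
      memoryPerInstanceMb: Int,
      routes: Seq[String],          // routing rules the fabric should create
      healthCheckPath: String,      // hook for automated monitoring
      sla: ServiceLevel)

    object Example {
      // The management software, not an operator, turns this description
      // into provisioned machines, routing entries, and monitors.
      val orders = AppDeployment(
        name = "order-service",
        instances = 8,
        memoryPerInstanceMb = 2048,
        routes = Seq("/orders", "/orders/*"),
        healthCheckPath = "/health",
        sla = ServiceLevel(availabilityPct = 99.9, maxLatencyMillis = 250))
    }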

Now, the role of operations staff is clearly more advanced than that of bank tellers, and although the priesthood is doomed, the personnel are not. Capacity and network planning are still in the picture, but the main focus will be on automation, and the marching orders were issued by Microsoft Chief Architect Ray Ozzie, in the 2008 PDC keynote that introduced Windows Azure: "The previously separate roles of software development and operations have become incredibly enmeshed and intertwined."

Operations must become more than a service bureau. It must be a deep and ongoing joint venture with development teams to create competitive automation that can bring applications to market as fast as possible, manage and scale them with infinite flexibility, and monitor and analyze their behavior at a level of detail that turns production statistics into business intelligence.

Ops teams will also begin to draw on the highest level of development and architecture talent available in an organization, just as they do at leaders like Amazon and Google (and now, Microsoft). Application teams will begin to see healthy competition for star programmers, who understand automation better than anyone. Management needs to present operations jobs as opportunities for innovation and career enhancement.

These changes in organization, staffing and process are imperative. If you can't accomplish them, you can be certain your competitors will. Or like as not, they will just buy salvation in a neat package like Azure or EC2.

(Disclaimer: The above opinions are mine alone and do not necessarily represent the views of current or former employers.)

Falling Up the Stairs: An Occasional Trip with Scala

Wednesday, November 19, 2008

Lately, I've spent some after-hours time exploring Scala, a candidate "next-generation Java" that combines functional and object-oriented features.

It's intriguing. I have that slowly growing sense of excitement I've felt when discovering some of the other standout tools in my repertoire: Eclipse, Emacs, Ruby, Spring, and Hibernate, for instance. It has an elegant combination of features that relieves a wide range of programming annoyances.

Scala offers direct-to-bytecode compilation and seamless two-way interop with Java without the performance issues introduced by Groovy metaclasses. It can achieve the reduction in boilerplate code I've seen with Ruby syntax and mixins and type extensions, without sacrificing the static typing that eases maintenance and integration in large systems. And it offers functional behaviors without diverging from a reasonably Java-ish look and feel, which makes it an easier pitch to organizations using Java.
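
To make the comparison less abstract, here is a minimal, hedged sketch; the domain (accounts and transfers) is invented for illustration, but the Scala is real. One case class stands in for what would be a full Java bean, a trait supplies a mixin, and a java.util collection is used directly:

    // One line replaces the Java boilerplate: fields, constructor, getters,
    // equals/hashCode/toString all come for free, and the class remains
    // statically typed and callable from Java.
    case class Account(id: Long, owner: String, balance: BigDecimal)

    // A mixin: behavior added via a trait, no base class or code generation.
    trait Audited {
      def audit(msg: String): Unit = println("AUDIT: " + msg)
    }

    object Transfers extends Audited {
      // Seamless interop: a plain java.util collection, no wrapper layer.
      private val log = new java.util.ArrayList[String]

      def transfer(from: Account, to: Account, amount: BigDecimal): (Account, Account) = {
        audit("moving " + amount + " from account " + from.id + " to " + to.id)
        log.add(from.id.toString + "->" + to.id)
        // copy() yields updated immutable values -- no setters required.
        (from.copy(balance = from.balance - amount),
         to.copy(balance = to.balance + amount))
      }
    }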

So in the name of lending another voice to the cause, I will offer an occasional article on my experiences with the language. Rather than spend time attempting to teach it from scratch, I'll simply assume high fluency with Java and relate Scala to Java as best I can. (I am sure my efforts will prove amusing to Scala gurus, hence the theme "Falling Up the Stairs." :-)

I will also aim for examples that resonate with most Java application developers, because frankly, Scala can be daunting. Some of the material offered by the admittedly excellent Programming in Scala, and by online tutorials, is pitched more at computer-language experts; I've talked to a few workaday programmers who felt put off by this and, as a result, questioned the language's relevance to "real-world problems."

Finally, I hope to help dispel the myth that adding languages to an environment always adds complexity. This view is generally held by management, which for the most part still equates languages with platforms. In that mindset, a new language is supposed to mean costly tools and training, long ramp-up times, sparse and poorly documented libraries, no leverage of established applications, and new development silos out of which teams can no longer share code or skills.

Thanks largely to wide use of virtual machines, this is no longer true. Rather than build from scratch, more language designers are now choosing to build on the Java and .NET platforms, whose SDKs also offer a vast established feature set and allow new code to coexist with more traditional languages like Java and C#.

In the VM world, learning a new language is not much different from learning, say, Spring or Hibernate. You inhale a six-hundred-page manual, get comfortable with some new concepts and syntax and libraries, and then set about improving maintainability and time to market, with the rest of your work environment intact. Idiomatic fluency of course takes time, but that's true of any technology with a rich feature set.

The point is that the language is just a tool; it's not a different world where everything is foreign and strange. You can pick and choose the parts of an application that can most benefit from the alternate language, and apply it to only those.

Ultimately this will lead to a style of programming where it will be as natural to pick the right language for a task as to pick the right libraries. And as several well-regarded programmers have already commented on this topic, I won't sermonize further.

On to Scala.

(Disclaimer: The above opinions are mine alone and do not necessarily represent the views of current or former employers.)

SOA: What's It For?

Monday, September 29, 2008

The Gartner "hype cycle" holds that new technologies pass through phases of inflated expectations, disillusionment, and eventual productive use. But with SOA, it seems the hype just won't quit.

For me, the final straw came while attending two vendor presentations on application performance, where the logo for their new marketing campaign, emblazoned in the corner of each slide, proclaimed “Smart SOA.” The mind reels. What do JVM tuning and distributed caching have to do with service-oriented architecture?

Naught that I can see. SOA has simply become the miracle food additive of the IT industry. Vendors are straining to enhance the length and breadth of their product lines with “SOA enablers”, the same way food producers are jacking “heart-healthy” fish-oil extracts into everything from lunchmeat to orange juice, promising a revolution in IT health.

Reading the trade pubs, you must admit that ESBs are vital to ramping up your SOA. And governance? Can’t do without it. Help you manage the new services development lifecycle? You’re covered. Rules engines? Ready to fit your SOA. Metadata management? Check. Complex event processing? Check. And mashups? Hold on... those are now “enterprise” mashups, SOA’s killer app.

Do all these technologies actually enable service-oriented architectures? Let me allow Burton Group SOA maven Anne Thomas Manes to answer that:


It has become clear to me that SOA is not working in most organizations. … I've talked to many companies that have implemented stunningly beautiful SOA infrastructures that support managed communications using virtualized proxies and dynamic bindings. They've deployed the best technology the industry has to offer -- including registries, repositories, SOA management, XML gateways, and even the occasional ESB. Many have set up knowledge bases, best practices, guidance frameworks, and governance processes. And yet these SOA initiatives invariably stall out. The techies just can't sell SOA to the business. They have yet to demonstrate how all this infrastructure yields any business value.

For IT initiatives to make this level of investment and still not pan out shows an unsettling disregard for old-fashioned principles like clearly defined problems and rigorously worded requirements. In the case of SOA, we are hobbled from the start with a poor problem statement. Most are variations on the same theme. The one from Wikipedia, culled from several sources, will stand in as well as any:

Service-oriented architecture (SOA) is a method for systems development and integration where functionality is grouped around business processes and packaged as interoperable services. SOA also describes IT infrastructure which allows different applications to exchange data with one another as they participate in business processes. The aim is a loose coupling of services with operating systems, programming languages and other technologies which underlie applications. SOA separates functions into distinct units, or services, which are made accessible over a network in order that they can be combined and reused in the production of business applications.

While technically accurate, the wording does little to help either technologists or business users understand when SOA should be used, what benefits it brings, and how its adoption impacts an organization.

To its credit, the article (and most resources on the subject) goes on to say that SOA “unifies business processes by structuring large applications as an ad hoc collection of smaller modules called services” and that “new applications built from a mix of services from the global pool exhibit greater flexibility and uniformity”, et cetera. But judging from the experiences reported by Burton Group, these gains are not clearly understood, nor does the use of supporting technologies guarantee them.

Improving the definition

I find it helpful to explain SOA not as a standalone, abstruse technical concept as above, but as a small delta from knowledge that is familiar to both businesspeople and technologists. If SOA is new, if it is a step forward from the way most development shops work, then what is that way called? Where are we now? It’s product-oriented architecture, or POA, and the evolution from one to the other is straightforward.

A bit of history: Prior to the client / server era, if you were to take a 50,000-foot view of application architecture, it would not look especially interesting. There was only one tier, the mainframe, and everything lived there:


With the desktop computing explosion in the 1980s, it became de rigueur to move the UI and a good chunk of business logic to a box that cost less, was easier to program and administer, and could provide users with richer interaction:


Came the Web in the 1990s, and abruptly all that capability on the client became a competitive liability. The web allowed companies to transact business via any desktop, anywhere, not just those that could accept a custom install. So the client tier split again, with the bulk of product software moving back to a new server tier for the web.


(Here I’ve renamed the original Server tier as the “Data” tier – this is a stand-in for databases, mainframes, any repository of customer information.)

This is the current state of most product-oriented architectures. There are other intermediaries like firewalls, routers and middleware, but by and large, this is where the assets of a business product reside.

It ought to be far enough. Managing applications is difficult enough without splitting them into progressively more pieces. But in large enterprises and organizations serving diverse markets, it isn’t enough.

Two things happen: one, multiple products are built atop the same data to serve different customers and channels, often on different platforms:


Two, as the organization grows, responsibility is partitioned and new data providers arise, so a product must draw from many data sources:


Often both occur simultaneously. Over time, duplication of work appears: the same data integration, reconciliation, and business logic are being written for multiple channels. This is especially hazardous when independent applications have direct dependencies on multiple databases; as Neal Ford noted, what you essentially have is one large application with its UI partitioned across product groups, all of which must regression test when the database changes.

A service tier helps with both situations. It hosts platform-independent access to common business logic on behalf of many channel products, and provides a measure of insulation to applications that do not require direct data access.
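
As an illustration only (the service and type names are hypothetical, and the wire protocol is beside the point), a service-tier contract boils down to something like the following, expressed here in Scala for brevity; channel applications program against the contract instead of reaching into the data tier:

    // Hypothetical service-tier contract -- a sketch, not a standard.
    case class Address(street: String, city: String, postalCode: String)
    case class CustomerProfile(customerId: String, name: String, address: Address)

    trait CustomerProfileService {
      def lookup(customerId: String): Option[CustomerProfile]
      def updateAddress(customerId: String, address: Address): Unit
    }

    // One implementation owns the data access, reconciliation, and business
    // rules that would otherwise be rewritten per channel; web, voice, and
    // partner products all code against the trait above.
    class DefaultCustomerProfileService extends CustomerProfileService {
      def lookup(customerId: String): Option[CustomerProfile] = None       // stub
      def updateAddress(customerId: String, address: Address): Unit = ()   // stub
    }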


I’ve settled on the terms Client, Channel, Service and Data, but they go by many names. Herzum and Sims use User, Workspace, Enterprise and Resource. A Forrester brief on N-tier uses Interface, Interaction / Service Composition, Business Logic, Data. Various J2EE references call them Client, Web, EJB and EIS (hmm, no conflict of interest there.) I’ve heard Client, Presentation, Logic, Database; Client, Web, Service, Database; still more.

In each viewpoint the role of the new tier is roughly the same, and the evolutionary view suggests a much simpler, more business-oriented definition of SOA:

Service-oriented architecture is the separation of business processes from business products in order to enable more rapid and cost-effective reuse.

SOA is not some dramatic departure from established practice; it is a natural, emergent property of distributed applications. It says you can get more value for your development dollar by investing in a common base of capabilities accessible from multiple products and platforms.

Pitfalls in practice

While the conceptual evolution from POA to SOA is straightforward, the organizational evolution is not – and this is why SOA so often falls down. Here are some common obstacles faced in SOA adoption:
  • In counterpoint to the tired mantra, “IT must be aligned with the business”, services are not aligned with the business, that is, not with business unit product groups. They are aligned with business processes that potentially cross business units.
  • Requirements, funding / ROI models and executive sponsorship are historically project-oriented, and this model is what your oldest, most senior business and IT managers understand. The end result of a product development effort is something users can see and touch. Services are seen as infrastructure because their primary audience is application developers – and infrastructure is always under relentless cost pressure.
  • Development is divided between teams that provide common assets to product owners, and teams that actually build products. This demands that product developers cede a significant degree of control to service owners, leading to turf wars. SOA champions must be prepared to deal with these cultural issues.
  • Without adequate governance, it is easy to build services that only address the needs of individual channel products or data providers. This gives rise to JaBoWS (Just a Bunch of Web Services), where SOAP or some other protocol is used simply as a wire format, with no attention paid to service modeling, interface design, and limiting redundancy.

When you need it

Deciding where you need SOA is not rocket science. The core requirement is to provide business process functionality to a diverse set of consumers, both end users and other applications. If that isn’t the case, SOA should be viewed skeptically; adding more moving parts will raise development and maintenance costs and may not offer meaningful return on investment.

It’s also not hard to tell the difference between adopting SOA and just using SOAP (and ESBs, and repositories, and rules engines, et cetera.) With SOA, your development is split across channel product initiatives and supporting service teams, with the latter taking a healthy portion of project funding and engineering talent. There are governance structures in place to drive consolidation onto common services. Executive sponsorship is rock-solid, and finance is on board with cost and ROI models for service projects that demonstrate clear long-term value to the business.

(Disclaimer: The above opinions are mine alone and do not necessarily represent the views of current or former employers.)

The Craftsman's Advantage

Monday, September 8, 2008

What a presumptuous name for a blog.

No, it isn't an attempt at solidarity with people who sweat for a living. And "Architect" is just the term HR departments reach for when they run out of lofty prefixes for "Engineer" -- a title that doesn't seem to bother some very distinguished individuals at (say) Sun and IBM.

With "Architect" too often comes the mandate that Thou Shalt Stop Coding, having at last risen above the mass of mere craftsmen. I still get asked, "You write code? Aren't you a [title deleted] of Architecture?" And the look that suggests I am indulging some unsavory juvenile pursuit, like zapping ants with a magnifying glass. (Actually, I used a chemistry set. Higher kill ratio.)

Code is the fundamental tool of our trade, and there's much to be said for attaining, and retaining, the status of craftsman (or craftsperson, if you're that P-C). Which explains the "blue-collar" bit. It comes from a project post-mortem some years ago, at which I had the good fortune to be consulting, rather than sitting in the stocks.

Among other things, we found a distressing lack of technical leaders with practical knowledge, people who could think strategically, communicate with senior management, run teams, and still keep their hands dirty -- that is, maintain key technical skills that would allow them to know, not just believe, whether a project would succeed. "Craftsmen. Blue-collar types," someone said. Another: "Yeah, blue-collar architects!" Grins all around.

Good term for it, I thought, and filed it away, to soon forget in the frenzy of week-to-week.

It came to mind again when my wife and I were having renovations done. Adam and Bob (not their real names), our architect and builder, had long since, er, moved up the ladder from individual contributor, but I was struck by how completely in touch they were with "real work." As I watched them in action on site visits, and eavesdropped on abstruse chatter about soffits, cantilevers, load-bearing beams and whatnot, I realized the difficult relationship we see between developers and architecture astronauts was not there.

The reason? Adam and Bob weren't guessing. The project didn't have a "probability of success." Their blueprints and plans were as solid as the concrete foundation they had poured the previous fall. They knew how to build the addition on our house, could see every step in their heads and even do it themselves if they chose, because they had done it before. And as their trade advanced over the years, with new tools and materials and techniques replacing the old, they had kept up.

That same commitment to craftsmanship creates a different sort of software architect as well. Being able to design and build confers a more nuanced, more credible perspective on

  • When REST is appropriate and when WS-* is needed
  • When to cross the line from Ruby on Rails to JEE or .NET -- and which to choose
  • When SOA is a wise investment and when it is a waste of time and money
  • Where caching should go in your daringly innovative N-tier solution, to keep it from sucking mud
  • Whether your outsourcing partner is writing quality code or creating maintenance headaches
  • When to take a calculated risk on an alternate persistence approach
  • Where that shiny new middleware the vendor is pitching fits in your messaging infrastructure
  • When a box-and-arrow diagram is geek line art, and when it has a workable implementation behind it
  • Whether a sharp job candidate is just talking a good game, or can actually get things done
  • When to add new languages and platforms to your app development strategy, and when to take them out

The non-craftsman is at a disadvantage because every technology is a black box. Its behaviors can only be understood from the outside. Its capabilities are conceptual, scrutinized perhaps but never experienced firsthand, with the immediacy and intimacy of actual practice. Its limitations, if not readily apparent, are a latent and immeasurable risk.

Maslow wrote, in words to this effect: "When the only tool you have is a hammer, you tend to see every problem as a nail."

For software architecture, I would add: "When you've never actually used a hammer, you can't tell if a problem is really a nail."

(Disclaimer: The above opinions are mine alone and do not necessarily represent the views of current or former employers.)