We talk so much about agile processes, which are clearly well-suited to small and mid-sized projects. But what about scaling agile for big projects? This is a topic that’s often debated, but in my opinion at least, hasn’t been completely settled.
By big projects, I don’t mean the workload of the deployed application (i.e., lots of transactions per second), or even the importance of the deployed application (i.e., a business-critical application). I mean the size and complexity of the application.
Think about a software development project that has, oh, 10 million source lines of C#, C++ or Java code, or maybe more than 300,000 function points. Or maybe 5x or 10x that size.
We’re talking about large software teams, complex requirements and a significant cost. It’s not a happy little Web application that’s being thrown together with a couple of dozen people and rolling out next month – it’s going to take a while, and it’s going to be expensive.
Many large projects use a heavyweight process like the Rational Unified Process. But can they be agile too? Can you successfully combine the flexibility of Extreme Programming with a requirements-first RUP project? RUP already specifies iterative development, but how much of Scrum can scale up to projects that large? Is the answer to use Kanban? Or to say bye-bye, agile?
When discussing this question with SD Times columnists Andrew Binstock and Larry O’Brien, Larry said that, when it comes to the problem of scaling agility for large projects, “it’s not the methodology, but the management. An aligned, self-correcting team is far more likely in a smaller business where there’s an investment and personal relationship from the low to the high. Is a scrum standup going to be successful for a team assigned to execute a doubtful policy about which they can give no meaningful criticism?”
Now, consider how agile plays out when launching a large project. As Andrew Binstock suggested, some questions are:
• How do you decompose large projects into agile-size subprojects?
• How do you balance the inevitable need for requirements with agility’s commitment to changeability?
• How do you do frequent releases that a user can give you useful feedback on?
Larry gently pointed out, though, that my premise might be flawed – or rather, somewhat out of date. Hey, what do you expect from a mainframe guy? He said, “With service-oriented architectures and the ease with which frameworks and libraries are pulled in, fewer companies think of themselves as dealing with very large codebases. The reality is that the enterprise ecosystem is still very large, but subsystems seem to be more loosely coupled than in the past. The result is that most teams perceive themselves as working with only a few tens of thousands of lines of code, for which agile techniques are fine.”

I’m a mainframe guy. Cut my teeth writing COBOL, PL/I and FORTRAN on the IBM System/370. CICS is my friend. Was playing with virtual machines long, long before there was anything called “DOS” or “Windows” or “Linux.” My office closet is filled with punch cards and old nine-track tapes, all probably unreadable today. One of the happiest days of my professional life was trading in an old TeleVideo 925 monochrome terminal for a brand-new 3279 color display.
If you listen to just about any marketer in the software development or IT industry, mainframes are always described as legacy systems – with the implication that only a total loser would continue to use such an outdated piece of junk.
By casually repeating terms like “legacy system,” or buying into the phrase “legacy modernization” for projects that integrate mainframes with other platforms like Java and .NET, everyone perpetuates the marketing myth that mainframes are bad. That they’re relics whose time has come and gone. That the goal of any IT professional should be to replace every mainframe with something else – anything else.
I say, “Bah, humbug. Nonsense. Fiddlesticks. Balderdash.”
A legacy system is an old method, technology, computer system, or application program that continues to be used, typically because it still functions for the users’ needs, even though newer technology or more efficient methods of performing a task are now available. A legacy system may include procedures or terminology which are no longer relevant in the current context, and may hinder or confuse understanding of the methods or technologies used.
In many situations, there is no more efficient tool for solving a business problem than a mainframe. Mainframes are just as current, just as new, just as relevant and just as useful as any other modern, state-of-the-art IT platform. Mainframes are not legacy systems.
Now, are some mainframe applications legacies? Yes. Any application that hasn’t been properly maintained becomes obsolescent. If you have to build extensive wrappers around an old COBOL or RPG program that nobody understands just to keep it running, then you’ve got a problem. But the problem isn’t that it’s running on a mainframe. The problem is that the software wasn’t properly documented and that your engineers weren’t properly trained.
A 30-year-old undocumented C# program running on .NET, or a 30-year-old undocumented C++ program running on Solaris or a 30-year-old undocumented Java program running on WebLogic will be just as “legacy” as a 30-year-old CICS program running on z/OS.
Today, IBM released a new family of mainframes, called the zEnterprise 196. I don’t know much about it – I haven’t touched a mainframe since the early 1980s. But I do know one thing: It’s not a legacy system.

What will software development be like in the year 2020? It would be easy to draw a straight line from ten years ago through today, and see where it goes a decade from now.
Ten years ago: Hosted applications through ASPs (application service providers) were getting started, but had little impact. Today: Hosted applications through the cloud and SaaS providers are having some impact on enterprise data centers, particularly in smaller companies. Ten years from now: Hosted applications will be mainstream, and IT managers will have to justify running applications on-premise.
Ten years ago: The Web was everything, and browsers were how desktops and mobile devices (in their limited way) dealt with Internet-based services. Today: Desktops are still browser devices, but mobile devices increasingly use apps to manipulate Internet services as diverse as Facebook, newspapers and enterprise resources. Ten years from now: Apps will have taken over mobile devices entirely, and “walled garden” apps will be a significant presence on the enterprise desktop. The browser will be far less important than it is today.
Ten years ago: Distributed development teams were just starting to leverage Internet bandwidth, hosted SCM systems and collaboration systems – but even so, most developers lived in their IDEs. Today: The value of collaboration tools has been proven, and in many organizations, sophisticated ALM suites have turned the stand-alone developer into an endangered species. Ten years from now: More and more ALM functionality will migrate onto servers, particularly hosted servers across the Internet. IDEs will turn into front-end apps. Source code and metadata will live in cyberspace.
Ten years ago: Most serious enterprise developers worked with native compiled languages, with the primary exceptions of Web script, Visual Basic and Java. Today: Managed languages like Java, C#, Perl, PHP and Python rule the enterprise, with C/C++ and other native languages being seen as specialist tools for those who need to stay close to the hardware. Ten years from now: With the exception of device developers, the world will belong to managed runtimes and virtual machines.
Ten years ago: Databases meant a SQL-based relational database from a company like Oracle or IBM. Today: While most enterprise data is still in a large SQL-based RDBMS, such as Oracle Database, DB2 or SQL Server, many development teams have embraced lighter-weight alternatives like MySQL and are playing with NoSQL alternatives. Ten years from now: Most enterprise data will still be in giant relational databases, but there will be more acceptance of those alternatives.
Ten years ago: The most important members of a software development team were its programmers; testers got no respect. Today: The most important members of a team are seen as its architects; testers get no respect. Ten years from now: The most important members of the team will be its agile coaches and champions; testers still will get no respect.
Ten years ago: Software development was seen as a wonderful career, even after the dot-com implosion. Today: Software development is a wonderful career, but the recession has affected many enterprise jobs. Ten years from now: New tools will empower less-technical professionals to build applications, but software development will still be a wonderful career, as we take on the hard problems that nobody else can solve.
Ten years ago: SD Times launched. Today: On July 15, 2010, we celebrate the publication of our 250th issue. Ten years from now: The future’s so bright, we’ll have to wear shades.

When you think about a modern software monoculture, which company do you think of first? Chances are that it’s Apple. However, if I asked that question between, say, 1995 and 2007, you probably would have said Microsoft.
In agriculture, a monoculture is what you get when too many farms in a region plant exactly the same crop. If there’s a disease or pest that destroys that crop, the entire region is in big trouble. Similarly, if the economics of that crop change – like a price collapse – everyone is in trouble too. That’s why diversity is often healthier and more sustainable at the macroeconomic level.
However, the problem with a monoculture is that it’s an attractive nuisance. If all your neighbors are planting a certain crop and are making a fortune, you probably want to do that too. In other words, while monocultures are bad for society as a whole, they’re often better for individuals – at least until something goes wrong.
Microsoft’s dominance over the past couple of decades turned into a monoculture. Vast numbers of consumers and enterprises standardized on Windows and Office, because that’s what they knew, that’s what was in stores, that’s where the applications were, and because for them personally, going with the flow seemed to be the right choice.
While there were alternatives, like Unix and Linux and the Macintosh, those remained niche products (especially on corporate desktops), because a monoculture rewards going with the flow and jumping on the bandwagon. Monocultures foster a lack of competition and a desire to play it safe. Nobody wants to upset the bandwagon. And thus, real innovation at Microsoft didn’t make it into Windows and Office – leaving room for the Macintosh to take risks, build a compelling product and start taking market share, and for Linux to tackle and win the early netbook market.
Today, Microsoft’s Windows and Office still dominate the enterprise. But even with Windows 7, I don’t think customers are quite as willing to just do whatever Microsoft says as they used to be.
In the smartphone wars, the iPhone never became a real monoculture – there are too many BlackBerrys and other devices. However, the media certainly acts as if the iPhone is the only game in town. Apple plays into the perception of a monoculture, offering essentially one handset model (now the iPhone 4), with the only variations being a choice of two colors and three memory configurations.
Apple’s dismissal of the well-publicized flaws in the iPhone 4’s antenna – first saying that it was a user error (you’re holding the phone wrong), and then claiming it’s a trivial software bug (displaying an incorrect signal strength) – shows incredible arrogance. And I say that as a happy iPhone 3GS owner and long-time Mac user who frequently recommends Apple products to friends and colleagues.
Any company can release a product that has a flaw. However, Apple’s behavior has been astonishingly bad. And if Apple weren’t trying to impose a monoculture by offering essentially one handset, it wouldn’t be a big deal. If Apple offered half a dozen iOS handsets and one had a bad antenna, nobody would even notice.
The upshot, of course, is that while Apple is sure to fix the problem, we may see the early demise of the perceived iPhone monoculture. Android is coming on strong with a fast-evolving operating system and a lot of innovative work from handset makers and app developers. While I have no plans to migrate from my iPhone 3GS right now, I would definitely consider an Android device for my next purchase. Monocultures are bad, and we all benefit from a rich and diverse marketplace.

Is literally everything about the cloud? You’d think so, going by the chatter from the biggest industry players. It seems that every company that wants to talk to me is pushing something to do with cloud computing. New service offerings from hosting providers. New tools for optimizing the performance of applications, or for making it easier to migrate, or for making cloud-based development more agile.
The cloud sure is seductive. In our company, we’re considering a migration to cloud technologies within the next 12 months. BZ Media, the organization behind SD Times, is a small company, and frankly I’d rather not be maintaining servers, either in-house or as dedicated hardware in a colocation facility. If the economics of cloud computing work out, and if reliability and scalability deliver what we need, then it’s a good thing.
Yet I’m puzzled. How much is cloud computing a software development conversation, rather than an operations conversation? Obviously the platforms are different: Windows Azure is different than Windows Server 2008. Microsoft’s SQL Azure is different than Microsoft’s SQL Server. The Java EE that VMware is pushing into Salesforce.com’s cloud isn’t the same Java EE that’s on your current in-house app server. Working with Amazon S3 is not the same as working with an EMC storage array. So yes, there’s an undeniable learning curve ahead. But that’s what you’d encounter in any significant server platform change, whether cloud, on-premise or colocated.
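To make that learning curve concrete, here’s a minimal sketch of how one small task changes when storage moves from a local disk to Amazon S3, written in Python against the boto library; the bucket name, object key and report contents are hypothetical, chosen only for illustration:

```python
# A sketch, not production code: the same "save the nightly report" step,
# written first against a local or colocated filesystem and then against
# Amazon S3 using the boto library. Bucket name, key and data are made up.

from boto.s3.connection import S3Connection

report = "date,orders,revenue\n2010-07-15,1200,45000\n"

# On-premise or colocated server: write to a path on a mounted volume.
with open("/var/reports/daily.csv", "w") as f:
    f.write(report)

# Cloud storage: put the same bytes into an S3 bucket as an object.
# Credentials are read from the environment or a boto config file.
conn = S3Connection()
bucket = conn.get_bucket("example-reports-bucket")
key = bucket.new_key("reports/2010-07-15/daily.csv")
key.set_contents_from_string(report)
```

The operational story underneath – durability, replication, access control, billing – changes far more than the handful of lines the application team actually writes.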
Therefore, my confusion. How much does a software development team need to know about the cloud, beyond how to deploy to it and integrate applications with cloud-based apps? Often, I believe, not much.