How Bank IT Leaders Can Get out of Reactive Mode (and Start Preparing for Tomorrow)

I spend a lot of time talking to IT professionals in banks across the US and Canada, some large and global, others regional. As the CEO of a technology consultancy that works with financial institutions of all sizes, I consider these varied conversations a big part of my job. And I can tell you that almost every single one of them says the same thing: I’m so busy reacting to day-to-day issues that I just don’t have time to really plan for the future.

In other words, they’re always in a reactive mode as they deal with issues that range from minor (slow transaction processing) to major (catastrophic security breaches). But while playing whack-a-mole is critical to any bank, even a small shift in priorities can give CIOs and their teams the room to get ready for tomorrow rather than just focusing on today.

How to get out of reactive mode

Every bank technology person intuitively knows all this, of course, but it’s almost impossible for most to carve out the time to do any real planning. What they need are some ways to break the cycle. To that end, here are just a few suggestions for IT leaders, based on my experiences with bank IT organizations, to get out of reactive mode and start preparing for tomorrow.

Have a clear vision

A clear vision is important in all organizations. Knowing what we’re all marching towards not only helps keep teams focused and unified, but also ensures high morale and a sense of teamwork. The day-to-day menial tasks mean a lot more when understood in the context of the overall goal.

Break projects into smaller projects

As a runner, I’ve participated in my share of marathons, and I can say that I’ve never started one only to tell myself, “Okay, just 26.2 miles to go!” Rather, like most runners, I break the race down into digestible (and mentally palatable!) chunks. It starts with the first mile. Then I work toward the 10K mark (about six miles), and so on, until all that’s left is the final 5K.

Analogously, I’ve seen successful teams in large organizations do amazing things just by breaking huge, company-shifting tasks into smaller projects — smaller chunks that everyone can understand, get behind, and see the end of. Maybe it’s a three-week assessment to kick off a six-month work effort. Or maybe it’s a small development proof of concept before launching a huge software redeployment initiative slated to last months. Whatever the project, making things smaller allows people to enjoy the little successes, and helps keep teams focused.

Get buy-in from company leadership

IT leaders are constantly going to management asking for more money to fund their projects and operations. And a lot of times, management doesn’t want to give it to them. It’s a frustrating situation for both parties, to be sure, but consider that one of the reasons management might be so reluctant to divert even more money to IT is that you have nothing to show them for all the cash they’ve put into it previously. In their minds, they keep giving you more money, but nothing really changes. You’re still putting out fires and playing whack-a-mole.

If, on the other hand, you’re able to show them a project that will ultimately improve operations (or improve the customer experience, or whatever your goal is) they’ll be a lot more likely to agree. As an IT leader, it’s your job to seek out these projects and bring them to business leaders’ attention.

Implement DevOps methodology

I find a lot of financial institutions are still stuck in the old ways of managing their application lifecycles. They tend to follow an approach — the so-called “waterfall” model — that’s horribly outdated. The waterfall model for building software essentially involves breaking down projects into sequential, linear phases. Each phase depends on the deliverables of the previous phase to begin work. While it sounds straightforward enough, the flaw with the waterfall model is that it doesn’t reflect the way software is actually used by employees and customers in the real world. The reality is, requirements and expectations change even as the application is being built, and a rigid methodology like the waterfall model lacks the responsiveness that’s required in today’s business environment.

To overcome these flaws, we recommend a DevOps methodology. DevOps combines software development with IT operations to shorten application development lifecycles and provide continuous delivery. In essence, DevOps practitioners work to increase communication between software development teams and IT teams, automating those communication processes wherever possible. This collaborative approach allows IT teams to get exactly what they need from development teams, faster, to do their job better. “Fail fast, fail often” is a common mantra. Encourage the failure, learn from it, and then iterate to improve.
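To make this concrete, here’s a minimal sketch of the kind of pipeline automation a DevOps practice typically puts in place, written in GitHub Actions syntax purely as an illustration (the job names, build commands and deployment script are placeholders, not a prescription). The point is that every code change automatically triggers a build, a test run and a push to a staging environment, so the handoff between development and operations becomes a pipeline rather than an email thread.

```yaml
# Illustrative only: a minimal build-test-deploy pipeline in GitHub Actions syntax.
# Job names, build commands, and the deploy script are placeholders.
name: build-test-deploy
on:
  push:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the application
        run: ./gradlew build        # swap in mvn package, npm run build, etc.
      - name: Run automated tests
        run: ./gradlew test         # a failing test stops the pipeline early ("fail fast")

  deploy-to-staging:
    needs: build-and-test           # runs only if the build and tests succeed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to a staging environment
        run: ./scripts/deploy.sh staging   # hypothetical deployment script
```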

DevOps is obviously a radical shift from the way many bank IT professionals are used to making and using enterprise software, and to really implement it right, you need someone well-versed in the practice. But implemented correctly, it has the capacity to kickstart an IT organization that’s stuck in a rut.

Getting ahead

As an IT consultant, I’ve heard all the answers in the book for why your organization can’t seem to get ahead of the day-to-day. But these excuses are just that: excuses. If you’re an IT leader, by definition you have the power to change your organization. You just need to exercise it effectively.

Remember: our world of technology stands on three legs: people, process, and technology. No stool stands on two legs, and neither does IT. Understand these three complementary components, and you’re well on your way to transforming your organization.

Why Banks Need to Start Thinking Like Tech Companies

Historically, for most Americans (and Canadians), the local bank branch has been where you go not just to deposit and withdraw cash, but to manage your retirement or savings account, apply for a credit card and secure a home, car or small business loan. Today, however, the bank’s ascendancy is being challenged by the rise of alternative institutions and other scrappy players who are trying to tap into areas that were formerly the exclusive domain of banks. This category of emerging fintech companies includes online-only banks, credit unions, retirement planning apps, online lending marketplaces, peer-to-peer payment platforms and others too numerous to mention. And while banks may have the size advantage, nothing in business lasts forever. Do these Davids have a chance to slay Goliath? And what do the banks need to do to protect themselves from upstart challengers?

Studies indicate these new entities are giving banks a run for their money (no pun intended). The top five U.S. banks, for instance, accounted for only 21% of mortgage originations in 2019, compared to half of mortgages in 2011. Filling the gap are non-bank lenders, which not only offer a convenient, digital-first customer experience, but also tend to approve more applicants. Similar trends can be witnessed in small business loans and personal loans.

It’s not a stretch to say the traditional bank is facing an existential crisis, brought on in part by a long-standing lack of competition. For decades, many towns had just one bank, and that single bank didn’t have to innovate in the face of zero competition. That reality bred a decades-long attitude of complacency and, with it, a failure to innovate. Retail banks need to rethink pretty much everything. In short, they need to start thinking like a startup—more specifically, a tech startup. Silicon Valley is driven in large part by a philosophy of disruption, innovation and entrepreneurship. Many alternative lenders have been empowered by this philosophy, but that’s not to say that traditional banks can’t make use of it, too. Far from it. Here are some ways that banks can start thinking more like tech companies so they can stay competitive against alternative providers.

Embrace lean methodology. 

Startups, by definition, lack the resources of more established businesses, but they don’t let those limitations stifle innovation. In fact, those limitations actually serve to encourage innovation. Lean methodology is an approach to designing and bringing new products to market that is built around the limited financial resources of a startup. First outlined by entrepreneur Eric Ries in “The Lean Startup,” it emphasizes building and testing iteratively to reduce waste and achieve a better market fit.

To become vehicles of innovation, banks should consider adopting similar methodologies. I’m not suggesting that they should create artificial obstructions or arbitrary constraints. But no matter the size of the institution, budgets are always going to feel too small—not least of all because product developers for massive institutions need to develop huge products to match. With tried-and-true methodologies for innovation like the Lean Startup out there, scarcity shouldn’t be an excuse for not innovating. 

Fail fast, iterate often. Adopt Agile.

Startups know that rapid iteration cycles mean rapid innovation. That also means embracing a culture of failure: failing to fail means failing to succeed. These are the linchpins of agile and lean methodologies. Perfection is the enemy of progress. Get it done, get it out there in front of the market and then iterate improvements.

Identify opportunities with big data. 

One of the reasons alternative lenders are able to offer such high rates of approval is that they employ state-of-the-art AI and machine learning techniques to get a better picture of their customer than a simple credit or background check can deliver. Well-trained AI algorithms can efficiently comb through a wide body of available data to uncover trends and make predictions about the risk of lending to a given individual with incredible accuracy. 

Online-first lenders have an advantage here because they’re in a better position to mine that data. What a lot of people forget about data analytics is that the greatest algorithms are only as good as the data you feed them. Businesses, and banks especially, generate millions of data points per day—data that could prove valuable for data mining and other similar uses. However, the majority of this data is unstructured, heterogeneous, and often siloed and difficult to access. Many successful online-first lenders have carefully structured their digital loan applications to be useful for data analytics purposes from the ground up. When nearly 40% of the work of data analytics is gathering and cleaning data, this represents a huge advantage for the fintech startup.

But traditional banks can take advantage of this, too. Developing online and mobile banking applications to replace old-fashioned paper forms for most activities would set banks up to make better use of that data by ingesting it in a cleaner format. Add in the fact that customers are demanding mobile banking features anyway, and there’s no excuse for not offering a more robust set of them.

Shrink bloated bureaucracy with cross-functional organizations.

Think about all the startups you’ve visited. Did teams operate in silos, constantly blaming other teams for their inability to make progress? Or did they adapt to situations, never believing their roles to be fixed or immutable?

To become the latter kind of organization, traditional banks need to break the cycle of bureaucratic apathy. One way to do that is to have disparate teams work together on projects. Working on shared projects not only helps develop a sense of shared purpose, but it also empowers employees to solve problems in areas outside their traditional wheelhouse. That, in turn, reduces the inefficiency of teams passing the baton from division to division while weeks or months go by before the customer’s concern is ever truly considered. Moreover, bringing together different kinds of minds and thinkers creates the kind of fertile ground in which innovation is known to thrive.

Reports of the bank’s death have been greatly exaggerated.

Ultimately, banks have numerous advantages that they can leverage over most fintech startups. They have their brick-and-mortar retail locations, allowing them to make personal connections with customers that drive loyalty. They’re considered more trustworthy by the average consumer (for the most part). And a lot of people just want to do all their banking at a single bank rather than shop around for various piecemeal banking solutions. If banks can innovate their information technology and organizational structures to meet the changing needs of today’s customers, they can continue to dominate the financial market.

 

Open Banking in the US?


Can government intervention in banking actually encourage innovation rather than restrict it? That’s the question the U.K. government set out to answer with the implementation of its Open Banking directive.

This policy, which requires the country’s nine biggest banks to make customers’ financial data accessible by authorized third-party service providers, came into effect in January of 2018. Now, more than two years later, we’ve seen the results of this experiment first-hand, and the feedback has been quite positive overall. Furthermore, that success across the pond has given many U.S. banking leaders the confidence to start thinking about what similar regulation would look like here.

As a technologist who spends his days helping businesses modernize their IT infrastructures, I think this embrace (albeit cautious) of open banking is an extremely positive development. I’ve seen first-hand the benefits that open platforms can have in an industry, for customers and providers alike. That’s why I think it’s so important that business leaders at traditional banks understand just what open banking is. Because once they do, they’ll agree that open banking, whether it’s through government regulation or through their own action, is just what traditional banks need to stay competitive in our increasingly digital age.

What is open banking?

Let’s start by defining open banking. In short, it is the practice of opening up consumer financial data through APIs. A bank that embraces the open banking model will create APIs that define how a program can reliably and securely access its customers’ data. In addition, there are opportunities (outside the scope of this article) to monetize those APIs, creating entirely new revenue streams.

By creating these specifications, open banking simplifies the process of building third-party apps that need access to consumer data. In fact, you’ve probably been the beneficiary of open banking if you’ve ever used apps such as Mint, Wealthfront, Venmo or TurboTax. Even if you’re wary of just handing over your bank account credentials to a third party, you probably have no problem with checking a box that allows only select data, like transactions, to be shared to authorized and properly vetted apps. And that, in a nutshell, is the power of open banking: it engenders the kind of trust that startup or niche third-party digital service providers need in order to gain traction.
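For the more technically inclined, here is a rough sketch of what one small slice of such an API contract might look like, written in OpenAPI (YAML) form. Everything in it is hypothetical (the paths, scopes and bank URLs are purely illustrative), but it captures the key idea: a vetted third-party app is granted a narrow, read-only scope over specific data, rather than being handed the customer’s online banking credentials.

```yaml
# Hypothetical sketch of an open banking-style API definition (OpenAPI 3.0).
# Paths, fields, scopes, and URLs are illustrative, not any bank's real specification.
openapi: 3.0.3
info:
  title: Example Retail Bank - Account Information API
  version: 1.0.0
paths:
  /accounts/{accountId}/transactions:
    get:
      summary: List transactions the customer has consented to share
      security:
        - oauth2: [transactions.read]   # the app gets only this scope, never the login credentials
      parameters:
        - name: accountId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Transactions for the consented account
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    postedDate:
                      type: string
                      format: date
                      example: "2020-03-14"
                    amount:
                      type: number
                      example: -42.18
                    description:
                      type: string
                      example: "Grocery store purchase"
components:
  securitySchemes:
    oauth2:
      type: oauth2
      flows:
        authorizationCode:
          authorizationUrl: https://auth.examplebank.com/authorize
          tokenUrl: https://auth.examplebank.com/token
          scopes:
            transactions.read: Read-only access to transaction history
```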

What is the U.K. model?

What I’ve just given is the technical definition of open banking. But open banking is also a political, or more specifically a regulatory, concept. In the U.K., the Second Payment Services Directive (“PSD2” for short) requires the country’s largest banks, including HSBC, Barclays and Lloyds, to make certain customer data accessible through APIs. Some of the goals of the directive are to increase competition, counteract monopoly-like effects, and make banks work harder to get (and please and retain) customers.

Those are laudable and important goals for maintaining a robust liberal economic system, but perhaps the most salient aspect of the directive for British citizens will be how it encourages innovation in banking. The idea is that by lowering the barrier to entry for fintech startups, PSD2 will lead to the creation of new startups and products that benefit consumers.

Why should banks embrace open banking?

If open banking is so good for startups and other non-banks, what incentive do major banks in the U.S. have to implement open banking? Aren’t they just enabling their competition and digging their own graves?

The answer is “not really,” but before we get into that I should note that customers are already starting to expect open banking-like features from their banks. They want it to be easy and secure to set up Apple Pay or integrate their transaction data into Mint or TurboTax. Banks must keep up with consumer expectations if they want to retain customers.

Additionally, most of these apps don’t really represent competition for banks; in fact, these services tend to be complementary or adjacent to traditional banking services. If a bank’s core business is holding customers’ deposits and making loans, then it has little to fear from a personal finance or tax app using its customers’ data. All sharing data can do in that case is make those customers more responsible with their money.

The security benefits of open banking shouldn’t be downplayed, either. Defining exactly how apps can gain access to just the data they need will reduce the practice of customers handing over their account credentials to get the digital services they want. That represents a huge reduction in risk for banks with comparatively little investment on their part.

Bringing Banking into the 21st century

Finally, let’s not forget that U.S. banks are already dabbling in open banking voluntarily — at least selectively. That suggests banking leaders must already agree to some extent that opening up data drives innovation. And innovation is something the banking industry could definitely use a dose of. While fintech startups have made splashes specializing in making just one aspect of the consumer financial experience better, banks have attempted to expand their services into every little corner of the banking-adjacent market. What that results in is bloated organizations and unprofitable units siphoning resources from making banking better.

Banks should be using their resources to develop better APIs and deeper data analytics to help them make better loan decisions or catch fraud — not trying to get into the mobile app business. It’s unlikely that big banks will ever become “lean” organizations, but embracing open banking can at last allow banks to offload non-core work and get back to the fundamental services that make them profitable.

 

Can We Make Zelle Cool?

Let’s face it: people like using cool technology. They want the latest iPhone, the hottest games and the newest social platform. One can argue the relative merits of this obsession, or the need for the brightest and shiniest object, but the reality remains: everyone wants the latest, most up-to-date products and apps. Anyone who remembers moving from Myspace to Facebook knows this reality. And one of today’s “cool” apps is Venmo, the person-to-person payments app that boasts over 40 million users, half of them millennials. That’s a big problem for Zelle, but it doesn’t have to be.

Digital payments platforms—the person-to-person kind—aren’t especially new. PayPal and Square have been around for more than 10 years, Stripe is newer, but still well established, and long-forgotten startups like Brodia and Qpass existed even before then. But somehow Venmo, a PayPal subsidiary after a recent acquisition, managed to become so hip that the company name actually became a verb. Think of it as the fun, cool little brother of the PayPal behemoth. Unfortunately, they were so successful that traditional financial institutions started wondering why they were being shut out of the incredibly lucrative mobile peer-to-peer payments market. That’s how we ended up with Zelle, which was launched in 2017 with the express purpose of beating Venmo at its own game.

At face value, Zelle checks off all the boxes. It was built for a specific task and does pretty much everything it’s supposed to do. But where it has succeeded on the technical side, it’s still coming up short where it really matters: user adoption. Chalk it up to any number of factors, from security concerns to Venmo’s massive head start, but the reality is an inescapable one: Venmo is cool and Zelle isn’t. Venmo is what millennials use while their parents use Zelle, or so the perception goes. That may not matter to C-level executives at banks, whose aspirations run far beyond what is cool, but it’s a huge determinant of success or failure in the marketplace.

The good news for Zelle is that this problem can be fixed. The bad news is that it’s going to take some heavy lifting to change consumer opinion and drive adoption by people who see Zelle in the same way that seventh graders see the assistant principal at the school dance. Even if he has the best moves in the room, he’ll always be a square.

It might be tempting to see this as a marketing problem, but if Zelle is going to gain market traction—and silence the naysayers—it’s going to need to make some technical changes to make it better, and more useful, than Venmo.

First, some low-hanging fruit. Venmo explicitly prohibits money transfers for the purchase of goods or services unless one is a Venmo verified merchant. It is meant for person-to-person payments. For Zelle, this is a massive opportunity to gain adoption where Venmo fears to tread. And because Zelle is backed by banks, it has the data and the AI capabilities to pull it off with much less risk than an independent player—even one as large as Venmo.

It could also be argued that one can’t win on the cool factor at all—cool is so intangible as to be ethereal. Can a company really set out to make a cool app? Probably not; coolness flows from utility. But one can win in other ways.

Be Friendly. For starters, apps need to be open and inviting. Big banks have a sometimes well-earned reputation for being fiercely competitive and protective of their turf. Millennials—or, more broadly, consumers under age 35—want openness and interoperability. Systems and tools that are perceived as playing nicely with other ecosystems have a real advantage. Big banks making the news for making it harder for consumers to set up Venmo is not the way to win favor with the target customers. Adopting open banking is.

Be open. Give consumers a choice. The banks that combined forces to create Zelle are all but forcing their customers to use it. With more banks in the queue to make Zelle their primary P2P payments tool, more people will become Zelle users by default, not by choice. As a marketing strategy, this won’t work. Yes, adoption will grow, but customer satisfaction will drop. If the end game is broader use of the bank and all of its products and services, Zelle is but one tool among many, not the be-all and end-all. Banks would do well to focus on the war to gain customers, not the battle to win users.

Simplify. When in doubt, software engineers and designers should focus on this one point. Although consumers under 35 pick up new technology easily, that doesn’t mean they want to spend hours working through an app interface. When designing the app, remember that less is more. Simplify, eliminate options and remember KISS: keep it simple.

Integrate. Better yet, integrate with social media. Yes, Zelle has tried to copy this element of Venmo’s success, but somehow it feels awkward. Social media integration needs to be a priority, and it needs to be done well, something that Zelle still seems to be missing.

In the person-to-person payments discussion, there are many factors to consider. In the end, though, this is just a skirmish in the broader contest for consumer-banking relationships. By forcing users to adopt, and adapt to, Zelle, banks are taking away agency from consumers. This will inevitably result in Zelle never becoming the “cool” payments app. Banks need to consider that larger picture, which goes far beyond which app is victorious in the battle for what’s cool. This is not likely to be a winner-take-all situation, but rather a gradual war of attrition, with many winners and more losers.

Banks would do well to remember this.

Chuck Fried is president and chief executive of TxMQ. Prior to TxMQ, Chuck founded multiple businesses in the IT and other technology services spaces. Before that, he served in IT leadership roles in both the public and private sectors. In 2020, he was named an IBM Champion, affirming his commitment to IBM products, offerings and solutions.

 

Digital Transformation: When It Makes Sense and When It Doesn’t


This isn’t the first time I’ve written about digital transformation, nor is it likely to be the last.

Digital transformation has become a “must use” catchphrase for investor and analyst briefings and annual reports. Heaven help the foolish Fortune 500 company that fails to use the buzzword in its quarterly briefing. It’s the “keto diet” of the technology world.

It’s a popular term, but what does digital transformation really mean? 

Legacy debt. 

In a world of enterprises that have been around for longer than a few years, there is significant investment in legacy processes and technical systems (together, what we like to call legacy debt) that can inhibit rapid decision making. This is a combination of not just core systems, but also decades-old processes and decision-making cycles; bureaucracy, in other words.

So why do we care about rapid decision making? Simply put, in years past, decisions were less consumer-driven and more company-driven, or dare I say it, focus-group driven.

Companies could afford to take their time making decisions because no one expected overnight change. Good things used to take a long time. 

We now live in a world where consumers demand rapid change and improvement to not just technology, but also processes. On a basic level, this makes sense. After all, who hasn’t had enough of poorly designed AI-driven, voice-activated phone trees when we just want to ask the pharmacist a question about our prescription refill? 

Too often, however, legacy debt leads to rushed implementations to meet customer demands, often with unintended (and catastrophic) consequences. Usually these take the form of hastily built (or bought) point solutions. This is where disruptors (aka startup companies) often pop up with quick, neat point solutions of their own to solve a specific problem: a better AI-driven phone solution, a cuter user interface for web banking, sometimes even a new business model entirely. Your CIO sees this in an article or at a conference and wonders, “Why can’t we build this stuff in-house?”

Chasing the latest, greatest feature set is not digital transformation. Rather, digital transformation begins with recognizing that legacy debt must be considered when evaluating what needs changing, then figuring out how to bring about said change and how to enable rapid decision making in the future. If legacy systems and processes are so rigid or outdated that a company cannot implement change quickly enough to stay competitive, then, by all means, external help must be sought quickly. Things must change.

However, in many cases what passes for transformation is really just evolution. Legacy systems, while sometimes truly needing a redo, do not always need to be tossed away overnight in favor of the hottest new app. Rather, they need to be continually evaluated for better integration or modernization options, usually by better exposing back-end systems. Transformation is just another word for solving a problem that needs solving, not introducing a shiny object no one has asked for. Do our systems and processes, both new and old, allow us to operate as nimbly as we must, to continue to grow, thrive and meet our customer demands today and tomorrow?

The Steve Jobs effect

Steve Jobs once famously stated (it’s captured on video, so apparently it really happened), when asked why he wasn’t running focus groups to evaluate the iPod, “How would people know if they need it, love it or want it if I haven’t invented it yet?”

Many corporate decision-makers think they are capable of emulating Steve Jobs. Dare I say it, they are not, nor are most people. Innovating in a vacuum is a very tricky business. It’s best to let the market and our customers drive innovation decisions. Certainly, I advocate for healthy investment in research and development, yet too often innovation minus customers equals wasted dollars. Unless one is funding R&D for its own sake, certainly a worthy cause, one needs some relative measure of the value and outcomes of these efforts, which usually translates to marketability and, ultimately, profits.

Measurement

Perhaps the most often forgotten part of our technology investments is understanding what the end goal, or end state, is, and measuring whether or not we accomplished what we set out to do. Identifying a problem and setting a budget to solve that problem makes sense. But failing to measure the effectiveness after the fact is a lost opportunity. Just getting to the end goal isn’t enough if, in the end, the problem we sought to solve remains, or, worse yet, we’ve created other, more onerous unintended consequences.

Digital transformation isn’t about buzzwords or “moving faster” or outpacing the competition. It’s all of that, and none of that at the same time. It’s having IT processes and systems that allow a firm to react to customer-driven needs and wants, in a measured, appropriate, and timely way. And yes, occasionally to try to innovate toward anticipated future needs.

Technology is just the set of tools we use to solve problems.

Does it answer the business case?

“IT” is never — or at least shouldn’t be — an end in itself: it must always answer to the business case. What I’ve been describing here is an approach to IT that treats technology as a means to an end. Call it “digital transformation,” call it whatever you want — I have no use for buzzwords. If market research informs you that customers need faster web applications, or employees tell you they need more data integration, then it’s IT’s job to make it happen. The point is that none of that necessitates ripping and replacing your incumbent solution.

IT leaders who chase trends or always want the latest platform just for the sake of being cool are wasting money, plain and simple. Instead, IT leaders must recognize legacy debt as the investment it is. In my experience, if you plug this into the decision-making calculus, you’ll find that the infrastructure you already have can do a lot more than you might think. Leverage your legacy debt, and you’ll not only save time delivering new products or services, but you’ll also minimize business interruption — and reduce risk in the process. 

That’s the kind of digital transformation I can get behind.

TxMQ’s Chuck Fried and Craig Drabik Named 2020 IBM Champions

More than 40 years ago, TxMQ was founded by veterans of IBM who believed in supporting mainframe customers through new solutions built for IBM products. We’ve come a long way since 1979: we’ve moved our headquarters from Toronto to the U.S., our leadership team has grown, and we continue to enhance our roster of services. And though our capabilities and products have advanced, we’ve still managed to maintain a close connection to our roots at IBM. Our mission has also remained the same: to empower companies to become more dynamic, secure and nimble through technology solutions.

This mission has helped us assemble a team of innovators who constantly strive to help our clients meet their business goals through technological advancements.

Chuck Fried and Craig Drabik are great examples of TxMQ’s consistent excellence in bringing the best solutions to our enterprise clients. They were recently named to IBM’s 2020 Class of Champions for demonstrating extraordinary expertise, support and advocacy for IBM technologies, communities and solutions. Champions are thought leaders in the technical community who continuously strive to innovate and support new and legacy IBM products. As IBM states, “champions are enthusiasts and advocates… who support and mentor others to help them get the most out of IBM software, solutions, and services.” Here, Chuck and Craig share what IBM and being named IBM Champions means to them:

IBM Champion of Cloud, Cloud Integration, and Blockchain

Chuck Fried
President, TxMQ

“I’ve been building technological solutions for over 30 years, and have worked with many large software and technology companies. As we help our clients evolve, I am constantly drawn back to IBM. They are thought leaders in the technology industry, bringing the best new software and services to the market. Working with them, we know that our clients are getting the best possible solution. I’m proud to continue advocating for their brand.”

IBM Champion of Blockchain

Craig Drabik
Technical Lead, Disruptive Technologies Group

“Although IBM is often associated with mainframe and legacy technologies, they offer so much more to the technology industry. Being named a Champion for work in disruptive technologies proves this. IBM is progressive and innovative, and strives to develop solutions for a range of products and industries. Working with IBM, we have access to world-renowned solutions that are trustworthy.”

As TxMQ builds new tools to support and grow the IBM ecosystem, having two Champions is a great achievement for our company. With this recognition, we can continue fostering our relationship with IBM and building life-changing technology for our customers.

Generating OpenAPI or Swagger From Code is an Anti-Pattern, and Here’s Why

(This article was originally posted on Medium.)

I’ve been using Swagger/OpenAPI for a few years now, and RAML before that. I’m a big fan of these “API documentation” tools because they provide a number of additional benefits beyond simply being able to generate nice-looking documentation for customers and keep client-side and server-side development teams on the same page. However, many projects fail to fully realize the potential of OpenAPI because they approach it the way they approach Javadoc or JSDoc: they add it to their code, instead of using it as an API design tool.

Here are six reasons why generating OpenAPI specifications from code is a bad idea.

You wind up with a poorer API design when you fail to design your API.

You do actually design your API, right? It seems pretty obvious, but in order to produce a high-quality API, you need to put in some up-front design work before you start writing code. If you don’t know what data objects your application will need or how you do and don’t want to allow API consumers to manipulate those objects, you can’t produce a quality API design.

OpenAPI gives you a lightweight, easy to understand way to describe what those objects are at a high level and what the relationships are between those objects without getting bogged down in the details of how they’ll be represented in a database. Separating your API object definitions from the back-end code that implements them also helps you break another anti-pattern: deriving your API object model from your database object model. Similarly, it helps you to “think in REST” by separating the semantics of invoking the API from the operations themselves. For example, a user (noun) can’t log in (verb), because the “log in” verb doesn’t exist in REST — you’d create (POST) a session resource instead. In this case, limiting the vocabulary you have to work with results in a better design.
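As a quick illustration of that last point, here is what the “session” example might look like as a design-first OpenAPI fragment. The names are mine and purely illustrative; the point is that the design talks about resources (a session you create) rather than verbs (logging in).

```yaml
# Illustrative fragment only: a "log in" action modeled as creating a session resource.
paths:
  /sessions:
    post:
      tags: [auth]
      operationId: createSession      # a stable operationId keeps regenerated code consistent
      summary: Create a session (the REST equivalent of "log in")
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [username, password]
              properties:
                username:
                  type: string
                password:
                  type: string
                  format: password
      responses:
        '201':
          description: Session created; the response carries a token for later requests
```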

It takes longer to get development teams moving when you start with code.

It’s simply quicker to rough out an API by writing OpenAPI YAML than it is to start creating and annotating Java classes or writing and decorating Express stubs. All it takes to generate basic sample data out of an OpenAPI-generated API is to fill out the example property for each field. Code generators are available for just about every mainstream client and server-side development platform you can think of, and you can easily integrate those generators into your build workflow or CI pipeline. You can have skeleton codebases for both your client and server-side plus sample data with little more than a properly configured CI pipeline and a YAML file.
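To give a feel for how little it takes, here is a hypothetical schema fragment with example values filled in; generated stubs can serve this sample data back to client developers before a single line of business logic exists. The object and field names are invented for illustration.

```yaml
# Illustrative only: filling out "example" for each field is enough for generated
# stubs and mock servers to return plausible sample data. Field names are hypothetical.
components:
  schemas:
    Account:
      type: object
      properties:
        id:
          type: string
          example: "acct-1138"
        nickname:
          type: string
          example: "Everyday Checking"
        balance:
          type: number
          format: double
          example: 2043.17
```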

You’ll wind up reworking the API more often when you start with code.

This is really a side-effect of #1, above. If your API grows organically from your implementation, you’re going to eventually hit a point where you want to reorganize things to make the API easier to use. Is it possible to have enough discipline to avoid this pitfall? Maybe, but I haven’t seen it in the wild.

It’s harder to rework your API design when you find a problem with it.

If you want to move things around in a code-first API, you have to go into your code, find all of the affected paths or objects, and rework them individually. Then test. If you’re good, lucky, or your API is small enough, maybe that’s not a huge amount of work or risk. If you’re at this point at all, though, it’s likely that you’ve got some spaghetti on your hands that you need to straighten out. If you started with OpenAPI, you simply update your paths and objects in the YAML file and re-generate the API. As long as your tags and operationIds have remained consistent, and you’ve used some mechanism to separate hand-written code from generated code, all you’re left to change is business logic and the mapping of the API’s object model to its representation in the database.

The bigger your team, the more single-threaded your API development workflow becomes.

In larger teams building in mixed development environments, it’s likely you have people who specialize in client-side versus server-side development. So, what happens when you need to add to or change your API? Well, typically your server-side developer makes the changes to the API before handing it off to the client-side developer to build against. Or, you exchange a few emails, each developer goes off to do their own thing, and you hope that, when everyone’s done, the client implementation matches up with the server implementation. In a setting where the team reviews the proposed changes to the API before moving forward with implementation, you’re in a situation where code you write might be thrown away if the team decides to go in a different direction than the developer proposed.

It’s easy to avoid this if you start with the OpenAPI definition. It’s faster to sketch out the changes and easier for the rest of the team to review. They can read the YAML, or they can read HTML-formatted documentation generated from the YAML. If changes need to be made, they can be made quickly without throwing away any code. Finally, any developer can make changes to the design. You don’t have to know the server-side implementation language to contribute to the API. Once approved, your CI pipeline or build process will generate stubs and mock data so that everyone can get started on their piece of the implementation right away.

The quality of your generated documentation is worse.

Developers are lazy documenters. We just are. If it doesn’t make the code run, we don’t want to do it. That leads us to omit or skimp on documentation, skip the example values, and generally speaking weasel out of work that seems unimportant, but really isn’t. Writing OpenAPI YAML is just less work than decorating code with annotations that don’t contribute to its function.

Bringing Offshore Contracts Back Onshore


Although many companies rely on offshore technology teams to save costs and build capacity, there are still many challenges around outsourcing. Time zone issues, frequent staff turnover, difficulty managing workers, language barriers—the list goes on and on. Offshore workers can allow companies to save money. But what if offshore pricing was available for onshore talent? What if the best of both worlds – an easily managed workforce at a competitive cost – were possible? In fact, it is.

For all the pains and issues related to building global technology teams, outsourcing remains a viable option for many companies that need to build their engineering groups while controlling costs. With the U.S. and Europe making up almost half of the world’s economic output, but only 10% of the world’s population, it’s no secret that some of the world’s best talent can be found in other countries. That’s how countries such as India, China, and Belarus have become global hubs for engineering. And why not? They have great engineering schools, low costs of living, and large numbers of people who are fully qualified to work on most major platforms.

Reinventing Outsourcing 

This is basic supply and demand: companies want to hire people at as competitive a price point as possible without sacrificing quality. This is exactly how Bengaluru and Pune became technology juggernauts in the 1990s, and how Minsk became a go-to destination a decade later. The problem, of course, is that what was once a well-kept secret became well known…and wages started creeping up.

With salaries increasing in countries that typically supply offshore talent, the cost of offshore labor is also on the rise. In India, a traditional favorite for offshore work, annual salaries have been rising steadily by 10% since 2015, making it less beneficial for companies to hire workers there. In fact, in one of the biggest outsourced areas, call centers, workers in the U.S. earn on average only 14% more than outsourced workers. In the next few years, the gap will be narrow enough that setting up a call center in Ireland or India just won’t make sense. What the laws of supply and demand can give, they can also take away. That’s why “outsource it to India” is no longer an automatic move for growing technology companies, financial institutions, and other businesses looking to rapidly grow their teams. It’s also why major Indian outsourcing companies such as Wipro and Infosys are diversifying into other parts of the world.

As political and economic instability grow, moving a company’s outsourced work domestically can help to mitigate the risks of an uncertain landscape. A perfect example of this is China. Hundreds of American companies have set up development offices in China to take advantage of a skilled workforce at a low price point. So far so good, right? Well, not really. Due to concerns about cybersecurity and intellectual property theft, companies such as IBM have mandated that NONE of their code can come from China. All of a sudden, setting up shop in Des Moines is a lot more attractive than going to Dalian.

The federal government, as well as many states and municipalities, is also playing an active role in keeping skilled technology jobs at home through grants and tax breaks. New programs and training schools are also emerging, helping to build talent in the U.S. at a lower cost and helping companies take advantage of talented workers outside of large cities, in places with low costs of living. Hiring 100 engineers in midtown Manhattan might not be cost-effective, but places like Phoenix and Jacksonville allow companies to attract world-class talent without breaking the bank.

This doesn’t mean the end of offshoring, of course. When looking for options to handle mainframe support and legacy systems services, including SPARC, AIX, HP-UX, and lots of back-leveled IBM, Oracle and Microsoft products, the lure of inexpensive offshore labor often wins. Unlike emerging technologies, legacy systems do not require constant updates and front-end improvements to keep up with competitors. The typical issues that affect offshore outsourcing aren’t as big of an issue when legacy systems are involved. So where does it make sense to build teams, or hire contractors, domestically?

Domestic Offshoring (sometimes called near-shoring)

There is a key difference between outsourcing development to overseas labs and building global teams, but the driving force behind both approaches is pretty much the same: cut costs while preserving quality. Working with IT consulting and staffing companies like TxMQ is a prime example of how businesses can take advantage of onshore outsourcing without going into the red. Unlike technology hubs such as Silicon Valley, these companies are typically located in areas such as the Great Lakes region, where outstanding universities (and challenging weather!) yield inexpensive talent thanks to lower living costs. With aging populations creating a need for skilled workers in the eastern United States, more states are introducing incentives to attract workers. This is already creating an advantage for companies that provide outsourced staffing because they can charge lower prices than traditional technology hubs. It’s the perfect mix of ease, quality, and cost.

Global 2000 companies face challenges resulting from their large legacy debt, and the costs to support their systems are high. As they struggle to transform and evolve their technology to today’s containerized, API-enabled, microservices-based world, they need lower-cost options to both support their legacy systems and build out new products.

While consulting and staffing companies are well known for transformational capabilities and API enablement, there are other advantages that aren’t as well known. Beyond these transformational services, many companies also support older, often monolithic, applications, including those likely to remain on the mainframe forever. From platform support on IBM Power systems to complete mainframe management and full support for most IBM back-leveled products, companies like TxMQ have found a niche providing economical support for enterprise legacy systems, including most middleware products of today, and yesterday. This allows companies to invest properly in their enterprise transformation while maintaining their critical legacy systems.

The Future of Work

In a 2018 study of IT leaders and executives, more than 28 percent planned to increase their onshore spending in the next year. With the ability to move work online, companies can support outsourced teams easily, whether onshore or offshore. As the pay gap between the U.S. and other nations closes, employing the talents of outsourced workers onshore can help companies sidestep age-old issues such as time zones and language barriers, and benefit from outsourcing without having to fly 15 hours across two oceans to do it.

Contemplations of Modernizing Legacy Enterprise Technology

What should you think about when modernizing your legacy systems?

Updating and replacing legacy technology takes a lot of planning and consideration. It can take years for a plan to become fully realized. Often poor choices during the initial planning process can destabilize your entire system, and it’s not unheard of for shoddy strategic technology planning to put an entire organization out of business.

At TxMQ we play a major role in helping our clients plan, integrate, and maintain legacy and hybrid systems. I’ve outlined a few areas to think about in the course of planning (or, in some cases, re-planning) your modernization of legacy systems.

1. The true and total cost of maintenance
2. Utilize technology that integrates well with other technologies and systems
3. Take your customer’s journey into consideration
4. Ensure that Technical Debt doesn’t become compounded
5. Focus on fixing substantiated, validated issues
6. Avoid technology and vendor lock-in

1. The true and total cost of maintenance

Your ultimate goal may be to replace the entire system, but taking that first step typically means making the move to a hybrid environment.

Hybrid environments utilize multiple systems and technologies for various processes. They can be extremely effective, but difficult to manage on your own. If you are a large corporation with seemingly endless resources and an agile staff with an array of skill sets, then you may be prepared. The reality, however, is that most IT departments are on a tight budget, with people multitasking and working far more than 40 hours a week just to maintain current systems.

These days most IT departments just don’t have the resources. This is why so many organizations are moving to managed IT services to help mitigate the costs, take back some time, and become more agile in the process.

When you’re deciding to modernize your old legacy systems, you have to take into consideration the actual cost of maintaining multiple technologies. As new tech enters the marketplace, older technologies and applications move toward retirement, and so do the people who historically managed those technologies for you. It’s nearly impossible today to find a person willing to put time into learning technology that’s on its last legs. It’s a waste of time for them, and it will be a huge drain on time and financial resources for you. It’s like learning how to fix a steam engine instead of a modern electric engine. I’m sure it’s a fun hobby, but it will probably never pay the bills.

You can’t expect newer IT talent to accept work that means refining and utilizing skills that will soon no longer be needed, unless you’re willing to pay them a hefty sum not to care. Even then, it’s just a short-term answer, so don’t expect them to stick around for long; always have a backup plan. It’s also good to have someone on call who can help in a pinch and provide fractional IT support when needed.

2. Utilize technology that integrates well with other technologies and systems.

Unless you’re looking to rip and replace your entire system, you need to ensure that the new plays well with the old. Spoiler alert: different technologies and systems often don’t play well together.

Just when you think you’ve found that missing piece of software that fixes all the problems your business leaders insist they have, you’ll find that integrating it into your existing technology stack is much more complicated than you expected. If you’re going it alone, take this into consideration when planning a project: two disparate pieces of technology often act like two only children playing together. Sure, they might get along for a bit, but as soon as you turn your back there’s going to be a miscommunication and they’ll start fighting.

Take the time to find someone with expertise in integrations, preferably a consultant or partner with plenty of resources and experience in integrating heterogeneous systems.

3. Take your customer’s journey into consideration

The principal reason any business should contemplate upgrading legacy technology is to improve the customer experience. Many organizations make decisions based on how a change will increase profit and revenue, without taking into consideration how that profit and revenue are made.

If you have an established customer base, improving their experience should be a top priority because they require minimal effort to retain. However, no matter how superior your services or products are, if there is an alternative with a smoother customer experience, you can be sure that your long-time customers will move on. As humans, we almost always take the path of least resistance. If you can improve even one aspect of a customer’s journey, you have chosen wisely.

4. Ensure that Technical Debt doesn’t become compounded


Technical debt is the idea that choosing the simple, low-cost solution now will almost certainly mean paying a higher price in the long run. The more often you choose this option, the more that debt grows, and eventually you will be paying it back with interest.

Unfortunately, this is one of the most common mistakes when undertaking a legacy upgrade project. This is where being frugal will not pay off. If you can convince the decision makers and powers that be of one thing, it should be to not choose exclusively based on lower upfront costs. You must take into account the total cost of ownership. If you’re going to take time and considerable effort to do something you should always make sure it’s done right the first time, or it could end up costing a lot more.

5. Focus on fixing substantiated, validated issues

It’s not often, but sometimes when a new technology comes along, like blockchain for instance, we become so enamored with the possibilities that we forget to ask: do we need it?

It’s like having a hammer in hand and running around looking for something to nail. Great, you have the tool, but if there’s no obvious problem to fix with it, then it’s just a status symbol, and that doesn’t get you very far in business. There is no cure-all technology. You need to outline what problems you have, then prioritize them and find the technology that best suits your needs. Problem first, then solution.

6. Avoid technology and vendor lock-in

After you’ve defined what processes you need to modernize, be very mindful when choosing the right technology and vendor to fix that problem. Vendor lock-in is serious and has been the bane of many technology leaders. If you make the wrong choice here, it could end up costing you substantially to make a switch later, potentially even more than the initial project itself.

A good tip here is to look into what competitors are doing. You don’t have to copy what everyone else is doing, but to remain competitive you have to at least be as good as your competitors. Take the time to understand and research all of the technologies and vendors available to you, and ensure your consultant has a good grasp on how to plan your project taking vendor lock-in into account.

Next Steps:

Modernizing a major legacy system may be one of the biggest projects your IT department has ever taken on. There are many aspects to consider and no two environments are exactly the same, but it’s by no means an impossible task. It has been done before, and thankfully you can draw from these experiences.

The best suggestion I can give is to have experienced help available to guide you through this process. If that’s not accessible within your current team, find a consultant or partner with the needed experience so you don’t have to worry about making the wrong choices and ending up with a bigger issue than you had in the first place.

At TxMQ we’ve been helping businesses assess, plan, implement, and manage disparate systems for 39 years. If you need any help or have any questions, please reach out today. We would love to hear from you.

North America: Don’t Ignore GDPR – It Affects us too!


It’s well documented, and fairly well socialized across North America, that on May 25, 2018, the GDPR, or General Data Protection Regulation, formally goes into effect in the European Union (EU).
Perhaps less well known is how corporations located in North America, and around the world, are actually impacted by the legislation.

The broad stroke is this: if your business transacts with and/or markets to citizens of the EU, the rules of the GDPR apply to you.

For those North American-based businesses that have mature information security programs in place (such as those following PCI, HIPAA, NIST and ISO standards), your path to compliance with the GDPR should not be terribly long. There will, however, be some added steps needed to meet the EU’s new requirements; steps that this blog is not designed to enumerate or counsel on.
It’s safe to say that data protection and privacy are concerns involving a combination of legal, governance, process, and technical considerations. Here is an interesting and helpful FAQ link on the General Data Protection Regulation policies.
Most of my customers represent enterprise organizations, which have a far-reaching base of clients and trading partners. They are the kinds of companies that touch sensitive information, are acutely aware of data security, and are likely to be impacted by the GDPR.
These enterprises leverage TxMQ for, among other things, expertise around Integration Technology and Application Infrastructure.
Internal and external system access and integration points are areas where immediate steps can be taken to enhance data protection and security.

Critical technical and procedural components include (but are not limited to):

  • Enterprise Gateways
  • ESBs and Messaging (including MQ and FTP – also see Leif Davidsen’s blog)
  • Application & Web Servers
  • API Management Strategy and Solutions
  • Technology Lifecycle Management
    • Change Management
    • Patch Management
    • Asset Management

The right technology investment, architecture, configuration, and governance model go a long way towards GDPR compliance.
Tech industry best practices should be addressed through a living program within any corporate entity. In the long run, setting and adhering to these policies protects your business and saves it money (through compliance and efficiency).
In short, GDPR has given North America another important reason to improve upon our data and information security.
It affects us, and what’s more, it’s just a good idea.