IBM MQ v8.x End of Support Options

Are you still running IBM MQ v8.0.x (or an even earlier release)?

IBM has announced that support for IBM MQ v8.0.x will end on April 30, 2020. If you are still using that version, we recommend upgrading to v9.1 to avoid the potential security issues that can arise in earlier, unsupported versions of MQ.
MQ v9.1 Highlights: 

  • IBM MQ Console
  • The administrative REST API
  • The messaging REST API
  • Improvements in error logging 
  • Connectivity to Salesforce with IBM MQ Bridge to Salesforce
  • Connectivity to Blockchain 

What are your plans for IBM MQ?

I plan to upgrade:
It’s never too late to start planning your upgrade, and moving to IBM MQ’s newest release, v9.1, is a great option. New features help you manage costs, improve efficiency, and simplify administration.
Take a closer look here at some of the enhancements.  
If you are still considering your plans, now’s a great time to speak with our SME Integration Upgrade Team. Reach out to us today to set up a free Discovery Session, or contact us directly with any questions.

I would like to continue to use v8.0.x (or earlier versions):

It’s OK if you’re not ready for the newest version of MQ just yet. However, it’s important to remember that running without support can expose you to avoidable security risks and additional support costs. IBM does offer extended premium support, but be prepared: that option can be very expensive.
Alternatively, as an IBM Business Partner, TxMQ offers expert support options. We have highly specialized skills in IBM software and can guide you through issues that arise at a fraction of the cost, with the added benefit of flexible services. Learn more on TxMQ’s Extended Support page (https://www.txmq.com/end-life-software-support/).

I will support it internally: 

Even if you have an amazing internal team, odds are they don’t have much time to spare. Putting the weight of a big project on your internal team can cut into their productivity, and for many organizations it limits the team’s ability to focus on innovation and improving the customer experience. That will make your competitors happy, but it certainly won’t please your customers and clients.
Utilizing a trusted partner like TxMQ can help cut costs and give your internal team back time to focus on improvements rather than break/fix maintenance. Reach out to our team today to learn how we can help maintain your existing legacy applications and platforms so your star team can focus on innovation again.

I don’t know, I still need help!

Reach out to TxMQ today and schedule a free Discovery Session to learn what your best options are! TxMQ.com/Contact/

John Carr Set to Present at IBM’s 2020 Integration Tech Conference


One of the best events of the year is just around the corner!
IBM’s 2020 Integration Technical Conference will be held March 16th through 19th at the Renaissance Dallas Hotel in Dallas, TX.

What is the Integration Technical Conference? 

The Integration Technical Conference was launched in 2019 as a replacement for the popular MQ- and integration-focused technical conference known as MQTC.
Last year’s inaugural Integration Tech Conference was one of the most highly acclaimed conferences of the year. It was praised for its strong technical focus, great presentations, and in-depth training opportunities, rather than being just another sales conference.
IBM is working even harder this year to improve on last year’s event and has officially confirmed the partners and subject matter experts who have been chosen to present.

We are proud to announce that TxMQ’s very own John Carr, a Sr. Integration Architect, has been selected to present “Practical IBM MQ Implementations for the Cloud: A Journey of DevOps from an IBM MQ admin”. You may have already caught some of John’s previous presentations, including “MQ Upgrade Best Practices” at the 2019 conference, or through one of our webinars (which you can view here).

This year John will be discussing migrating your IBM MQ network from on-prem to the cloud. For those already undertaking their own modernization efforts, this will be a great topic for discussion. The session will walk through a case in which TxMQ helped a mortgage services organization migrate its entire self-managed data center into the public cloud. You can learn more from the breakdown of the session abstracts here.

If you’re lucky enough to attend, your days are going to be full, so plan accordingly! John will present twice: on Day 3, Wednesday, March 18th at 9:50 AM, and again on Thursday, March 19th at 4:40 PM. Both sessions will be held in Salon F/G at the Renaissance Dallas Hotel. Lock it into your calendar so you don’t miss it!

If you are attending, don’t forget to stop by the TxMQ booth for some helpful guides, giveaways, and prizes. Also, please drop us a note at [email protected] or give us a call at 716.636.0070 to connect at the conference; we’d love to hear from you.

Have a great time! We’ll see you there.

Digital Transformation: When It Makes Sense and When It Doesn’t


This isn’t the first time I’ve written about digital transformation, nor is it likely to be the last.

Digital transformation has become a “must use” catchphrase for investor and analyst briefings and annual reports. Heaven help the foolish Fortune 500 company that fails to use the buzzword in their quarterly briefing. It’s the “keto diet” of the technology world.

It’s a popular term, but what does digital transformation really mean? 

Legacy debt. 

In a world of enterprises that have been around for longer than a few years, there is significant investment in legacy processes and technical systems (together, what we like to call legacy debt) that can inhibit rapid decision making. This is a combination of not just core systems but also decades-old processes and decision-making cycles: in other words, bureaucracy.

So why do we care about rapid decision making? Simply put, in years past, decisions were less consumer-driven and more company-driven, or dare I say it, focus-group driven.

Companies could afford to take their time making decisions because no one expected overnight change. Good things used to take a long time. 

We now live in a world where consumers demand rapid change and improvement to not just technology, but also processes. On a basic level, this makes sense. After all, who hasn’t had enough of poorly designed AI-driven, voice-activated phone trees when we just want to ask the pharmacist a question about our prescription refill? 

Too often, however, legacy debt leads to rushed implementations to meet customer demands, often with unintended (and catastrophic) consequences. Frequently this is the result of hastily built (or bought) point solutions. This is where disruptors (aka startup companies) pop up with quick, neat point solutions of their own to solve a specific problem: a better AI-driven phone solution, a cuter user interface for web banking, sometimes even a new business model entirely. Your CIO sees this in an article or at a conference and wonders, “why can’t we build this stuff in-house?”

Chasing the latest greatest feature set is not digital transformation. Rather, digital transformation begins with recognizing that legacy debt must be considered when evaluating what needs changing, then figuring out how to bring about said change, and how to enable future rapid decision making. If legacy systems and processes are so rigid or outdated that a company cannot implement change quickly enough to stay competitive, then, by all means, rapid external help must be sought. Things must change.

However, in many cases what passes for transformation is really just evolution. Legacy systems, while sometimes truly needing a redo, do not always need to be tossed away overnight in favor of the hottest new app. Rather, they need to be continually evaluated for better integration or modernization options, usually by better exposing back-end systems. Transformation is just another word for solving a problem that needs solving, not introducing a shiny object no one has asked for. Do our systems and processes, both new and old, allow us to operate as nimbly as we must to continue to grow, thrive, and meet our customer demands today and tomorrow?

The Steve Jobs effect

Steve Jobs once famously stated (it’s captured on video, so apparently it really happened), when asked why he wasn’t running focus groups to evaluate the iPod, “How would people know if they need it, love it or want it if I haven’t invented it yet?”

Many corporate decision-makers think they are capable of emulating Steve Jobs. Dare I say it, they are not, nor are most people. Innovating in a vacuum is a very tricky business. It’s best to let the market and our customers drive innovation decisions. Certainly, I advocate for healthy investment in research and development, yet too often innovation minus customers equals wasted dollars. Unless one is funding R&D for its own sake, certainly a worthy cause, one needs some relative measure of the value and outcomes around these efforts, which usually translates to marketability and, ultimately, profits.

Measurement

Perhaps the most often forgotten reality of our technology investments is understanding what the end goal, or end state, is, and measuring whether or not we accomplished what we set out to do. Identifying a problem and setting a budget to solve that problem makes sense. But failing to measure the effectiveness after the fact is a lost opportunity. Just getting to the end goal isn’t enough if, in the end, the problem we sought to solve remains, or, worse yet, we have created other, more onerous unintended consequences.

Digital transformation isn’t about buzzwords or “moving faster” or outpacing the competition. It’s all of that, and none of that at the same time. It’s having IT processes and systems that allow a firm to react to customer-driven needs and wants, in a measured, appropriate, and timely way. And yes, occasionally to try to innovate toward anticipated future needs.
Technology is just the set of tools we use to solve problems.

Does it answer the business case?

“IT” is never — or at least shouldn’t be — an end in itself: it must always answer to the business case. What I’ve been describing here is an approach to IT that treats technology as a means to an end. Call it “digital transformation,” call it whatever you want — I have no use for buzzwords. If market research informs you that customers need faster web applications, or employees tell you they need more data integration, then it’s IT’s job to make it happen. The point is that none of that necessitates ripping and replacing your incumbent solution.

IT leaders who chase trends or always want the latest platform just for the sake of being cool are wasting money, plain and simple. Instead, IT leaders must recognize legacy debt as the investment it is. In my experience, if you plug this into the decision-making calculus, you’ll find that the infrastructure you already have can do a lot more than you might think. Leverage your legacy debt, and you’ll not only save time delivering new products or services, but you’ll also minimize business interruption — and reduce risk in the process. 

That’s the kind of digital transformation I can get behind.

TxMQ’s Chuck Fried and Craig Drabik Named 2020 IBM Champions

More than 40 years ago, TxMQ was founded by veterans of IBM who believed in supporting mainframe customers through new solutions built for IBM products. We’ve come a long way since 1979: we’ve moved our headquarters from Toronto to the U.S., our leadership team has grown, and we continue to enhance our roster of services. And though our capabilities and products have advanced, we’ve still managed to maintain a close connection to our roots at IBM. Our mission has also remained the same: to empower companies to become more dynamic, secure and nimble through technology solutions.

This mission has helped us assemble a team of innovators who constantly strive to help our clients meet their business goals through technological advancements.

Chuck Fried and Craig Drabik are great examples of TxMQ’s consistent excellence in bringing the best solutions to our enterprise clients. They were recently named to IBM’s 2020 Class of Champions for demonstrating extraordinary expertise, support and advocacy for IBM technologies, communities and solutions. Champions are thought leaders in the technical community who continuously strive to innovate and support new and legacy IBM products. As IBM states, “champions are enthusiasts and advocates… who support and mentor others to help them get the most out of IBM software, solutions, and services.” Here, Chuck and Craig share what IBM and being named IBM Champions means to them:

IBM Champion of Cloud, Cloud Integration, and Blockchain

Chuck Fried
President, TxMQ

“I’ve been building technological solutions for over 30 years, and have worked with many large software and technology companies. As we help our clients evolve, I am constantly drawn back to IBM. They are thought leaders in the technology industry, bringing the best new software and services to the market. Working with them, we know that our clients are getting the best possible solution. I’m proud to continue advocating for their brand.”

IBM Champion of Blockchain

Craig Drabik
Technical Lead, Disruptive Technologies Group

“Although IBM is often associated with mainframe and legacy technologies, they offer so much more to the technology industry. Being named a Champion, when involved in disruptive technologies, proves this.  IBM is progressive and innovative, and strives to develop solutions for a range of products and industries. Working with IBM, we have access to world-renowned solutions that are trustworthy.”

As TxMQ builds new tools to support and grow the IBM ecosystem, having two Champions is a great achievement for our company. With this recognition, we can continue fostering our relationship with IBM and building life-changing technology for our customers.

Generating OpenAPI or Swagger From Code is an Anti-Pattern, and Here’s Why

(This article was originally posted on Medium.)

I’ve been using Swagger/OpenAPI for a few years now, and RAML before that. I’m a big fan of these “API documentation” tools because they provide a number of additional benefits beyond simply being able to generate nice-looking documentation for customers and keep client-side and server-side development teams on the same page. However, many projects fail to fully realize the potential of OpenAPI because they approach it the way they approach Javadoc or JSDoc: they add it to their code, instead of using it as an API design tool.

Here are six reasons why generating OpenAPI specifications from code is a bad idea.

You wind up with a poorer API design when you fail to design your API.

You do actually design your API, right? It seems pretty obvious, but in order to produce a high-quality API, you need to put in some up-front design work before you start writing code. If you don’t know what data objects your application will need or how you do and don’t want to allow API consumers to manipulate those objects, you can’t produce a quality API design.

OpenAPI gives you a lightweight, easy to understand way to describe what those objects are at a high level and what the relationships are between those objects without getting bogged down in the details of how they’ll be represented in a database. Separating your API object definitions from the back-end code that implements them also helps you break another anti-pattern: deriving your API object model from your database object model. Similarly, it helps you to “think in REST” by separating the semantics of invoking the API from the operations themselves. For example, a user (noun) can’t log in (verb), because the “log in” verb doesn’t exist in REST — you’d create (POST) a session resource instead. In this case, limiting the vocabulary you have to work with results in a better design.
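To make that concrete, here is a minimal, hypothetical OpenAPI 3 fragment that models “log in” as creating a session resource. The path, schema names, and fields are illustrative assumptions, not part of any real API:

```yaml
# Hypothetical sketch: "log in" expressed as creating a session resource.
openapi: 3.0.3
info:
  title: Example API
  version: 1.0.0
paths:
  /sessions:
    post:
      summary: Create a session (the REST equivalent of "logging in")
      operationId: createSession
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Credentials'
      responses:
        '201':
          description: Session created
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Session'
components:
  schemas:
    Credentials:
      type: object
      required: [username, password]
      properties:
        username:
          type: string
        password:
          type: string
          format: password
    Session:
      type: object
      properties:
        id:
          type: string
        expiresAt:
          type: string
          format: date-time
```

Nothing here says anything about how sessions are stored; the spec only describes the resource and the operations a consumer is allowed to perform on it.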

It takes longer to get development teams moving when you start with code.

It’s simply quicker to rough out an API by writing OpenAPI YAML than it is to start creating and annotating Java classes or writing and decorating Express stubs. All it takes to generate basic sample data out of an OpenAPI-generated API is to fill out the example property for each field. Code generators are available for just about every mainstream client and server-side development platform you can think of, and you can easily integrate those generators into your build workflow or CI pipeline. You can have skeleton codebases for both your client and server-side plus sample data with little more than a properly configured CI pipeline and a YAML file.
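As a rough illustration of how little it takes, here is a hypothetical schema fragment with example values filled in; most OpenAPI-based mock servers and generators can serve sample responses directly from these examples. The Order object and its fields are made up for the sake of the sketch:

```yaml
# Hypothetical schema: example values double as mock data for generated stubs.
components:
  schemas:
    Order:
      type: object
      properties:
        id:
          type: string
          example: ord-1001
        status:
          type: string
          enum: [pending, shipped, delivered]
          example: pending
        total:
          type: number
          format: float
          example: 42.5
```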

You’ll wind up reworking the API more often when you start with code.

This is really a side-effect of #1, above. If your API grows organically from your implementation, you’re going to eventually hit a point where you want to reorganize things to make the API easier to use. Is it possible to have enough discipline to avoid this pitfall? Maybe, but I haven’t seen it in the wild.

It’s harder to rework your API design when you find a problem with it.

If you want to move things around in a code-first API, you have to go into your code, find all of the affected paths or objects, and rework them individually. Then test. If you’re good, lucky, or your API is small enough, maybe that’s not a huge amount of work or risk. If you’re at this point at all, though, it’s likely that you’ve got some spaghetti on your hands that you need to straighten out. If you started with OpenAPI, you simply update your paths and objects in the YAML file and re-generate the API. As long as your tags and operationIds have remained consistent, and you’ve used some mechanism to separate hand-written code from generated code, all you’re left to change is the business logic and the mapping of the API’s object model to its representation in the database.
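For example, here is a hypothetical path item showing the kind of stable operationId and tag that make regeneration painless; as long as getOrderById keeps its name, moving the path or reshaping the parameters only touches the YAML and the generated layer, not your hand-written business logic:

```yaml
# Hypothetical path item: the operationId and tag stay stable across redesigns.
paths:
  /orders/{orderId}:
    get:
      tags: [Orders]
      operationId: getOrderById
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested order
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Order'
```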

The bigger your team, the more single-threaded your API development workflow becomes.

In larger teams building in mixed development environments, it’s likely you have people who specialize in client-side versus server-side development. So, what happens when you need to add to or change your API? Well, typically your server-side developer makes the changes to the API before handing it off to the client-side developer to build against. Or, you exchange a few emails, each developer goes off to do their own thing, and you hope that when everyone’s done the client implementation matches up with the server implementation. In a setting where the team reviews the proposed changes to the API before moving forward with implementation, you’re in a situation where code you write might be thrown away if the team decides to go in a different direction than the developer proposed.

It’s easy to avoid this if you start with the OpenAPI definition. It’s faster to sketch out the changes and easier for the rest of the team to review. They can read the YAML, or they can read HTML-formatted documentation generated from the YAML. If changes need to be made, they can be made quickly without throwing away any code. Finally, any developer can make changes to the design. You don’t have to know the server-side implementation language to contribute to the API. Once approved, your CI pipeline or build process will generate stubs and mock data so that everyone can get started on their piece of the implementation right away.
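As a sketch of what that pipeline step might look like (the tool choice, generator names, and file paths here are illustrative assumptions, not a prescription), a CI job can regenerate server and client stubs from the spec on every change:

```yaml
# Hypothetical GitHub Actions job: regenerate stubs whenever the API spec changes.
name: generate-api-stubs
on:
  push:
    paths:
      - api/openapi.yaml
jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # openapi-generator-cli is one of several generators that work this way.
      - run: npx @openapitools/openapi-generator-cli generate -i api/openapi.yaml -g spring -o generated/server
      - run: npx @openapitools/openapi-generator-cli generate -i api/openapi.yaml -g typescript-fetch -o generated/client
```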

The quality of your generated documentation is worse.

Developers are lazy documenters. We just are. If it doesn’t make the code run, we don’t want to do it. That leads us to omit or skimp on documentation, skip the example values, and generally speaking weasel out of work that seems unimportant, but really isn’t. Writing OpenAPI YAML is just less work than decorating code with annotations that don’t contribute to its function.

IBM Db2 v10.5 End of Support: April 30, 2020

Are you still running IBM Db2 v10.5 (or an even earlier release)?

IBM has announced that support for Db2 v10.5 for Linux, UNIX, and Windows will end on April 30, 2020. If you are still using that version, we recommend upgrading to v11.1 to avoid the potential security issues that can arise in earlier, unsupported versions of Db2.
Db2 v11.1 Highlights:

  • Column-organized table support for partitioned database environments
  • Advances to column-organized tables
  • Enterprise encryption key management
  • IBM Db2 pureScale Feature enhancements
  • Improved manageability and performance
  • Enhanced upgrade performance from earlier versions 

What are your plans for Db2?

I plan to upgrade:

It’s never too late to start planning your upgrade, and moving to IBM Db2’s newest release, v11.1, is a great option. New features help you manage costs, improve efficiency, and simplify administration.
Take a closer look here at some of the enhancements.  
If you are still considering your plans, now’s a great time to speak with our SME Integration Upgrade Team. Reach out to us today to set up a free Discovery Session, or contact us directly with any questions.

I would like to continue to use v10.5 (or earlier versions):

It’s OK if you’re not ready for the newest version of Db2 just yet. However, it’s important to remember that running without support can expose you to avoidable security risks and additional support costs. IBM does offer extended premium support, but be prepared: that option can be very expensive.
Alternatively, as an IBM Business Partner, TxMQ offers expert support options. We have highly specialized skills in IBM software and can guide you through issues that arise at a fraction of the cost, with the added benefit of flexible services. Learn more on TxMQ’s Extended Support page.

I will support it internally:

Even if you have an amazing internal team, odds are they don’t have much time to spare. Putting the weight of a big project on your internal team can cut into their productivity, and for many organizations it limits the team’s ability to focus on innovation and improving the customer experience. That will make your competitors happy, but it certainly won’t please your customers and clients.
Utilizing a trusted partner like TxMQ can help cut costs and give your internal team back time to focus on improvements rather than break/fix maintenance. Reach out to our team today to learn how we can help maintain your existing legacy applications and platforms so your star team can focus on innovation again.

I don’t know, I still need help!

Reach out to TxMQ today and schedule a free Discovery Session to learn what your best options are! TxMQ.com/Contact/

DLT Applications: Tracking medication through the healthcare supply chain

This article was originally published by MCOL.com on 12/19. 

It’s no secret that we have a dangerous opioid epidemic in the United States, as well as in many other parts of the world. Efforts to address the issue have come from both industry and government entities alike. In 2017, there were 47,600 overdose deaths in the U.S. involving opioids, which led to the U.S. Department of Health & Human Services (HHS) declaring a healthcare crisis. In April 2017, HHS outlined an Opioid Strategy, which included, among other components, the desire to create methods to strengthen public health data reporting and collection to inform a real-time public health response as the epidemic evolves.

Opioids are strong pain medications that mimic the pain-reducing qualities of opium, and when used improperly, are extremely dangerous and highly addictive. The increasing epidemic has highlighted the need for organizations to keep secure, reliable and actionable product lifecycle data, ensuring that they can track the entire supply chain for sensitive medications. In addition to meeting regulatory compliance requirements, cost and efficiency benefits may also be realized through tighter tracking and better data. Most importantly, it can help to cut down on the lives that are lost because of opioids and other medications being misused.

Healthcare Supply Chains

When discussing technology integration in a highly regulated industry like healthcare, it is hard to find solutions that both reduce costs and improve efficiency while still maintaining high levels of security and usability. This is why many healthcare organizations are turning toward supply chain management for new solutions: it still improves efficiency and cost, but it rarely involves personal health information, making it easier to satisfy regulatory requirements. Solutions that use blockchain or distributed ledger technology can also provide immutable data and analytics, which can address suppliers’ fears of being hacked or losing sensitive proprietary information. On top of that, supply chain management can provide measurable results to healthcare organizations to ensure that the solution is working effectively. In a 2018 Global Healthcare Exchange survey, nearly 60 percent of respondents said that data and analytics improvements were their highest priority. Supply chain management has many benefits for healthcare organizations, without having to work around highly regulated and secure data.

Supply chain management involves tracking supplies from the distributors or manufacturers, all the way through the healthcare organization to the patients receiving the medication or supplies. Many organizations still track supplies by hand, which can result in high margins of error. Also, many healthcare management systems are not integrated with each other, which means that patients can take advantage of these systems and access dangerous medications more easily. As healthcare systems move beyond hospitals and into non-acute sites, supply chain management becomes increasingly complex and difficult to manage. With supply chain management, healthcare organizations can track down errors and find out who made the error and when. When prescription drugs are involved, this would include knowing which patients, physicians, or prescribers are abusing the system by accessing more pain medications and opioids than they actually require or by over-prescribing more than should be allowed. This can help end addictions and overdoses.

Distributed Ledger Technology Solution

Accurate, timely information is critical in any supply chain. In the pharmaceutical industry, regulatory oversight and the potential for serious consequences for patients make supply chain traceability even more important. The ability to assess the behavior of the participants—patients, providers, distributors, manufacturers and pharmacists—within the supply chain is a useful tool in the battle against substance abuse. Developing a controlled substance management system as a robust, compliant supply chain management solution can help to track the movement of orders and medications through the pharmaceutical supply chain, from manufacturer and distributor to pharmacy and patient. Participants generate activity daily by consuming medication and refilling their prescriptions when they run out. Similarly, pharmacies and distributors place orders when supplies run low. Building this solution on a distributed ledger technology such as Hashgraph allows for increased security, immutable, time-stamped data, fast throughput, and easy customization to meet the needs of healthcare providers. It can even be customized to flag violations of laws or best practices, such as refilling prescriptions too often or over-prescribing.

Distributed ledger solutions can enforce rules on each participant with regard to the amount of medication that can be consumed, manufactured, distributed, or prescribed. Patients’ refills can be limited based on their needs. Physicians, pharmacies, and distributors have limits on the amount of medication they can prescribe or order in a given period of time, to ensure that they are not abusing the system either. Participants who exceed these set limits are flagged by the system and can be removed, meaning that they are no longer able to order, prescribe, or refill specific substances. This system can track a number of elements or components; in this case, it could track the distributor, the manufacturer, the prescribing physician, the pharmacy, the medication or opioid, and the patient. Time-stamped, immutable data allows healthcare organizations to easily see when an error or an abuse of the system took place.
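As a purely illustrative sketch (not tied to Hashgraph or any specific product), the kinds of rules described above could be captured as a policy configuration that each node evaluates before accepting a transaction; every participant type, limit, and action below is a made-up example:

```yaml
# Hypothetical policy sketch: per-participant limits checked before a transaction is accepted.
controlled_substance: oxycodone-5mg
rules:
  - participant: patient
    limit: 1 refill per 30 days
    on_violation: flag_and_notify_prescriber
  - participant: prescriber
    limit: 120 prescriptions per 30 days
    on_violation: flag_for_review
  - participant: pharmacy
    limit: 5000 units ordered per 30 days
    on_violation: flag_for_review
  - participant: distributor
    limit: 250000 units shipped per 30 days
    on_violation: suspend_ordering
audit:
  record: [timestamp, participant_id, transaction_id, quantity]
  immutable: true
```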

Distributed ledger solutions are built on a system of nodes and each node processes each transaction. Each record or transaction is signed using the signature of the previous transaction to guarantee the integrity of the chain or the ledger. This means that these systems are difficult to breach or hack. Although supply chain management does not directly use confidential patient health information, it is important that all solutions that are integrated into a healthcare system are secure to ensure that data cannot be manipulated, allowing for further abuse of dangerous medications.

Finding a Solution

To save lives, it is imperative to find effective solutions to the issues facing healthcare and the opioid epidemic. Unfortunately, within this industry, it can be hard to innovate due to privacy concerns and regulations. Distributed ledger technology has the chance to innovate and potentially save lives when implemented as a sensitive-medication supply chain management system. Its high security, transparency, and immediate auditability make it an effective solution for tracking how harmful medications are being abused and for putting an immediate stop to these issues. The technology already exists to solve these problems; it is only a matter of the healthcare industry taking these solutions seriously and implementing them before more lives are lost.

 

Bringing Offshore Contracts Back Onshore


Although many companies rely on offshore technology teams to save costs and build capacity, there are still many challenges around outsourcing. Time zone issues, frequent staff turnover, difficulty managing workers, language barriers: the list goes on and on. Offshore workers can allow companies to save money. But what if offshore pricing were available for onshore talent? What if the best of both worlds, an easily managed workforce at a competitive cost, were possible? In fact, it is.

For all the pains and issues related to building global technology teams, outsourcing remains a viable option for many companies that need to build their engineering groups while controlling costs. With the U.S. and Europe making up almost half of the world’s economic output, but only 10% of the world’s population, it’s no secret that some of the world’s best talent can be found in other countries. That’s how countries such as India, China, and Belarus have become global hubs for engineering. And why not? They have great engineering schools, low costs of living, and large numbers of people who are fully qualified to work on most major platforms.

Reinventing Outsourcing 

This is basic supply and demand: companies want to hire people at as competitive a price point as possible without sacrificing quality. This is exactly how Bengaluru and Pune became technology juggernauts in the 1990s, and how Minsk became a go-to destination a decade later. The problem, of course, is that what was once a well-kept secret became well known…and wages started creeping up.

With salaries increasing in countries that have typically supplied offshore talent, the cost of offshore labor is on the rise. In India, a traditional favorite for offshore work, annual salaries have been rising steadily by 10% since 2015, making it less beneficial for companies to hire workers there. In fact, in one of the biggest outsourced areas, call centers, workers in the U.S. earn on average only 14% more than outsourced workers. In the next few years, the gap will be narrow enough that setting up a call center in Ireland or India just won’t make sense. What the laws of supply and demand can give, they can also take away. That’s why “outsource it to India” is no longer an automatic move for growing technology companies, financial institutions, and other businesses looking to rapidly grow their teams. It’s also why major Indian outsourcing companies such as Wipro and Infosys are diversifying into other parts of the world.

As political and economic instability grow, moving a company’s outsourced work domestically can help to mitigate the risks of an uncertain landscape. A perfect example of this is China. Hundreds of American companies have set up development offices in China to take advantage of a skilled workforce at a low price point. So far so good, right? Well, not really. Due to concerns about cybersecurity and intellectual property theft, companies such as IBM have mandated that NONE of their code can come from China. All of a sudden, setting up shop in Des Moines is a lot more attractive than going to Dalian.

The federal government, as well as many states and municipalities, is also playing an active role in keeping skilled technology jobs at home through grants and tax breaks. New programs and training schools are also emerging, helping to build talent in the U.S. at a lower cost and helping companies take advantage of talented workers outside of large cities, where the cost of living is lower. Hiring 100 engineers in midtown Manhattan might not be cost-effective, but places like Phoenix and Jacksonville allow companies to attract world-class talent without breaking the bank.

This doesn’t mean the end of offshoring, of course. When looking for options to handle mainframe support and legacy systems services, including SPARC, AIX, HP-UX, and lots of back-leveled IBM, Oracle, and Microsoft products, the lure of inexpensive offshore labor often wins. Unlike emerging technologies, legacy systems do not require constant updates and front-end improvements to keep up with competitors, so the typical issues that affect offshore outsourcing aren’t as big a problem when legacy systems are involved. So where does it make sense to build teams, or hire contractors, domestically?

Domestic Offshoring (sometimes called near-shoring)

There is a key difference between outsourcing development to overseas labs and building global teams, but the driving force behind both approaches is pretty much the same: cut costs while preserving quality. Working with IT consulting and staffing companies like TxMQ is a prime example of how businesses can take advantage of onshore outsourcing without going into the red. Unlike technology hubs such as Silicon Valley, these companies are typically located in areas such as the Great Lakes region, where outstanding universities (and challenging weather!) yield inexpensive talent thanks to lower living costs. With aging populations creating a need for skilled workers in the eastern United States, more states are introducing incentives to attract workers. This is already creating an advantage for companies that provide outsourced staffing, because they can charge lower prices than traditional technology hubs. It’s the perfect mix of ease, quality, and cost.

Global 2000 companies face challenges resulting from their large legacy debt, and the costs to support their systems are high. As they struggle to transform and evolve their technology to today’s containerized, API-enabled, microservices-based world, they need lower-cost options to both support their legacy systems and build out new products.

While consulting and staffing companies are well known for transformational capabilities and API enablement, there are other advantages that aren’t as well known. Beyond these transformational services, many companies also support older, often monolithic applications, including those likely to remain on the mainframe forever. From platform support on IBM Power systems to complete mainframe management and full support for most back-leveled IBM products, companies like TxMQ have found a niche providing economical support for enterprise legacy systems, including most middleware products of today and yesterday. This allows companies to invest properly in their enterprise transformation while maintaining their critical legacy systems.

The Future of Work

In a 2018 study of IT leaders and executives, more than 28 percent planned to increase their onshore spending in the next year. With the ability to move work online, companies can support outsourced teams easily, whether onshore or offshore. As age-old issues such as time zone differences and language barriers persist, and as the pay gap between the U.S. and other nations closes, employing the talents of outsourced workers onshore can help companies benefit from outsourcing without having to fly 15 hours across two oceans to do it.

Tackling digital transformation proactively — before a crisis hits


This article was originally published by CIO Dive on 12/9/19.

Digital transformation is the headline driver in most enterprises today, as companies realize that in order to stay relevant, engage current and new customers and thrive, they need to constantly reevaluate their technology stack.

Unfortunately, real transformation is an intensive process that is neither easy nor smooth. Digital transformation tends to take place reactively, whether it’s in response to losing a customer as new competition arises or as a way to manage issues.

But reactive digital transformation tends to be unsustainable because without a real strategy no one knows what they are trying to change and why. The “why” needs to evolve from what the customer expects and what can be implemented in order to retain business.

In a data-driven age, digital transformation needs to ensure that customers can access what they need, when, how and where they want, while still keeping their information safe. The fundamental structure of how data is gathered, compiled, used and distributed needs to evolve.

As new competitors arrive with the latest, greatest, new-age disruptive technology, companies must deal with their legacy debt and the associated costs that prohibit quick upgrades to systems and processes (not to mention the critical retraining of staff).

CIOs are aware that this change is necessary and 43% think legacy and core modernization will have the largest impact on businesses over the next three years. But knowing it’s needed and knowing how to prepare require different mindsets.

In order to ensure they are addressing change effectively, CIOs need to prepare for digital transformation proactively — before a crisis arises.

Proactive digital transformation: The cost of legacy debt

There are three types of debt impacting most companies: process, people and technical.

Process debt refers to the lack of the appropriate frameworks needed to operate modern systems; it is highly important to address because it has a cascading effect on other types of debt.

A new type of workforce is needed to negotiate emerging trends and new technologies, and a lack of highly skilled workers is referred to as people debt. Similarly, existing staff must be retrained on new technologies and processes.

Technical debt involves legacy software, computers and tools that have become outdated, and are expensive to manage or upgrade. Companies often find the cost of maintaining legacy technologies is so great it diverts resources away from modernizing, evolving or adapting the business.

The cost of legacy debt is more than just the amount of money and time it will take to upgrade to the latest solutions. It impacts all parts of the business, including the productivity of the company.

Companies with high technical debt are 1.6 times less productive than companies with low or no legacy debt. Upgrading is clearly necessary to drive better business results.

In order to address all types of debt, companies need to foster an environment where people are willing to learn and update processes in order to modernize.

Preparing for a digital transformation

Some 20% of companies fail to achieve their digital transformation goals, so preparation is key to finding success. Here are the integral steps companies can take to prepare for the transformation.

1. Involve executives

Although the CIO may be spearheading the digital transformation initiative, all C-level executives should be involved from the start to align goals.

One must establish a consistent and clear story across the organization and ensure that all executives are prepared to communicate this across departments. Ensure alignment on objectives and create a clear path to success by creating goals for each department that focus on transformation.

People debt can be a huge barrier to successful digital transformation. To help mitigate this issue, offer leadership development opportunities that focus on knowledge for digital transformation and coaching programs to help manage employees in their new mode of working.

It also may be necessary to redefine roles at the organization to support digital transformation. This can help clarify what the roles will look like in the digital-first environment. Companies that integrate this practice in their digital transformation plan are 1.5 times more likely to succeed. Culture, after all, starts at the top of all organizations.

2. Define the customers’ needs

Although digital transformation manifests as a technological change, the real drivers for the change are customers’ needs.

Among obvious concerns customers express are security, mobile and digital experiences, and digital support. Conduct research and speak with the entire organization to identify pain points internally and externally that are impacting the customer experience.

Currently, only 28% of organizations are starting their digital transformation initiatives with clients as a priority. By placing emphasis on customers’ needs and looking for solutions that directly impact their experiences, the enterprise will have a unique advantage over other companies that are working on digital transformation.

3. Break down silos

Although not all departments may reap the benefits of digital transformation at once, if they understand that the client needs come first and feel as though they have input on how to create change, there is a better chance that the transformation will be smooth. Collaboration on the unified vision will be key to supporting the goals of the company.

Employees, much like executives, will also have to understand that they will be asked to work in new ways. Emphasize agility in the work environment; the ability for employees to adapt and change will be pivotal to the success of the company. Also, encourage employees to find new ways to work that support the path to digital transformation.

4. Break down goals

Implementing an entirely new digital strategy can be overwhelming. Discuss and decide upon key priorities with the organization and with stakeholders.

From there, break down those goals into smaller stepping stones that are easily achievable and work towards the overall goals of the transformation. This way, everyone knows the task at hand and can focus on achieving those smaller goals.

Communicate these small victories with the team to raise morale and ensure that they know that, although the goals are lofty, they are achievable.

Finding success

Over the years, $4.7 trillion has been invested across all industries in digital transformation initiatives, yet only 19% of customers report seeing the results of these transformations. This is because companies are failing to consider a key element of transformation: putting the customers’ needs first.

In order to retain clients and improve client satisfaction, CIOs need to have a plan in place that addresses customer concerns. Successful digital transformation will not come from a moment of panic: it requires proactive preparation.

Hedera Hashgraph – A Quick and Very Simple Explanation

What is Hedera Hashgraph?

Hedera Hashgraph is a distributed ledger technology (DLT) and consensus algorithm. Even though Hashgraph is actually a DAG (directed acyclic graph), it has been referred to as “Blockchain on Steroids” and “Blockchain 2.0” because it addresses many of the issues most DLTs currently face with broader adoption.

Why do we need another DLT?

Hedera Hashgraph is a fast, fair, and secure infrastructure for running decentralized applications, or DApps. The technology is ridiculously fast, offers high throughput (with the potential for over one million transactions per second), is asynchronous Byzantine Fault Tolerant (a what?), and is the only DLT with a mathematically proven consensus mechanism.

Quick answer: it works, it’s secure, and it fixes the issues that have held blockchain back from becoming a viable enterprise-grade technology since its inception.

Want to learn more about Hashgraph and other DLTs? Reach out below and let us know. We’d love to talk about how to use distributed ledger technologies to disrupt your industry by turning your use case into a fully functioning decentralized application.