Why ETL Tools Might Hobble Your Real-Time Reporting & Analytics

Companies report large investments in their data warehousing (DW) and business intelligence (BI) systems, with a large portion of the software budget spent on extract, transform and load (ETL) tools. Those tools make it easy to populate the warehouse and to manipulate data to map to the new schemas and data structures.

Companies do need DW or BI for analytics on sales, forecasting, stock management and much more, and they certainly don’t want to run those additional analytics workloads on top of already taxed core systems. Keeping the workloads isolated is a solid architectural decision. Consider, however, the extended infrastructure and support costs of managing a new ETL layer: It’s not just the initial build that demands effort. There’s the ongoing scheduling of jobs, on-call support when jobs fail, the challenge of an ever-shrinking batch window in which jobs can run without affecting other production workloads, and other considerations that make the initial warehouse implementation expense look like penny candy.

So not only do such systems come at extravagant cost, but to make matters worse, the vast majority of ETL jobs run overnight. That means, in most cases, the best possible scenario is that you’re looking at yesterday’s data, not today’s. All your expensive reports and analytics are at least a day behind what you require. What happened to the promised near-realtime information you were expecting?
Contrast the typical BI/DW architecture with the better option: building out your analytics and report processing using realtime routing and transformation with tools such as IBM MQ, DataPower and Integration Bus. Most of the application stacks that process this data in realtime already have all the related keys in memory – customer numbers or IDs, order numbers, account details, etc. – and are using them to create or update the core systems. Why duplicate all of that again in your BI/DW ETL layer? If you do, you’re dependent on ETL jobs going into the core systems to find what happened during that period and extracting all that data again to put it somewhere else.

Alongside this, most organizations are already running application messaging and notifications between applications. If you have all the data keys in memory, use a DW object, method, function or macro to drop the data as an application message into your messaging layer. The message can then be routed to your DW or BI environment for transformation and loading there – no extraction needed – and you can get rid of your ETL tools.
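As a minimal sketch of that pattern – assuming a Python application and the open-source pymqi client, with hypothetical queue-manager, channel and queue names – the drop is a handful of lines:

import json
import pymqi

# Hypothetical connection details - substitute your own queue manager, channel and host.
qmgr = pymqi.connect('QM1', 'APP.SVRCONN', 'mqhost(1414)')

# The keys the application already holds in memory while updating the core systems.
event = {'customer_id': 'C1001', 'order_number': 'ORD-42', 'status': 'CREATED'}

# One application message into the messaging layer - no later extraction required.
queue = pymqi.Queue(qmgr, 'DW.INBOUND.EVENTS')
queue.put(json.dumps(event))
queue.close()
qmgr.disconnect()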

Simplify your environments and lower the cost of operation. If you have multiple DW or BI environments, use the pub/sub capabilities of IBM MQ to distribute the message. You may be trading a nominal increase in CPU for the elimination of problems, headaches and costs across your DW and BI estate.
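As an illustrative MQSC sketch (topic, queue and subscription names are invented), one administrative subscription per environment means a single publish feeds every warehouse:

* One topic, one subscription per DW/BI environment
DEFINE TOPIC('DW.EVENTS') TOPICSTR('dw/events')
DEFINE QLOCAL('DW1.EVENTS.Q')
DEFINE QLOCAL('BI1.EVENTS.Q')
DEFINE SUB('DW1.EVENTS.SUB') TOPICOBJ('DW.EVENTS') DEST('DW1.EVENTS.Q')
DEFINE SUB('BI1.EVENTS.SUB') TOPICOBJ('DW.EVENTS') DEST('BI1.EVENTS.Q')

Every message published on the dw/events topic string now lands on both queues, with no change to the publishing application.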

Rethinking your strategy in terms of EAI while removing the whole process and overhead of ETL may indeed bring your business analytics to the near-realtime reporting you expected. Consider that your strategic payoff. Best regards in your architecture endeavors!
Image by Mark Morgan.

How Do I Support My Back-Level IBM Software (And Not Blow The Budget)?

So you’re running outdated, obsolete, out-of-support versions of some of your core systems? WebSphere MQ maybe? Or WebSphere Process Server or DataPower… the list is endless.
Staff turnover may be your pain point – a lack of in-house skills – or maybe it’s a lack of budget to upgrade to newer, in-support systems. A lot of the time it’s just a matter of application dependencies, where you can’t get something to work in QA and you’re not ready to migrate to current versions just yet.
The problem is that management requires you to be under support. So you get a quote from IBM to support your older software, and the price tag is astronomical – not even in the same solar system as your budget.
The good news is you do have options.

We were able to offer a 6-month support package, which eventually ran 9 months in total. Total cost was under $1,000 a month.

Here at TxMQ, we have a mature and extensive migration practice, but we also offer 24×7 support (available as either 100% onshore, or part onshore, part offshore) for back-level IBM software and product support – all at a fraction of IBM rates.
Our support starts at under $1,000 a month and scales with your size and needs.
TxMQ has been supporting IBM customers for over 35 years. We have teams of architects, programmers, engineers and others across the US and Canada supporting a variety of enterprise needs.
Case Studies
A medical-equipment manufacturer planned to migrate from unsupported versions of MQ and Message Broker. The migration would run 6 to 9 months past end-of-support, but the quote from IBM for premium support was well beyond budget.
The manufacturer reached out to TxMQ and we were able to offer a 6-month support package, which eventually ran 9 months in total. Total cost was under $1,000 a month.
Another customer (a large health-insurance payer) faced a similar situation. This customer was running WebSphere Process Server, ILOG, Process Manager, WAS, MQ, WSRR, Tivoli Monitoring, and outdated DataPower appliances. TxMQ built and executed a comprehensive “safety net” plan to support this customer’s entire stack during a very extensive migration period.
It’s never a good idea to run unsupported software – especially in these times of highly visible compliance and regulatory requirements.
In addition to specific middleware and application support, TxMQ can also work to build a compliance framework to ensure you’re operating within IBM’s license restrictions and requirements – especially if you’re running highly virtualized environments.
Get in touch today!
(Image from Torkild Retvedt)

IBM MQ For Click And Collect Retail

Few industries have gone through as much change as retailing. Every retailer in business today is dynamic. That’s the only way to survive – especially because the space continues to change rapidly with innovations such as multi-channel retailing, leaner and more responsive inventory management, and the new click-and-collect phenomenon.
Click and collect is sometimes called “click and mortar” because a customer shops and buys online, but opts to collect the order in-store. It beats waiting (and paying) for the postman, it saves browsing time in the store, and shoppers can quickly research products and reviews.
Click and collect is scaling rapidly, but in order to offer the service, both the retailer and the customer must be able to access real-time stock and delivery schedules. Then, the item must be reserved for the customer at the instant of purchase.

…MQ Advanced is a mobile enabler, because of its quality-of-service message delivery, as well as the baked-in, lightweight MQTT protocol to support always-on push notifications that don’t hog battery or data.

That’s where IBM MQ for click and collect comes in: MQ Advanced supports more reliable asynchronous queries against stock information at every store in the network. Stock is thus updated accurately and reliably in real-time via transaction coordination. And remember that many customers will be accessing the storefront via mobile, which creates its own set of problems. But MQ Advanced is a mobile enabler, because of its quality-of-service message delivery, as well as the baked-in, lightweight MQTT protocol to support always-on push notifications that don’t hog battery or data. (MQTT was originally designed for small, unreliable sensors for things like oil pipelines and machinery. It was adopted as the open Internet of Things standard and also powers Facebook Messenger.)
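To make that concrete, here’s a hypothetical sketch using the open-source Eclipse Paho Python client (v1 API) – the broker address and topic scheme are assumptions, not an IBM-prescribed design – in which an in-store app subscribes to stock updates:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # e.g. payload b'{"sku": "12345", "store": "0042", "qty": 3}'
    print(msg.topic, msg.payload)

client = mqtt.Client(client_id='store-0042-app')
client.on_message = on_message
client.connect('mqtt.example.com', 1883)
client.subscribe('retail/stock/0042/#', qos=1)  # QoS 1: delivered at least once
client.loop_forever()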
I should also point out that retail is a great example of an industry sector that needs bulletproof, foolproof IT spread across geography. The same system must run in every store, it must run well, and it must be accessible by folks who aren’t always tech savvy. This has generally led to file-based solutions for retail IT, and the resultant need for batch processing. But today’s systems, as noted above, must trickle-update based on real-time transactional data. That means stores must process data more quickly and move the data into the enterprise much more rapidly, and central systems must update in real-time. The answer, again, is already within MQ Advanced.
Managed File Transfer is native inside MQ Advanced, which means point-of-sale files can move into the enterprise over an IBM MQ network with secure, reliable, traceable, guaranteed delivery.
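Illustratively – the agent, queue-manager and path names below are hypothetical – a store-to-enterprise file movement becomes a single MFT command:

fteCreateTransfer -sa STORE0042.AGENT -sm STORE0042.QM -da DW.AGENT -dm CENTRAL.QM -df /dw/inbound/pos_store0042.csv /pos/outbox/pos_store0042.csv

The transfer is logged, integrity-checked and auditable end to end, with no homegrown scripting.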
According to IBM, a large US grocery retailer batch-processed its data, which delayed analysis and made it difficult to detect theft. Using IBM MQ, IBM MQ Managed File Transfer and MQTT, the company’s data warehouse now receives near-real-time transaction data from 2,400 stores.
TxMQ is widely regarded as one of the premier MQ solutions shops in North America. We’ve been in business since 1979. Our customers typically start with our free, no-obligation discovery session. Want to improve your enterprise application performance? Want to know about how MQ is driving the digital economy, and how you can climb aboard? Let’s get the conversation started. Click Here For Our Free Discovery Session Offer.
(Image from DaveBleasdale)

How IBM MQ v8 Powers Secure Cloud Integration

In this quickly growing digital economy, where demands on things like security, cloud and mobility keep increasing, IBM MQ has been growing to meet them. To pick two of the three topics, MQ v8 can deliver secure cloud integration straight out of the box.
It’s important to know what type of cloud we’re really talking about. Are you talking about moving all of your services into the cloud – even your virtual desktops? Or are you talking about a hybrid cloud, with a mix of cloud computing supplementing your own services? Or are you talking about a private cloud, where you’ll have segments of internal computing services totally isolated from general services? There are different considerations for each scenario.
Regardless of the type of cloud-computing services you’re using, you still need to integrate these services, and you really need to ensure that your integration has security, data integrity and the capability of sending messages once and once only, with assured delivery. Cloud can’t provide that. MQ can and does. And it does it out of the box, with several recent enhancements to ensure secure integration.
With the digital economy, we’re all sharing all this data, including personal, banking and health data. We need to keep this data secure when it’s being shared, and control who has access to it. Then of course there’s the large compliance piece we need to meet. How does MQ meet all these demands? The answer is authentication, and MQ’s solution is still the same as being asked for proof of ID at the post office when you go to pick up a package. MQ v8 has been enhanced to support full user authentication right out of the box. No more custom exits and plugins.
For distributed platforms, you have local OS authentication, or you can go to a centralized repository such as LDAP. For z/OS you’re still focused on local authentication.
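As a sketch, switching on that out-of-the-box authentication for a distributed queue manager takes three MQSC commands (the AUTHINFO object name is illustrative; client applications then supply their credentials via MQCSP):

DEFINE AUTHINFO('USE.PWD') AUTHTYPE(IDPWOS) CHCKCLNT(REQUIRED) CHCKLOCL(OPTIONAL)
ALTER QMGR CONNAUTH('USE.PWD')
REFRESH SECURITY TYPE(CONNAUTH)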
And this next point is important: MQ has for quite some time supported certificate authentication of applications connecting to MQ services. But this always meant the queue manager’s single certificate – and its public key – had to be shared with everyone. MQ has now been enhanced to support multiple certificates for authentication, securing of connections and encryption, using separate key pairs. MQ still supports SSL and TLS, although there are strong recommendations to switch from SSL to TLS because of the POODLE vulnerability.
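For example (the channel name and certificate labels are illustrative), a queue manager can now present a different certificate on a particular channel than the one it presents by default, and pin that channel to a TLS CipherSpec:

ALTER QMGR CERTLABL('qmgrDefaultCert')
ALTER CHANNEL('PARTNER.SVRCONN') CHLTYPE(SVRCONN) CERTLABL('partnerCert') SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)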
(Image from mrdorkesq)

Rigorous Enough! MQTT For The Internet Of Things Backbone

The topic of mobile devices and mobile solutions is a hot one in the IT industry. I’ll devote a series of articles to exploring and explaining this very interesting topic. This first piece focuses on MQTT for the Internet of Things – a telemetry protocol originally developed by IBM.
MQTT provides communication in the Internet of Things – specifically, between the sensors and actuators. What makes MQTT unique is that, unlike several other communication standards, it’s rigorous enough to cope with low bandwidth and poor connectivity and still provide a well-behaved message-delivery system.
Within the Internet of Things there’s a universe of devices that provide inter-communication. These devices and their communications are what enables “smart devices,” and these devices connect to other devices or networks via different wireless protocols that can operate to some extent both interactively and autonomously. It’s widely believed that these types of devices, in very short time, will outnumber any other forms of smart computing and communication, acting as useful enablers for the Internet of Things.
MQTT architecture is publish/subscribe and is designed to be open and easy to implement, with a single server capable of supporting thousands of remote clients. From a networking standpoint, MQTT operates over TCP, which (unlike UDP) lends stability to message delivery because it’s connection-oriented. Unlike the typical HTTP header, the MQTT fixed header can be as little as 2 bytes, and those 2 bytes can store all of the information required to maintain a meaningful communication; an optional variable-length header and payload can follow. The 2 bytes of the standard header carry such information as the message type, the QoS level, and the retain flag.
The quality-of-service parameters control the delivery of the message to the repository or server. The options are:

Quality-Of-Service Option   Meaning
0   At most once
1   At least once
2   Exactly once

These quality-of-service options control the delivery to the destination. The first 4 bits of the first header byte carry the message type, which defines who’ll be interested in receipt of the message – a publish to a topic, for instance, which in turn manages who receives it. The last element is the retain flag, which, a little like persistence in MQ, determines whether the broker should hold on to the message for future subscribers. A separate clean-session option, set when a client connects, goes a step further: it tells the broker whether to retain that client’s subscriptions and undelivered messages between sessions.
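A brief sketch with the Eclipse Paho Python client (v1 API) shows where these options surface in practice; the broker address and topic are hypothetical:

import paho.mqtt.client as mqtt

# clean_session=False asks the broker to keep this client's subscriptions
# and undelivered QoS 1/2 messages between connections.
client = mqtt.Client(client_id='sensor-17', clean_session=False)
client.connect('broker.example.com', 1883)

# qos=1: at least once; retain=True: the broker holds the latest reading for new subscribers.
client.publish('plant/line1/temperature', payload='21.7', qos=1, retain=True)
client.disconnect()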
In my next blog I’ll discuss the broker or repository for these messages. There are several repositories that can be used, including MessageSight and Mosquitto among others. The beauty of these repositories is their stability.
(Photo by University of Liverpool Faculty of Health & Life)

MQ In The Cloud: How (Im)Mature Is It?

Everyone seems to have this concept that deploying all of your stuff into the cloud is really easy – you just go up there, set up a VM, install your data and you’re done. And when I say “everyone” I’m referring to CIOs, software salespeople, my customers and anyone else with a stake in enterprise architecture.
When I hear that, I immediately jump in and ask: Where’s the integration space in the cloud today? Remember that 18 or 20 years ago we were putting up 2- and 3-tier application stacks in datacenters. They were quite simple to put up, but there was very little or no integration going on. The purpose was singular: Deal with this application. If we’d had a bit more foresight, we’d have done things differently. And I’m seeing the same mistake right now in this rush to the cloud.
Really, what a lot of people are putting up in the cloud right now is nothing more than a vertical application stack with tons of horsepower they couldn’t otherwise afford. And guess what? That stack still can’t move sideways.
Remember: We’ve been working on datacenter integration since the old millennium. And our experience with datacenter integration shows that the problems of the last millennium haven’t been solved by cloud. In fact, the new website, the new help desk, the new business process and solutions like IBM MQ that grew to solve these issues have all matured within the datacenter. But the cloud’s still immature because there’s no native, proven and secure integration. What we’re doing in the cloud today is really the same thing we did 20 years ago in the datacenter.
I’m sounding the alarm, and I’m emphasizing the need for MQ, because in order to do meaningful and complicated things in the cloud, we need to address how we’re going to do secure, reliable, high-demand integration of systems across datacenters and the cloud. Is MQ a pivotal component of your cloud strategy? It’d better be, or we’ll have missed the learning opportunity of the last two decades.
How mature is the cloud? From an integration standpoint, it’s 18 to 20 years behind your own datacenter. So when you hear the now-familiar chant, “We’ve got to go to the cloud,” first ask why, then ask how, and finish with what then? Remind everyone that cloud service is generally a single stack, that significant effort and money will need to be spent on new integration solutions, and that your data is no more secure in the cloud than it is in a physical datacenter.
Want to talk more about MQ in the cloud? Send me an email and let’s get the conversation started.
(Photo by George Thomas and TxMQ)

The Need For MQ Networks: A New Understanding

If I surveyed each of you to find out the number and variety of technical platforms you have running at your company, I’d likely find that more than 75% of companies haven’t standardized on a single operating system – let alone a single technical platform. Vanilla’s such a rarity – largely due to the many needs of our 21st-century businesses – and with the growing popularity of the cloud (which makes knowing your supporting infrastructure even more difficult), companies today must decide on a communications standard between their technical platforms.
Why MQ networks? Simple. MQ gives you the ability to treat each of your data-sharing members as a black box. MQ gives you simple application decoupling by limiting the exchange of information between application endpoints to application messages. These application messages have a basic structure of “whatever” plus an MQ header carrying routing-destination and messaging-pattern information. The MQ message becomes the basis for your inter-communication protocols – one an application can access no matter where it currently runs, even when it gets moved in the future.
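As a small illustration of that header-plus-“whatever” structure – again a pymqi sketch with hypothetical names – the sender stamps routing and pattern information into the message descriptor while the body stays opaque:

import pymqi

qmgr = pymqi.connect('QM1', 'APP.SVRCONN', 'mqhost(1414)')

# The MQ header (MQMD) carries the messaging-pattern information...
md = pymqi.MD()
md.Format = pymqi.CMQC.MQFMT_STRING   # how consumers should parse the body
md.ReplyToQ = b'ORDERS.REPLY'         # request/reply routing lives in the header

# ...while the body itself stays "whatever" the applications agree on.
queue = pymqi.Queue(qmgr, 'ORDERS.IN')
queue.put(b'{"order": 42}', md)
queue.close()
qmgr.disconnect()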
This standard hands your enterprise the freedom to manage applications completely independently of one another. You can retire applications, bring up a new application, switch from one application to another or route in parallel. You can watch the volume and performance of applications in real-time, based on the enqueuing behavior of each instance, to determine whether it’s able to keep up with the upstream processes. No more guesswork! No more lost transactions! And it’s easy to immediately detect an application outage, complete with the history and how many messages didn’t get processed. This is the foundation for establishing Service Level Management.
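That real-time watch needs nothing exotic. For instance, a single MQSC command (queue name illustrative; MSGAGE assumes queue monitoring is enabled) shows whether a consumer is keeping up:

DISPLAY QSTATUS('ORDERS.IN') CURDEPTH MSGAGE LGETDATE LGETTIME

A climbing CURDEPTH alongside an aging oldest message is your early warning – long before anyone reports a lost transaction.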
The power of MQ networks gives you complete control over your critical business data. You can limit what goes where. You can secure it. You can turn it off. You can turn it on. It’s like the difference between in-home plumbing and a hike to the nearest water source. It’s that revolutionary for the future of application management.

Measuring MQ Capacity: How To Talk To A Bully

TxMQ senior consultant Allan Bartleywood doesn’t like bullies. Didn’t like them when he was a wee lad chasing butterflies across the arid hardscrabble of the Zimbabwean landscape. And certainly won’t tolerate them today in giant enterprise shops across the world.
Here’s the deal: Allan’s an MQ architect. Pretty much the best there is. He’s got a big peacenik streak. And he likes to stick up for his guys when a company bully starts to blame MQ.
You’ve heard it before: “MQ is the bottleneck. We need more MQ connections. It’s not my application – it’s your MQ.”
We all know it isn’t, but our hands are tied because we can’t measure the true capacity of MQ under load. So we blame the app and the bully rolls his eyes and typically wins the battle because apps are sexy and MQ is not and the app bully has been there 10 years and we’ve been there 3.
But Bartleywood’s new utility – the aptly named MQ Capacity Planner™ (MQCP) – unties our hands and allows us to stand up to the bully.
“I’m giving everyone the information we need to defend our environments – to stand up for our MQ,” Bartleywood says. “The Tivolis, the BMCs, the MQ Statistics Tools can’t speak to capacity because they can’t generate the information to tell you what true capacity is. I absolutely love how MQCP™ allows me, and you, to turn the whole argument upside-down and ask the bully: ‘Here’s what the MQ capacity is. Does the demand you put on MQ meet what it can truly deliver? Can you actually consume connections as fast as MQ can deliver them?’”
MQCP is now available to the public for the first time. It’s simply the best tool to develop an accurate picture of the size and cost of your environment. Ask about our special demo for large enterprise shops.
Photo by Eddie~S

Managed File Transfer: Your Solution Isn't Blowing In The Wind

If FTP were a part of nature’s landscape, the process would look a lot like a dandelion gone to seed. The seeds need to go somewhere, and all it takes is a bit of wind to scatter them toward some approximate destination.
Same thing happens on computer networks every day. We take a file, we stroke a key to nudge it via FTP toward some final destination, then turn and walk away. And that’s the issue with using FTP and SFTP to send files within the enterprise: The lack of any native logging and validation. Your files are literally blowing in the wind.
The popular solution is to create custom scripts to wrap and transmit the data. That’s why there are typically a dozen or so different homegrown FTP wrappers in any large enterprise – each crafted by a different employee, contractor or consultant with a different skillset and philosophy. And even if the file transfers run smoothly within that single enterprise, the system will ultimately fail to deliver for B2B integration. There’s also the headache of encrypting, logging and auditing financial data and personal health information with these homegrown file-transfer scripts. Yuck.
TxMQ absolutely recommends a managed system for file transfer, because a managed system:

  • Takes security and password issues out of the hands of junior associates and elevates data security
  • Enables the highest level of data encryption for transmission, including FIPS
  • Facilitates knowledge transfer and smooth handoffs within the enterprise (homegrown scripts are notoriously wonky)
  • Offers native logging, scheduling, success/failure reporting, error checking, auditing and validation
  • Integrates with other business systems to help scale and grow your business

TxMQ typically recommends and deploys IBM’s Managed File Transfer (MFT) in two different iterations: One as part of the Sterling product stack, the other as an extension on top of MQ.
When you install MFT on top of MQ, you suddenly and seamlessly have a file-transfer infrastructure with built-in check-summing, failure reporting, audit control and everything else mentioned above. All with lock-tight security and anybody-can-learn ease of use.
MFT as part of the Sterling product stack delivers all those capabilities to an integrated B2B environment, with the flexibility to quickly test and scale new projects and integrations, and in turn attract more B2B partners.
TxMQ is currently deploying this solution and developing a standardization manual for IBM MFT. Are you worried about your file transfer process? Do you need help trading files with a new business partner? The answer IS NOT blowing in the wind. Contact us today for a free and confidential scoping.
Photo by Alberto Bondoni.

IBM WebSphere Message Broker And Integration Bus Both Vulnerable To POODLE

Shortly after its announcement that WebSphere MQ could be exposed to the POODLE vulnerability, IBM issued a similar warning for its IBM WebSphere Message Broker and IBM Integration Bus (IIB) products. POODLE is short for Padding Oracle On Downgraded Legacy Encryption and it exploits an opening in SSLv3. Because SSLv3 is enabled by default in IBM WebSphere Message Broker and IBM Integration Bus, hardening against POODLE is critical. (See TxMQ’s coverage of the WebSphere MQ vulnerability here.)
OpenSSL could allow a remote attacker to bypass security restrictions. When configured with “no-ssl3” as a build option, servers could accept and complete an SSL 3.0 handshake, which could then be exploited to perform unauthorized actions.

Affected Products

The specific list of affected products includes:

  • IBM WebSphere Message Broker V7.0 and V8.0
  • IBM Integration Bus V9.0
  • IBM WebSphere Message Broker Hypervisor Edition V8.0
  • IBM Integration Bus Hypervisor Edition V9.0
  • IBM SOA Policy Pattern for Red Hat Enterprise Linux Server

Workarounds

The most important action is to disable SSLv3 and switch to TLS protocol on Message Broker and IIB servers and clients. Product-specific instructions, with direct links to the more detailed instructions in the IBM Knowledge Center, are listed below.

Inbound Connections

The attack vector is inbound connections; outbound connections may simply stop working if the remote server disallows SSLv3.
Inbound HTTP connections using the Broker-wide listener: Instructions found here.
mqsichangeproperties broker_name -b httplistener -o HTTPSConnector -n sslProtocol -v TLS
Inbound HTTP connections using the integration server listener will by default use TLS (the integration server listener defaults to TLS). If, however, it has been modified to match the broker-wide listener, use these instructions to make the necessary changes to use TLS.
mqsichangeproperties broker_name -e integration_server_name -o HTTPSConnector -n sslProtocol -v TLS
Inbound SOAP connections using the non-default broker-wide listener: Instructions found here.
mqsichangeproperties broker_name -b httplistener -o HTTPSConnector -n sslProtocol -v TLS
Inbound SOAP connections using the integration server listener (the default choice) will by default use TLS. If, however, it has been modified to match the broker-wide listener, use these instructions to make the necessary changes to use TLS.
mqsichangeproperties broker_name -e integration_server_name -o HTTPSConnector -n sslProtocol -v TLS
TCPIP Server inbound: Instructions found here.
mqsichangeproperties MYBROKER -c TCPIPServer -o myTCPIPServerService -n SSLProtocol -v TLS
WebAdmin inbound: Instructions found here.
mqsichangeproperties brokerName -b webadmin -o HTTPSConnector -n sslProtocol -v TLS
ODBC (DataDirect) OpenSSL as configured in odbc.ini: The ODBC Oracle Wire Protocol driver allows the EncryptionMethod connect option to be set to a value of 5, which means use only TLS1 or higher. Setting EncryptionMethod=5 for the Oracle Wire Protocol driver will avoid POODLE. This functionality has been available since version 6.1 of the Oracle Wire Protocol driver. The providers of the DataDirect drivers are working to add similar functionality to all other ODBC drivers that support SSL, and to upgrade the version of OpenSSL used within the drivers to pick up the enhancement to SSL negotiation.
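Illustratively, the relevant odbc.ini stanza might look like this (the DSN name, driver path and host are hypothetical):

[ORACLE.DSN]
Driver=/opt/datadirect/lib/ddora.so
HostName=oradb.example.com
PortNumber=1521
EncryptionMethod=5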
The client-based ODBC drivers (DB2 Client and Informix Client) rely on the SSL implementation within the database’s client libraries. Consult those client libraries to learn about possible exposure to POODLE.

Outbound Connections

Once the servers are changed to use TLS, it’s important to update the outbound settings with the following commands. Note that in all the following instructions, TLS can be replaced with SSL_TLS or SSL_TLSv2 if needed.
For HTTP connections: Instructions found here.
Then in the SSL tab of the Request node(s) select TLS for the Protocol.
For SOAP connections that have been modified to use the non-default SSLv3 protocol: Instructions found here.
Then in the SSL tab of the Request node(s) select TLS for the Protocol.
TCPIP Client: Instructions found here.
mqsichangeproperties MYBROKER -c TCPIPClient -o myTCPIPClientService -n SSLProtocol -v TLS
JMS Nodes: Some information found here. Follow the instructions provided by your JMS provider.
CICS Nodes: Instructions found here.
The CICS nodes use TLS by default, so no change is needed.

Security Providers

WSTrust: Set the environment variable MQSI_STS_SSL_PROTOCOL to “TLS”
TFIM: Set the environment variable MQSI_TFIM_SSL_PROTOCOL to “TLS”
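On Unix platforms, for example, these would typically be exported in the environment that starts the broker and followed by a restart (broker name is illustrative):

export MQSI_STS_SSL_PROTOCOL=TLS
export MQSI_TFIM_SSL_PROTOCOL=TLS
mqsistop MYBROKER
mqsistart MYBROKER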
Click here for IBM’s full CVE-2014-3566 bulletin.
TxMQ is an IBM Premier Business Partner and “MQ” is part of our name. For additional information about this vulnerability and all WebSphere-related matters, contact president Chuck Fried: 716-636-0070 x222, [email protected].
TxMQ recently introduced its MQ Capacity Planner – a new solution developed for performance-metrics analysis of enterprise-wide WebSphere MQ (now IBM MQ) infrastructure. TxMQ’s innovative technology enables MQ administrators to measure usage and capacity of an entire MQ infrastructure with one comprehensive tool.
(Photo by greg westfall under Creative Commons license.)