What’s the difference between “managed” web servers and “unmanaged” web servers?
I’m glad you asked! Several types of web servers can be used with IBM WebSphere Application Server (WAS), including the Apache HTTP Server, Microsoft IIS and the Sun Java System Web Server, among others. However, these non-IBM web servers CANNOT be controlled by WAS.
Only the IBM HTTP Server (IHS) can be controlled by WAS, and it’s IHS, specifically, that drives the concept of “managed” versus “unmanaged.”
A managed IHS web server is one that is installed on the same system as a WAS node agent and controlled by that WAS node agent.
WAS Admin —commands–> WAS node agent —controls–> IHS web server
An unmanaged IHS web server is one that is installed on a system that does not have any WAS node agent; therefore, it must use the IBM HTTP Server Administration Server to be controlled from WAS.
WAS Admin —commands–> IHS Admin server —controls–> IHS web server
It’s possible to use WAS Admin console to control the IHS web server in both cases. Managed simply means that the commands go from WAS Admin to a WAS node agent that controls the IHS web server on that system. Unmanaged means that the commands go from WAS Admin to an IHS Admin server which controls the IHS web server on that system.
Maybe an example will help shed some light on this concept: IHS installed on a stand-alone WAS server (no node agent) can be controlled by WAS only if the IHS Admin Server is configured and running. This is an unmanaged scenario. In version 8.0 and later, the Plug-in Configuration Tool (PCT) refers to this as “local_standalone” config type.
Here’s another example to explain further: IHS installed on a WAS node that’s federated to a WAS cell, and under the control of a WAS deployment manager, can be controlled by the WAS deployment manager – sending commands through the WAS node agent on the IHS system. This is the managed scenario. In version 8.0 and later, the Plug-in Configuration Tool (PCT) refers to this as “local_distributed” config type. Note the difference between the config types in our two examples.
What about IHS installed on the WAS deployment manager system itself?
If there’s also a federated WAS node on that same system, you can use that WAS node agent to control the IHS web server in a managed scenario (local_distributed).
If there is no federated WAS node on that same system, you will need to use the IHS Admin Server to control the IHS web server in an unmanaged scenario (local_standalone).
If the IHS web server is installed on a separate system that does not have any WAS, and you want to control it remotely from the WAS Admin Console on another system, that would be considered an unmanaged scenario, so you will need to use the IHS Admin Server on the IHS system. In version 8.0 and later, the Plug-in Configuration Tool (PCT) refers to this as the “remote” config type.
WAS Admin —commands across network—> IHS Admin server —controls–> IHS web server
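The scenario-selection logic above can be condensed into a small decision function. Here’s a minimal Python sketch for illustration (the function name and inputs are our own invention, not part of any IBM tool):

```python
def pct_config_type(ihs_on_was_system: bool, node_agent_present: bool) -> str:
    """Return the v8.0+ Plug-in Configuration Tool (PCT) config type
    implied by the scenarios above. Illustrative only -- not IBM code."""
    if not ihs_on_was_system:
        return "remote"             # separate system: IHS Admin Server across the network
    if node_agent_present:
        return "local_distributed"  # managed: WAS node agent controls IHS
    return "local_standalone"       # unmanaged: IHS Admin Server controls IHS

print(pct_config_type(True, False))   # stand-alone WAS, no node agent -> local_standalone
print(pct_config_type(True, True))    # federated node on IHS system   -> local_distributed
print(pct_config_type(False, False))  # IHS on its own system          -> remote
```

The one question that decides everything is whether a WAS node agent lives on the IHS system; the IHS Admin Server is only needed when the answer is no.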
For detailed instructions on how to configure IHS, the plug-in, or the IHS Admin Server, please contact [email protected]. To speak with a TxMQ WebSphere sales representative, call company Vice President Miles Roty at (716) 636-0070 ext. 228.
Cyber Security: 10 Tips For Small- To Mid-Size Businesses
I’ll start with a personal story about cyber security. Quite a few years ago (I won’t bore you with all the details), my personal trainer’s email was hacked by a slightly savvy and jealous ex-client’s boyfriend, and personal emails between me and my trainer were maliciously distributed to everyone in my trainer’s email network.
Needless to say, the backlash of this saga was incredible. My trainer escaped relatively unscathed, but the beating I took on it served as a lesson to me for the rest of my life. Don’t put anything into words via email or text that you wouldn’t say directly to someone’s face. Words on paper cannot be forgotten and it’s apparently incredibly easy to hack into someone’s “safe” network, download documents and use them as a weapon against said person or company.
When we went to the police with the breach, they scratched their heads, looked at us dumbfounded and essentially told us there was nothing we could do. It wouldn’t have mattered if there had been. Reputations were already smashed, relationships and friendships were ruined, and that sense of security and invincibility became an abstract thing of the past.
So this may sound like an exaggerated personal problem, but it happened and it was a traumatic event. Now imagine it’s your company and all your secure files. It’s your employees’ social security numbers, your business-banking routing numbers, your personnel files.
TxMQ attended an event this morning titled “The Virtual Reality of White Collar Crime,” where the discussion was about cyber attacks. The numbers are staggering.
There are an estimated 1 million cyber attacks per day. That breaks down to roughly 42,000 attacks per hour, about 700 attacks per minute and nearly 12 attacks per second. And they’re coming from all areas of the world.
Trends of late have seen organized cyber crime move from aiming at large, hard targets such as banks and financial institutions to softer small- and mid-size businesses.
Why?
Because it’s easier to hack into the SMB space. There are hackers who focus only on the hard targets. They beat their heads against the wall until they chip away a brick; they move that brick and get one name and contact info. Then they start all over again, beating their heads against the wall to remove just one more brick, then one more, then one more. A painstaking process…
Now think about the SMB environment, where it’s much easier to export data and multiple files. Chip one brick away and all of a sudden you have the names and personal info of a thousand people. These SMB professional-services providers hold deeds and financial records, personal information and trusts.
Fact: 60% of small- and mid-sized businesses that suffer a cyber attack go out of business within 6 months due to the cost of recovering from the attack. The average cost to recover from a cyber attack is $5.5M. Be proactive.
Fact: Cyber breach represents the largest transfer of wealth in US history. Businesses lose $250 billion a year to cyber breach and lose another $140 billion in downtime from the attack. That’s almost $400 billion per year. Process that for a moment.
And the truth of the matter is, it’s not even a matter of if it happens, it’s when. Within the past year, my personal credit card number has been stolen and used overseas three separate times.
Here are 10 recommendations for how small- and mid-sized businesses can protect themselves against a potential attack:
- Employee Background Checks
- Signed Security Agreement and/or NDA
- Written Policy as Part of Employee Handbook
- Provide Meaningful Education & Training (make sure what you have works)
- Secure Your IT Infrastructure
- Establish Password Policy
- Protect CC and Bank Accounts
- Test Your Systems
- Conduct Exit Interviews
- Take Immediate Action
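To make one of these items concrete, here’s a toy Python sketch of the kind of check a password policy might enforce (the length and character-class thresholds are illustrative assumptions, not a formal standard):

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Toy password-policy check: minimum length, mixed case,
    at least one digit and one symbol. Thresholds are illustrative."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_policy("correct-Horse-7-battery"))  # True
print(meets_policy("password"))                 # False
```

Whatever thresholds you choose, the point is that the policy is written down, enforced automatically and tested, rather than left to individual judgment.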
Unfortunately, laws are reactive in nature, not proactive. While cyber crime is still being scoped and defined by the justice system, it’s happening all around us every day.
Get your systems reviewed. How likely are you to get hacked? Call TxMQ or a security firm to be proactive in your approach to protecting your company data.
Can you survive a cyber attack? If you’re a small- or mid-size company, the answer is likely no. And if you do survive, what’s the collateral cost to your reputation, your customers and, most of all, you?
Server Issues With WebSphere Application Server In Relational Database
WebSphere news from IBM: December 11, 2013
Technote (troubleshooting)
Problem(Abstract)
If you have chosen to store your WebSphere Application Server transaction and compensation logs in a Relational Database, and your system has constrained resources, the server might fail to start.
Cause
The recovery log service has attempted to obtain information from the WebSphere Application Server Directory Service before that service has fully initialized, and this recovery log service operation has timed out. The length of time taken for the directory service to initialize can vary depending on your system environment.
Diagnosing the problem
The following exception is reported in the WebSphere Application Server log file:
WsServerImpl E WSVR0009E: Error occurred during startup
com.ibm.ws.exception.RuntimeError: com.ibm.ws.recoverylog.spi.InternalLogException: Failed to locate DataSource, com.ibm.ws.recoverylog.spi.InternalLogException: Failed to locate DataSource
at com.ibm.ws.tx.util.WASTMHelper.asynchRecoveryProcessingComplete(WASTMHelper.java:176)
at com.ibm.tx.util.TMHelper.asynchRecoveryProcessingComplete(TMHelper.java:57)
at com.ibm.tx.jta.impl.RecoveryManager.recoveryFailed(RecoveryManager.java:1412)
at com.ibm.tx.jta.impl.RecoveryManager.run(RecoveryManager.java:1942)
at java.lang.Thread.run(Thread.java:773)
Caused by: com.ibm.ws.recoverylog.spi.InternalLogException: Failed to locate DataSource, com.ibm.ws.recoverylog.spi.InternalLogException: Failed to locate DataSource
at com.ibm.ws.recoverylog.custom.jdbc.impl.SQLMultiScopeRecoveryLog.openLog(SQLMultiScopeRecoveryLog.java:525)
at com.ibm.tx.jta.impl.RecoveryManager.run(RecoveryManager.java:1886)
… 1 more
Caused by: Failed to locate DataSource, com.ibm.ws.recoverylog.spi.InternalLogException: Failed to locate DataSource
at com.ibm.ws.recoverylog.custom.jdbc.impl.SQLNonTransactionalDataSource.getDataSource(SQLNonTransactionalDataSource.java:249)
at com.ibm.ws.recoverylog.custom.jdbc.impl.SQLMultiScopeRecoveryLog.getConnection(SQLMultiScopeRecoveryLog.java:760)
at com.ibm.ws.recoverylog.custom.jdbc.impl.SQLMultiScopeRecoveryLog.openLog(SQLMultiScopeRecoveryLog.java:393)
… 2 more
Resolving the problem
Increase the timeout value for the recovery log service operation by completing the following steps:
- Open the WebSphere Application Server administrative console.
- Click Servers > Server Types > WebSphere application servers > server_name.
- Under Server Infrastructure, click Java and Process Management > Process definition.
- Under Additional properties, click Java Virtual Machine > Custom properties > New.
- In the Name field, enter com.ibm.ws.recoverylog.custom.jdbc.impl.ConfigOfDataSourceTimeout.
- In the Value field, enter an integer timeout value in milliseconds; for example, to set the timeout to 10 seconds, enter 10000.
- Click OK, then click Save to save your changes to the master configuration.
The default value for the com.ibm.ws.recoverylog.custom.jdbc.impl.ConfigOfDataSourceTimeout property is two seconds.
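To see why a larger timeout helps, the behavior being tuned is a bounded wait: the recovery log service repeatedly looks for the DataSource until it appears or the deadline passes. Here’s a minimal Python sketch of that pattern (the function and names are illustrative, not the actual WAS internals):

```python
import time

def wait_for_resource(lookup, timeout_ms, poll_interval_ms=100):
    """Poll lookup() until it returns a value or timeout_ms elapses.

    Mirrors the recovery log service's bounded wait for the DataSource:
    raising after the deadline corresponds to the InternalLogException
    seen in the logs. Names here are illustrative, not WAS internals.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        resource = lookup()
        if resource is not None:
            return resource
        time.sleep(poll_interval_ms / 1000.0)
    raise TimeoutError("Failed to locate DataSource within %d ms" % timeout_ms)

# Example: a lookup that succeeds on the third attempt
attempts = {"n": 0}
def slow_lookup():
    attempts["n"] += 1
    return "datasource" if attempts["n"] >= 3 else None

print(wait_for_resource(slow_lookup, timeout_ms=2000))  # prints "datasource"
```

On a resource-constrained system the directory service may simply need more than the default two-second budget to initialize, which is why raising the property value resolves the startup failure.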
Microsoft Officially Ends Windows XP Support: Security Issues Arise
The end of extended support for Windows XP is official. As of April 8, 2014, Microsoft will no longer develop or release security patches or updates for the ever-popular Windows XP SP3 operating system. Microsoft did the same for Windows 95 & 98 in the past. Current data suggests that Windows XP, now celebrating its 13th birthday (many years beyond its life expectancy), still runs on 31% of desktops worldwide.
Microsoft XP is just not built for the new digital world.
As well, Microsoft Vista will reach end-of-life on April 11, 2017, and Windows 7 is scheduled for end-of-life on January 14, 2020 – both well short of the 13-year run XP has enjoyed.
What does this mean for Windows XP? It means no safeguards against viruses, spyware or intrusion from hackers, no updates, no patches and no support. Windows XP will not be able to support the latest and safest web-compatible versions of Internet Explorer or the latest hardware advances.
Web developers globally will be ecstatic to see XP-only IE 6, 7 and 8 go away. Not to mention that you can’t upgrade in place from Windows XP to Windows 7 – instead, it must be installed from scratch, with the average enterprise migration taking 18-24 months from business case to full deployment.
What are the implications?
A lot of software that runs only on XP will eventually stop working. After April 8, 2014, you risk losing send/receive email, network/internet access, network printing services and data transfers from removable media. Attackers will exploit holes in the security code, and essentially Windows XP will have “zero day” vulnerabilities forever. Many argue that anti-virus software can block attacks and clean up infections if they occur, but who can say for sure, or wants to take that risk?
Can the APIs used by AV companies be trusted? Will Microsoft’s DEP (Data Execution Prevention), key to XP’s security, be overcome by attackers?
All very good questions, indeed.
All is not lost, however. One can always look to White List solutions and/or Linux! Stay tuned!
Best Practices For Virtualization Optimization
Virtualization is a key technology for many organizations. The infrastructure allows organizations to benefit from higher server utilization, faster deployment and the ability to quickly clone, copy and deploy images. The growth of virtualization is driving businesses to perfect and optimize performance to reduce the overall challenges that come with every technology. By optimizing virtualization, companies will be able to thrive in all aspects of the business.
One of the main goals of virtualization is to centralize administrative tasks while improving scalability and workloads. IBM shows that this can be optimized through 5 entry points:
1. Image management
2. Patch & compliance
3. Backup & restore
4. Cost management
5. Monitoring and capacity planning
1. Image Management – Optimize the Virtual Image Lifecycle: A virtualization environment needs core, baseline images that must be managed properly. Optimizing the environment allows an organization to manage these images throughout their lifecycle. Creating a virtual image library improves the assessment, reporting, remediation & enforcement of image standards. It also makes it easier to find unused images and images that need patching, and allows more frequent patching to maintain compliance. Other image-management strategies include increasing the ratio of managed images to administrators to decrease IT labor costs, and improving self-service capabilities for direct end-user access.
2. Patch & Compliance – Optimize Patch Remediation: Automating patch assessment and management increases the first-pass patch success rate, reduces IT workload and helps organizations comply with security standards. Automation reduces security risk because it decreases the time to repair, and it provides greater visibility through flexible, real-time patch monitoring and reporting from a single management console. A closed-loop design allows admins to patch as fast as they can provision by enabling security and operations teams to work together, which helps provide continuous compliance enforcement in rapidly changing virtualized/cloud environments.
3. Backup and Restore – Optimize Resilience and Data Protection: Data is growing as fast as, if not faster than, the rest of the virtualized & cloud-based environment. Protecting and managing an organization’s data is key to virtualization optimization. Deduplication is one way to simplify and improve data protection and management, and it pairs naturally with incremental backups. Together they simplify data protection, speed backups and restores, and conserve resources and bandwidth by decreasing the space and time each backup consumes – which in turn lowers equipment and management costs.
4. Cost Management – Optimize Metering & Billing: While virtualization helps organizations reduce overall operating costs, optimizing the technology helps organizations know where costs are incurred. Automatic collection of usage data shows how many resources internal users are consuming and gives service providers the figures needed for accurate billing. Advanced analytics helps organizations better understand the use and cost of compute, storage and network resources – in turn improving the overall business by allowing organizations to charge accurately for services.
5. Monitoring and Capacity Planning – Optimize Availability with Resource Utilization: By monitoring performance and planning with historical data, organizations can take a proactive approach – fixing issues before users discover them and planning ahead to keep systems and applications optimized. This approach reduces resource consumption by right-sizing virtual machines for different workloads, speeds deployments by spotting bottlenecks, and reduces licensing costs by consolidating virtual machines onto fewer hosts.
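The deduplication idea from point 3 can be shown with a toy Python sketch: chunks are identified by hash, and only previously unseen chunks are written, which is what keeps incremental backups small (illustrative only, not a product API):

```python
import hashlib

def dedup_backup(chunks, store):
    """Toy chunk-level deduplication: write only chunks whose SHA-256
    digest has not been seen before. Returns the count written."""
    written = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk
            written += 1
    return written

store = {}
first = dedup_backup([b"blockA", b"blockB", b"blockA"], store)   # 2 unique chunks written
second = dedup_backup([b"blockA", b"blockC"], store)             # only blockC is new
print(first, second)  # 2 1
```

The second backup writes only the one new chunk; everything already in the store is skipped, which is where the space and bandwidth savings come from.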
IBM delivers virtualization optimization by basing its cloud services and software on an open cloud architecture. The IBM SmartCloud foundation is designed to help organizations of all sizes quickly build and scale their virtualized & cloud infrastructures and platform capabilities. It provides the delivery flexibility and choice organizations need to evolve an existing virtualized infrastructure to cloud, accelerates adoption with integrated systems, and gives immediate access to managed services. IBM’s expertise, open standards and proven infrastructure will help an organization achieve new levels of innovation and efficiency.
Five Security Issues To Consider In The Mobile Age
Mobile applications are the new technology trend. As with any technology trend, exciting new business opportunities emerge. But first, what exactly is a mobile application? Mobile applications are generally classified as one of three types:
Native Applications
Built using a device-specific software development kit (SDK) to exploit the capabilities of the device
Web-Browser Applications
Built using the fifth revision of Hypertext Markup Language (HTML5) enhancements for web applications
Hybrid Applications
Built using a library (often client-side JavaScript) that allows coding against a “generic” mobile function (which accesses device-specific capability) without making different calls for each platform, and that sometimes provides a runtime container
With these classifications in mind, here are the five major security issues to consider for the new Mobile Age.
1. Prepare Yourself For Success
Every environment now has a backup-and-restore plan in case of emergencies. But what most companies do not have is a success plan. So it’s important to consider: What do you do if you succeed? Some mobile apps go “viral,” and a sudden wave of transactions may overload your network. But with broad technology offerings from IBM, including DataPower appliances and cloud services, you can build a plan for failover or fail-up.
2. Bring Your Own Device
Many employees already use personal phones for calls at night or for email while traveling. Why not extend this ability to other mobile applications and data? The security of mobile devices is a priority for business and IT leaders. Two challenges stand out: (1) The ability to terminate access to the server-side of the mobile app, and (2) The loss of information that may remain on the device when it “goes rogue.”
As an organization, if you don’t own the device that’s running the application, you may not be able to stop an application request from being generated on the mobile phone. That means you may receive a lot of traffic from clients that is no longer valid. If you have the technology to identify and correlate incoming requests from legitimate people, devices and applications, your strategy’s sound. Often, however, the case is different, and you may need an application-level appliance at the application endpoint that’s capable of correlating granular service-level agreements.
3. Adapt And Survive
Web-application-savvy business leaders are already prepared to filter web requests to provide differentiated quality of service. Gating traffic, however, may become more visible to your mobile users because mobile users are more aware of response time. Delays may lose the attention of the audience you’re looking to keep.
In application design, there must be awareness of how to reduce the amount of “bad load” or “bad users” on your application, while at the same time responding quickly to validated traffic driven to your business. This is where the defensive and strategic use of DataPower appliances and IBM products can provide application efficiencies. The ability to differentiate, balance and distribute requests can truly yield operational advantages.
4. Mobile-First And Good Service Design
Mobile applications can help organizations enter new markets, retain and extend participation from current users of services and attract new users to services. If the goal of going mobile is to reach a larger audience and access new markets, user-interface design may be the most important aspect to consider. If you’re not trying to win over the eyes of the new market, but instead trying to get a core piece of information across to your mobile audience, then service design and the ability to deliver information quickly and securely may be the most important aspect for your company. Good service design includes understanding your own application-integration infrastructure and being able to leverage this infrastructure from a mobile device.
5. Location, Location, Location
Mobile access and mobile applications challenge the notion that there’s a boundary between the outside and the inside. Mobile employees need “unplugged” access as they travel. More customers need access to more information and they want this information faster than ever before. Mobile devices are great for providing information “on the go,” but because of their smaller screen size they’re limited in their abilities. Technology is evolving though, and there are now such things as “notifications” that can indicate when a message is incoming or that an application update is available.
The reality of life on the internet is that there are endless “moving parts.” The mobile user has a short attention span that demands an almost immediate response. It’s the job of the mobile-application developers and designers to catch and keep the attention of the customer. Applications must be more intelligent and must work with traditional IT security systems so that your operational staff can shut down access or rate-limit access.
The world’s getting smarter: Join the world and learn more about WebSphere DataPower appliances and IBM Worklight. Contact TxMQ vice president Miles Roty at (716) 636-0070 x 228 or [email protected].
IBM Extends Modern Mainframe Capabilities With the zBC12
IBM is now extending the capabilities of the modern mainframe to organizations of all sizes with the introduction of the IBM zEnterprise BC12 (zBC12).
The zBC12 offers a proven hybrid-computing design to help manage and integrate workloads on multiple architectures within a single system. It provides an optimized infrastructure that’s integrated, agile, trusted and secure – an infrastructure that allows organizations to quickly and cost-effectively embrace new cloud, analytics and mobile opportunities.
The zBC12 offers many upgrades:
- Powered by microprocessors running at 4.2 GHz
- Provides up to a 36% boost in per-core capacity
- 58% increase in total system capacity for z/OS
- Up to a 62% increase in total capacity
- Offers up to 156 capacity settings
- 20% improvement over the z114 and IBM System z10 BC (z10BC)
The zBC12 is available in two models:
The H06: A single central processor complex (CPC) drawer model
The H13: A CPC two-drawer model that offers additional flexibility for I/O and coupling expansion, specialty engine scalability and memory scalability up to 496 GB
Data Developments
IBM zEnterprise Data Compression (zEDC)
- Offers an industry-standard compression for cross-platform data distribution
- Disk savings by allowing better utilization of storage capacity
Shared Memory Communications: Remote Direct Memory Access (SMC-R)
- Optimizes server-to-server communications by helping to reduce latency and CPU resource consumption over traditional TCP/IP communications
- Any TCP sockets-based workloads can seamlessly use SMC-R without requiring any application changes
Data analytics solutions on the zBC12 include the IBM Smart Analytics System and the IBM DB2 Analytics Accelerator – both of which are designed to enable organizations to efficiently store, manage, retrieve and analyze vast amounts of data for business insight.
Solid Security
Cryptography is one basic technology that helps protect sensitive data. The zBC12 can help organizations comply with various industry standards and enforce the enterprise-security policies that govern data privacy.
The zBC12 is designed to meet the Common Criteria Evaluation Assurance Level 5+ (EAL5+) certification for security of logical partitions, helping to ensure the isolation of sensitive data and business transactions. The zBC12 offers high-speed cryptography that is built into each processor core.
Cloud Capabilities
With z/OS, organizations can seamlessly run multiple disparate workloads concurrently with different service levels. With zBC12, Linux environments can expect a 36% performance boost per Integrated Facility for Linux (IFL) processor. The new IBM z/VM 6.3 provides improved economies of scale with support for 1 TB of real memory and more efficient utilization of CPU hardware resources.
For more information regarding mainframe capabilities or other TxMQ pillars of expertise, contact Miles Roty at (716) 636-0070 ext 228 or [email protected].
Nine Reasons Why “Nearshoring” Is a Better Choice than “Offshoring”
Over the past 10 years, the trend toward offshore service providers for software development or IT projects has become an accepted solution for most large corporations. Although these low-cost offshore developers provide some benefits, they also come with challenges: cultural differences, time zone discrepancies and language barriers are just a few. All of these challenges can lead to communication issues that may ultimately hurt the business.
From this, however, a new trend is starting to emerge: service providers working at affordable rates are now establishing themselves in South America. These individuals and companies can deliver the same quality of work found elsewhere, while addressing some of the shortcomings of offshoring.
What is nearshoring and how does it differ from offshore outsourcing?
Traditionally, outsourcing or “offshoring” involves contracting IT or developer resources from countries in the Asia/Pacific region, the most common locations being India, China and the Philippines.
According to Dictionary.com, the term “nearshoring” refers to the practice of moving one’s employees or business activities from a distant country to a country closer to home. Nearshoring capitalizes on the benefits of proximity, which include compatible time zones, cultural and linguistic similarities, and political factors. For the United States, this means companies would turn to countries in Latin America (such as Panama, Colombia and Uruguay) that are rapidly becoming new outsourcing hubs for IT and development projects.
Advantages of Nearshoring
Nearshoring has already been a popular option for the last several years in the manufacturing sector.
Here are 9 major reasons for the increase in nearshoring:
- The cost of labor in Asia/Pacific countries is rising; Latin American countries are now competitive in price
- More than 10 years ago, companies started using candidates from India and China. However, this is rapidly changing: over the past few years the financial attractiveness of that option has become less favorable. According to Wendy Tate, assistant professor of logistics at the University of Tennessee, “Chinese wages are now climbing at 15 to 20 percent per year… thanks to a supply-and-demand imbalance of skilled laborers in manufacturing regions, global pressure to upgrade Chinese labor practices and wages, and increased employee demands for better pay and conditions.”
- Time zone compatibility
- A large concern of those who use offshore talent is that their team is 10-14 time zones away. The logistics of scheduling conference calls are challenging. When workers are in different time zones, the offshore members of the team are left to do tasks overnight for managers to examine the next morning; if there are problems, the manager sometimes has to wait half a day for updates. In addition, unnatural working hours take a toll on employees and affect their quality of work. In contrast, the Latin American countries fall in the same time zones as the United States, which allows for real-time conversations, normal work hours and higher-quality deliverables.
- Available Talent
- A large selling point of offshoring to India, China and the Philippines is the high quality of education in those countries. However, because outsourcing in Latin America is just being discovered by the United States, there is a very large pool of highly skilled, college-educated resources available. Universities in countries such as Colombia and Panama are well-respected in the educational community; hundreds of students from the United States travel there yearly as foreign exchange students. In addition, a large number of professionals in Latin America have attended universities in the United States and understand our market needs.
- Technology Infrastructure
- In 2011, Latin America and Eastern Europe surpassed India in the growth of outsourcing facilities (Source: www.nearshore.com). This is consistent with the investments made over the past few years to improve the technology infrastructure in Latin American countries. Fast internet connections, construction of new data centers and improved telecommunications facilities are all helping to make the connection to US-based companies as seamless as possible.
- Language/Cultural similarities
- When dealing with countries like China, there can be a large communication issue when English is not the native language or is not commonly spoken. With most nearshoring countries, however, providers are highly proficient in English (or the language of their client), even if it is not their official language. This is a great advantage when communication is primarily via phone and email.
- There can also be cultural differences that impact the client’s work; India, for example, observes completely different holidays than the United States. Nearshoring greatly reduces these types of problems because communication and coordination are easier between countries with similar cultural backgrounds.
- Intellectual property protection
- In many Asian countries, IP theft and counterfeiting are widespread. However, many Latin American nations have signed Free Trade Agreements with the USA, which should guarantee IP rights to foreign companies.
- Political risk and security
- Geopolitical risk is a factor that should be strongly considered when evaluating outsourcing options. Does the country have a history of nationalizing privately held businesses owned by foreign corporations? Can the government shut down an operation it considers contrary to its philosophy? It is crucial to evaluate the political situation of the country where a service provider maintains staff.
- Trade regulations and compliance
- In October 2011, the U.S. Congress approved trade agreements with Panama and Colombia that have created the largest opportunity for exporters in decades. This has also increased the chances of doing business with these countries. In addition to this, Panama agreed to become a full participant in the WTO Information Technology Agreement.
- Low staff turnover
- According to the Associated Chambers of Commerce and Industry of India, in 2010 the IT and BPO attrition rates in India reached a startling 55%. Companies are reluctant to enter long-term projects with an offshore team, knowing that over half of the original team will be gone within one year. However, this situation has not been seen in Latin American countries; the family-oriented culture of these countries, along with their being in the same time zones as the US, makes it less likely for employees to leave a position.
The benefits of nearshoring are quite clear: low costs, compatible time zones, lower staff turnover, business-friendly climate, and better protection of your intellectual property. If you are looking to outsource IT or software development, you no longer need to look halfway around the world.
TxMQ provides WebSphere® software support services to supplement our clients’ internal technical teams with proactive problem resolution for IBM®-based and other third-party middleware processing and software.
Remote Problem Management (RPM) for Middleware includes:
- 24-hour, 7-day-per-week, North American-based phone support for all IBM middleware products, including but not limited to DataPower, CastIron, Portal and Process Server, most web servers, WebSphere MQ, WebSphere, WMB, Integration Bus, Tibco, and database software installed on mainframe and distributed systems
- 8×5 support for all non-severe issues requiring Level 1 support
- Support initiated via toll-free telephone number or electronic interface
- Immediate support from a middleware technician (not a generalist)
- Response time for calls placed into the toll-free number is 30 minutes or less
- Pricing based on environment size, NOT user or license counts.
For more information on Remote Problem Management or for a customized quote, contact Miles Roty at 716-636-0070 ext. 228 or email [email protected].
Overview of the WAS v8.5.5 Family
WAS 8.5.5 was announced at IBM Impact and reached general availability on June 14, 2013. WAS 8.5.5 provides significant performance benefits over previous releases as well as over its competitors.
WAS 8.5.5 now includes WebSphere eXtreme Scale for caching, and WAS ND 8.5, introduced last June, includes the WebSphere Virtual Enterprise and WebSphere Compute Grid products for improved resiliency.
A full overview of the WAS v8.5.5 Family
- New WebSphere Application Server Liberty Core edition
- Entitlement to WebSphere eXtreme Scale (WXS) for some editions
- Developer install/support for WAS & WDT with active production server S&S
WAS for Developers
- Enables efficient development of innovative apps that will run on WAS in production
- Available as a no-charge edition for the developer desktop and includes Eclipse adapters
- NEW: Provide WAS and WDT editions as freely available for dev desktops and supported under production runtime licenses
WAS Hypervisor Edition
- The WAS ND server optimized to run instantly in PureApplication System, VMware, PowerVM, z/VM, and other server virtualization environments.
WAS ND
- Delivers near-continuous availability, with advanced performance and intelligent management capabilities, for mission-critical apps.
- Full entitlement to WXS.
WAS
- Provides a secure, high-performance transaction engine for moderately sized configurations, with web-tier clustering and failover across up to five application server profiles.
- Includes entitlement to eXtreme Scale for HTTP session caching and DynaCache on the entitled WebSphere Application Server.
WAS for z/OS
- Takes full advantage of the z/OS Sysplex to deliver a highly secure, reliable, and resource-efficient server experience.
- Entitlement to the WXS z/OS client.
NEW: WAS Liberty Core
- A lightweight and low-cost Liberty profile-based offering (not full-profile WAS), providing the capabilities to rapidly build and deliver web apps that do not require the full Java EE stack.
WAS Express
- A low-cost, ready-to-go solution to build dynamic Web sites & apps, including both Liberty and full-profile WAS. Restricted to a set amount of PVUs.
For more information on the WAS 8.5.5 Family or TxMQ IT Solutions and Staffing, please contact Miles Roty, Vice President, at [email protected] or 716-636-0070 ext. 228.
WAS Liberty Profile (Web Profile Only)
WebSphere Liberty Core
In June 2012, WAS V8.5 was released and introduced the Liberty profile.
Liberty Profile is a lightweight server for faster development and easy deployment of web apps. At less than 50 MB, it is a quick download from the Web, and customers can get it for free at WASdev.net (for the desktop).
It restarts in less than 3 seconds, which is important for developers who want a FAST environment (faster than JBoss).
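Liberty’s small footprint and fast restarts come from its feature-based configuration: the server loads only the features listed in its server.xml file and nothing else. As a rough sketch, a server.xml for a servlet-only web app might look like the following (the feature version and port number are illustrative and depend on your installation):

```xml
<server description="minimal Liberty web app server">
    <!-- Load only the servlet feature; unused Java EE features are never started -->
    <featureManager>
        <feature>servlet-3.0</feature>
    </featureManager>
    <!-- HTTP endpoint for the app; host and port are example values -->
    <httpEndpoint id="defaultHttpEndpoint" host="localhost" httpPort="9080" />
</server>
```

A server defined this way is typically created and started with the server script shipped in wlp/bin (for example, server create myServer followed by server start myServer).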
With WAS 8.5.5 there is now a separate offering priced competitively with open-source software: WAS Liberty Core, at a $26/PVU list price (or a competitive trade-in at $13/PVU). This price is very competitive with Tomcat and likely cheaper than JBoss.
WAS Liberty Core provides full fidelity to WAS ND for customers who may want to use it for development and ND for production.
A lightweight and low-cost Liberty profile-based offering (not full-profile WAS), it provides the capabilities to rapidly build and deliver web apps that do not require the full Java EE stack.
The problem of a lightweight development environment in WAS has been solved!
- WAS Liberty Profile startup & footprint are on par with Tomcat
- WAS Liberty Profile starts up in less than half the time of the JBoss Web profile
Note: Tomcat, JBoss, and GlassFish were measured with the HotSpot JDK, while Liberty was measured with the IBM JDK.
Liberty Profile is a lightweight server that can service requests with the speed of a full production server.
- Liberty Profile provides up to 20% better runtime performance than JBoss and 25% better than Tomcat.
Note: Tomcat, JBoss, and GlassFish were measured with the HotSpot JDK, while Liberty was measured with the IBM JDK.
For more information on WAS Liberty Profile and how it can help your business, or on TxMQ IT Solutions and Staffing, please contact Miles Roty, Senior Account Manager, at [email protected] or 716-636-0070 ext. 228.