
IT KEE

IT bits and bytes

Thursday, April 29, 2004

IBM-HP - a two-horse race?
HP Chairman and CEO Carly Fiorina says IT is "a two-horse race." However, IBM and HP are winning in two different areas.
HP is trying to grow in consumer markets where IBM has chosen not to participate, such as music players, big screen TVs and digital cameras. It continues to struggle in areas where IBM is strong - services, software and servers for business.
No company has ever succeeded in conquering both consumer markets and the enterprise as HP is trying to do.

posted by OttoKee  # 12:55 AM
Hitting the holy grail.

Managing heterogeneous systems as one entity has been such a utopian concept; does IBM's Virtualization Engine really mean it's Grail time?

"Well, in Indiana Jones terms, we've entered the room with the Grail, now we just need to choose wisely," says Randy Daniel, IBM eServer program director.

Is this going to make every system a customer owns work automatically, seamlessly with others and be managed as one?

"No," Daniel continues. "However, it does create a platform where customers can begin to manage their workloads as the primary objective, rather than individually managing each system."


posted by OttoKee  # 12:50 AM
Virtualization Engine sounds a lot like what HP offers through UDC/Adaptive Enterprise and Sun is offering through N-1. Could it be like Microsoft's .Net? How is Virtualization Engine different?

HP's UDC is an amalgamation of tools that aims to manage only HP systems. It's a huge amount of complex software thrown at a complex problem.
Microsoft's .Net is aimed more at managing applications across an IT infrastructure.
IBM Virtualization Engine is aimed at simplifying IT infrastructure management. We will embed mainframe and software technologies inside the systems so they are easier to manage, AND it extends to manage non-IBM systems, so customers aren't penalized for having non-IBM computers.

posted by OttoKee  # 12:49 AM
IBM Virtualization Engine will feature:

Virtualization Engine technologies:

The world's most advanced systems micro-partitioning, leveraged from the IBM mainframe for systems running IBM processors, including virtual networking, memory and LAN, allowing customers to create up to ten fully functioning virtual servers per processor;

Virtualization Engine services:

IBM Director Multiplatform offering a single point of control and management for IBM and non-IBM systems, clusters of systems and Grids that might span several countries. It allows a single person to manage multiple environments from a single console, eliminating the need for costly training on different types of systems.
Enterprise workload management and provisioning tools to optimize resources, simplify management and increase availability across IBM and non-IBM systems based on business policy, powered by IBM Tivoli Provisioning Manager.
Grid capabilities for distributed systems based on Open Grid Services Architecture and WebSphere technology.
IBM TotalStorage Open Software to virtualize and centralize the management of storage across heterogeneous storage devices to help clients optimize utilization, improve application availability, and increase administrator productivity.
These are the first implementations of technologies that have, in part, allowed mainframes - used in the world's most secure banking and institutional transaction environments - to maintain utilization rates in the range of 80 percent, compared with as low as 15 percent in UNIX and Windows environments, providing a dramatically lower cost of operation.
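That utilization gap is the heart of the cost argument. Here's a rough back-of-the-envelope sketch - the workload and capacity numbers are my own assumptions, not IBM's - of how average utilization translates into the number of boxes you have to buy, power and manage:

# Back-of-the-envelope sketch (assumed numbers, not IBM figures): how average
# utilization translates into the number of systems needed for a fixed workload.
import math

def boxes_needed(workload_units, capacity_per_box, avg_utilization):
    """Systems required if each box is only driven to avg_utilization of its capacity."""
    return math.ceil(workload_units / (capacity_per_box * avg_utilization))

workload = 400    # arbitrary units of work to be served
capacity = 10     # units one box can deliver at 100% busy

for label, util in [("mainframe @ 80%", 0.80), ("UNIX/Windows @ 15%", 0.15)]:
    print(f"{label:<20} -> {boxes_needed(workload, capacity, util)} boxes")

With those assumptions, the 15-percent environment needs more than five times the hardware to do the same work, before you even count floor space, power and administration.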

GH Young International, an international trade customs broker, is using eServer iSeries servers to manage its trade and customs management systems and overall business operations. It is already seeing benefits from virtualization technologies today.

"We have taken major steps to simplify our infrastructure by leveraging virtualization technologies with POWER Linux and Integrated xSeries Solutions on the eServer iSeries," states Nigel Fortlage, vice president of information technology, GHY International. "Prior to our server consolidation we spent 95 percent of our time just keeping our systems and network running. Now we spend 5 percent. This innovative technology has been invaluable for our organization."

PeopleSoft, an IBM business partner, is excited about the opportunities Virtualization Engine could provide to their customers.

"PeopleSoft customers want to find ways of optimizing the infrastructure that runs their PeopleSoft applications to achieve better performance, manageability and lower total cost of ownership," said Dean Alms, vice president, product management, PeopleSoft Tools & Technology. "The workload management and provisioning tools delivered through the IBM Virtualization Engine are important steps in helping our customers achieve these goals."


posted by OttoKee  # 12:43 AM

Tuesday, April 27, 2004

Mainframe shines

You would have been laughed out of town for suggesting two years ago that one day in the near future the mainframe would be responsible for an IBM turnaround in hardware sales. But here it is folks. IBM's hardware sales rose 10% in the first quarter of this year, fueled by a 34% increase in mainframe revenue.

IBM's CFO John Joyce said during a conference call Friday that the strongest growth in the first quarter was driven by new applications and Linux workloads on the zSeries in addition to increased workloads on existing applications.

On April 7, the 40th anniversary of its revolutionary System/360 mainframe, IBM introduced the zSeries 890 mainframe for medium-sized customers. The new model extends the capabilities of the company's flagship z990 mainframe with more scalability and a new pricing structure, as well as technology that helps customers consolidate Java-based e-business applications onto the mainframe.

The new mainframe was introduced together with a scaled-down version of the Shark Enterprise Storage Server. The ESS 750 offers the features and functionality of the Shark Model 800, but at a lower price point.

Probably just as significant, he said, is the zSeries Application Assist Processor (zAAP) offered on the new system, which is essentially a dedicated processor for Java applications. He said he believes it's this offering that will make users of other platforms take notice and ultimately fuel sales. More and more businesses will begin to recognize just how versatile the mainframe has become.

According to Broderick, zAAPs allow the user to extend the value of existing zSeries investments by integrating new Java workloads alongside core zSeries business applications and data. Notably, there are no IBM software charges associated with zAAP capacity. It is not yet clear whether ISVs will follow this practice and price their software similarly, he said.

In addition, he said zAAP has made the zSeries a very attractive Web-based server, so expect to see a lot more mainframes used for this purpose. He said zAAPs would help ease the demands on regular general-purpose processors, offloading Java workloads and freeing the processors to run zSeries applications. This could result in having to install a smaller mainframe than would otherwise have been required. And zAAP will help to lower the overall cost of acquiring zSeries software as well. The zAAP is priced at $125,000.
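A quick, hedged illustration of why that matters (the workload split and capacity units below are my own assumptions, not IBM pricing): mainframe software is generally priced against general-purpose capacity, so Java cycles moved to a zAAP stop counting against that number.

# Illustrative sketch only -- the workload split and capacity units are assumptions,
# not IBM pricing. Java cycles moved to a zAAP no longer count against the
# general-purpose capacity that software charges are based on.
total_capacity_needed = 1000      # total workload, in arbitrary capacity units
java_share = 0.30                 # assumed fraction of the workload that is Java

gp_without_zaap = total_capacity_needed
gp_with_zaap = total_capacity_needed * (1 - java_share)
zaap_capacity = total_capacity_needed * java_share

print(f"General-purpose capacity without zAAP: {gp_without_zaap:.0f}")
print(f"General-purpose capacity with zAAP:    {gp_with_zaap:.0f} "
      f"(plus {zaap_capacity:.0f} on zAAPs, carrying no IBM software charges)")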

Overall hardware sales rose to $6.7 billion, up 10%. IBM's pSeries servers for UNIX operating systems were up 15% year over year. The company's xSeries Intel-based servers grew 24%. However, revenues for iSeries midrange servers declined 7%, as did Microelectronics revenues.




posted by OttoKee  # 7:04 PM
IBM's megadeal with Morgan Stanley

IBM has entered into a $575 million contract with Morgan Stanley to outfit the financial giant's Individual Investor Group with an on-demand infrastructure, the companies announced Tuesday.

IBM will migrate the group's current mainframe processing infrastructure to a shared one at an IBM on-demand data center where Morgan Stanley will pay only for the back-end computing power it uses. IBM will also provide help desk and desk side support as a managed service to approximately 20,000 users in Morgan Stanley's Individual Investor Group.

The five-year contract extends a previous IT services agreement.

IBM previously managed Morgan Stanley's fixed infrastructure at an IBM data center location. Within the next 12 months, IBM will transfer Morgan Stanley's data to systems where the resources of other companies reside—a growing trend in data center outsourcing, said Eric Ray, vice president of financial markets outsourcing business, IBM Global Services.

"This is building on a shared paradigm in the financial services industry," said Ray. Although he wouldn't discuss other financial services companies, he said that a growing number of firms are "comfortable in running their infrastructure in a shared pool of resources," as the model is highly-secure.

According to analyst Jeff Kaplan of ThinkStrategies, Wellesley, Mass., this latest agreement reinforces a year-long trend among major outsourcing companies, including Hewlett-Packard and EDS, of moving clients from a simple transfer-of-assets outsourcing agreement to one where the outsourcer "transforms" the data center environment into a new on-demand architecture.

"People used to look at outsourcing as "offloading" to someone who would do it more efficiently," said Kaplan, "but they found that wasn't' adequate to meet their needs. Now, with new technology, what you're seeing is that not only do they want to offload the technology but they want to take advantage of it."

Many of these mega outsourcing arrangements don't look like a good deal for anyone other than IBM, but Kaplan said companies like Morgan Stanley are spending millions to manage their data centers themselves and are realizing they're not very efficient at it. Adopting a technology such as on-demand on their own would be unrealistic.

"Cost savings is relative," said Kaplan. "Companies like Morgan Stanley are redirecting their corporate resources…so they can use their remaining IT staff to focus on more strategic initiatives. It's a redirecting of their internal resources by gaining greater efficiencies from the IT operations."


posted by OttoKee  # 6:54 PM

Thursday, April 22, 2004

GM Buys Supercomputer From IBM
4/21/2004 11:56:00 PM

DETROIT, Apr 21, 2004 (AP Online via COMTEX) -- General Motors Corp. has bought a supercomputer from IBM that the companies say is the fastest in the automotive industry and will more than halve the time it takes to get a vehicle on the market.

The new supercomputer, based on IBM's Power 4 and Power 5 technology, more than doubles the computing capacity of the world's largest automaker, and is expected to slash the amount of time it takes to get a vehicle to market from 48 months to 18 months.

GM said in a release that the computer is the fastest in the industry, "by a wide margin," and can compute at a rate of nine teraflops, or nine trillion calculations per second.

Neither GM nor IBM would reveal how much the computer cost.

The technology is expected to allow design modifications and engineering questions in GM vehicles to be handled in a matter of hours when they previously would have taken months to resolve.

GM received the first phase of the supercomputer network in March and will receive a second phase later this year, spokesman Chris Perry said.

The supercomputer also is expected to continue cutting GM's crash test costs by advancing digital simulations. Since GM began using the system, it has cut the number of needed crash vehicles, which cost $500,000 per test, by about 85 percent.




posted by OttoKee  # 5:07 AM

Tuesday, April 20, 2004

Year 2006
- Tech recovery is solidly underway
- There is a shift in priorities: companies now need to grow again
- Offshore outsourcing is a challenge to the US
- Business process outsourcing (BPO) will be the fastest growing segment in our industry
- 2006 will bring the next wave of innovation, based on 4 technologies:
a) Secure broadband
b) Always on wireless devices
c) Access to resources as needed (includes Grid)
d) SOA

Prognosis from Microsoft
- 10 yrs from now, hardware will be almost free, speech will be in every device, tablet-sized devices will replace laptops, and we won't be writing as much code because we will be using visual modeling
- Interactions will be multi-modal (talking & typing to interface with your devices)
- 75% of the money Microsoft spends on R&D is really spent on development (MS is investing $6.8B)
- Modeling: expressing business processes without code
- The Spam Initiative is part of MS's security initiative & works by verifying that the e-mail really did come from the ID that it says it came from
- Gates thinks security will come off the list of "top 5 concerns" within 2 yrs (Gartner totally disagrees), but right now MS is really focused on it
- MS is not in the consulting business, but wants to show customers HOW to use the technology (Gates threw a rock at IBM IGS by saying that MS doesn't charge customers $300/hr to show customers where their brain is)
- Web services will be the basis of all SW from MS & existing apps will be wrapped in web services
- Longhorn: alpha release in 2004; actual product availability is not date driven, it is function driven
- There is a Windows release (Windows XP SP2) before Longhorn, due this summer; it is purely security focused
- MS has a research lab in China and believes it is easier to do research on a distributed basis than development
- Development will stay in 1 location

posted by OttoKee  # 8:33 PM

Monday, April 19, 2004


posted by OttoKee  # 4:15 AM
Buying in to outsourcing
IBM's motivation for agreeing to buy Daksh eServices, one of India's largest call-center companies with 6,000 employees, is to cash in on that country's economy, which is growing at more than 10 percent a year, IBM spokesman Ian Colley says. The purchase had nothing to do with taking advantage of low labor costs; it's intended to serve IBM customers in the travel, insurance and technology industries in that region, Colley says.

Daksh does not disclose names of its customers for "confidentiality" reasons. Its Web site says it services a Fortune 100 telecom firm, one of the world's leading Internet portal companies, a Fortune 25 financial-services conglomerate, a Fortune 100 health insurer, a major airline, a financial-software market leader and about a half-dozen other major clients. Daksh listed no companies from India or Asia, specifically.

Some within the industry suspect IBM is masking its real intentions, and that its main motivation for buying Daksh is to expand relations the company has with Amazon.com and other large U.S. concerns.

Citigroup earlier this week said it plans to spend $122 million to buy out the remaining shares of E-Serve International Ltd., a company with about 5,000 employees. The U.S. financial giant already owned about a 44-percent stake in the Indian number-crunching concern, and has been sending work to the company for nearly five years. Unlike IBM, Citigroup plans to use E-Serve's services exclusively for Citigroup.

Citigroup's acquisition could become a model for other companies that have strong existing relationships with small to mid-size outsourcing operations in India, says Bill Vance, former senior vice president of Sitel Corp. (SWW), a U.S.-based call center company. He says direct ownership can be a less costly alternative than outsourcing, and it gives management more control.

"When you have employees working for you, you have the ability as a senior manager to walk down the hall, and if there's a problem you grab a VP by the throat and say 'fix it,'" says Gregg Kirchhoefer, senior partner at outsourcing company Kirkland & Ellis. "When you have to solve a problem with an outsourcer, you have to rely on your relationship with them, and the agreement that was signed."

posted by OttoKee  # 4:13 AM

Tuesday, April 13, 2004

Managing IT challenges in an On Demand world.
1. responding to business changes quickly and flexibly
2. increasing resource utilization
3. reducing IT costs
4. meeting service level agreements
5. managing increasing amounts of risk

What are some of the business challenges?
1. financial pressures
2. security and operational resiliency
3. simplifying infrastructure complexity
4. accelerating time-to-market
5. increasing revenues
6. deploying new capabilities

Note: average prime-shift mainframe utilization is often >70%, vs. 10-15% for UNIX and 5-10% for Intel.



posted by OttoKee  # 11:48 PM
Measuring Performance
Seventy-eight percent of the respondents in a recent poll by PricewaterhouseCoopers of America's fastest growing private companies indicated that they use more than five metrics to gauge performance. Fifty-one percent used five to ten metrics; 18 percent, ten to 15; and nine percent, 15 or more. The most popular metrics were operating income, revenue growth and on-time performance. A direct correlation exists between the number of corporate performance measures used regularly and the average size and growth of the companies surveyed. The bigger the company, the more metrics were used. The most popular technology for tracking performance was spreadsheets, followed by homegrown solutions. Only 28 percent had moved to commercial applications developed specifically for corporate performance management. One third of the respondents said lack of integrated systems impedes their efforts to manage performance, while one quarter indicated that lack of a consistent data model was a problem.

posted by OttoKee  # 10:51 PM
Core Systems Untouched
A survey of U.S. financial services firms conducted by independent market analyst Datamonitor revealed that within the next two years, just six percent plan to initiate migration to a new packaged core system. Forty-nine percent are choosing to maintain existing systems. Twenty-one percent are "wrapping" legacy systems for increased integration. Large IT shops are slowly moving toward the idea that if a system is not broken, it should not be replaced but efficiently integrated into the overall infrastructure.

posted by OttoKee  # 10:47 PM
USDA Inks Deal with Cybermation
In a deal in the works for 18 months, Cybermation Inc., a leading developer of integrated enterprise job scheduling and software change management solutions, announced an agreement with the United States Department of Agriculture (USDA) Farm Service Agency (FSA). ESP, Cybermation's enterprise IT automation solution for job scheduling, will be used to automate the USDA/FSA's mainframe job scheduling environment, saving an estimated 160 hours of processing time each month, and providing faster delivery of products. "We have reduced the latency on 948,000 jobs a year," Kimberley O'Brien, corporate communication manager at Cybermation, told 5 Minute Briefing on the exhibition floor at FOSE, the federal government technology conference last week. "We took it from between one to 1.5 minutes between jobs to three to five seconds."
..... The USDA supports over 2500 offices nationally, uploading and downloading business and financial transactions across a multi-platform environment. ESP was chosen because of its ability to manage complex, multiple-platform job processing from a single point of control.
.....Cybermation develops enterprise IT automation solutions for job scheduling and software change management environments. "Our product is more flexible (than our competitors' products)," O'Brien said. "And we offer a single point of control for both mainframe and distributed environments." For more information, go to www.cybermation.com.


posted by OttoKee  # 10:45 PM
Vietnam, grenades and the mainframe
I thought the S/360 series were cool, especially with all the flickering lights and, more importantly, when systems programmers were real systems programmers! Then came the 370s, 390s, etc., which were fast and impressive, but they just weren't the same. As a matter of fact, when I was in Vietnam in 1971, just outside of Saigon, we, the United States Army, had S/360s scattered all over in little air-conditioned trailers surrounded with Claymore mines and yes.... live grenades inside, just in case we got taken over. How would you like to be feeding IBM Hollerith cards with a live grenade next to you? Our orders were to destroy the hardware if we were overrun. We would get all the orders for GIs coming in and out of Vietnam on punched cards. Many syspros nowadays don't even know what a punch card is or was.

posted by OttoKee  # 7:48 PM
Who Is Doing What with Your Databases?

Recently, databases were in the spotlight after the spread of the SQL Slammer worm. Despite the inconvenience, most people were just glad that this particular worm did not attack their company's data assets. A compromise of that data could be far more serious than a hacker defacing a Web site or a virus infecting PCs. How many of you are prepared for an attack in which someone steals, deletes or, even worse, alters your business data without you knowing?
...Rigid data center practices, physical security and a security team have kept companies protected. However, those practices must change when databases become interconnectable and accessible outside company walls. While attention has been focused on Web access and firewalls, mobile applications, application servers and Web services are reliant on database back-ends that provide the foundation for critical business applications.
...Database systems have an array of capabilities that can be misused or exploited, compromising the confidentiality, availability and integrity of data. An improperly managed database can compromise an entire network's security infrastructure. Even in a secure environment, someone could exploit access to a database to gain administrator command access to the underlying operating system. Because data is protected behind firewalls from unauthorized access, many organizations assume the data is untainted. However, what about those who have been accessing data from within the firewall, or those with administrative privileges within the database? Accidental alterations, deletions or malicious data manipulation can lead to lawsuits, fines or bad publicity, putting a company at risk or out of business. Industry analysts have said that the majority of unauthorized data access comes from internal employees. Internal employees or contractors may know how business applications connect and transact data; some may even have database login privileges. Rights, roles and passwords help restrict queries, but if someone can access the database files directly, from a backup or a test environment, they could get access to everything.
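One practical defense against the silent-alteration scenario is periodic integrity checking: record a cryptographic fingerprint of your most sensitive tables and compare it on a schedule. A minimal sketch of the idea - the table, columns and values are invented for illustration, and SQLite stands in for whatever database you actually run:

# Minimal tamper-detection sketch: hash every row of a critical table and compare
# the digest against a stored baseline. Table, columns and data are invented.
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Return a SHA-256 digest over all rows of `table`, read in primary-key order."""
    digest = hashlib.sha256()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY 1"):
        digest.update(repr(row).encode("utf-8"))
    return digest.hexdigest()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (emp_id INTEGER PRIMARY KEY, salary REAL)")
conn.executemany("INSERT INTO payroll VALUES (?, ?)", [(1, 52000.0), (2, 61000.0)])
baseline = table_fingerprint(conn, "payroll")

# ... later, someone with direct access quietly changes a salary ...
conn.execute("UPDATE payroll SET salary = 161000.0 WHERE emp_id = 2")

if table_fingerprint(conn, "payroll") != baseline:
    print("ALERT: payroll no longer matches its recorded fingerprint")

It won't tell you who made the change - that is what audit and log-analysis tooling is for - but it turns a silent alteration into a detectable one.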
...Companies amass business data in off-the-shelf relational databases. These databases may contain sensitive financial data, customer information, credit card numbers, source code, marketing plans, payroll or medical information - data that should be protected on both sides of the firewall.
...DBAs must be included in all business and project decisions surrounding the data and its intended use. Companies do not realize the risks associated with sensitive information within databases until they perform an audit, and by that time, it may already be too late, and the data already stolen.
...Many organizations already govern their network and system infrastructures by securability and availability. It is crucial to include the databases and the Web services associated with them. Service levels should address the levels of data protection, transaction integrity and notification mechanisms in the event of data compromise. This summer, businesses in California encountering a confidential data breach will need to quickly identify and disclose the compromise or face legal action. This state-specific law may have even greater ramifications for businesses outside the state that sell products or services to California residents.
...Protecting databases and the data contained within them is not just an IT activity, but a critical company-wide responsibility. With new security technologies, best practices, education and good communication, increasing complexity and new projects can be better managed, secured and monitored, keeping a company's data assets secure and the company in business.

Solution from IBM: DB2 Log Analysis Tool for z/OS v2.1

http://www-306.ibm.com/software/data/db2imstools/db2tools/db2lat.html

posted by OttoKee  # 7:04 PM
Apply Automation to Database Performance Management

The IT infrastructure that supports business operations must be broad, flexible and operate with continual change in mind to enable business, not impede it. Database performance management is dependent on many factors, including the server platform, database configuration, the number of users, query workload and type of database. Since there are so many interdependencies, it is essential to collect measurable performance data points to quantitatively monitor and perform ongoing performance analysis. Performance management is an iterative process that involves constant attention to identify any changes and apply appropriate corrective action to meet established performance objectives.
...As databases increase in size and complexity and as companies create or acquire additional line-of-business applications, DBAs are no longer able to manually maintain the same level of service for each database. At some point, the DBA will be challenged with trying to keep up with daily reactive issues. If DBAs are spending more than 10 to 15 percent of their time tweaking or coding scripts, it is an indication that available, off-the-shelf automation technology should be employed. Database monitoring and daily performance metric collection is just one critical aspect of comprehensive database performance management. By taking control of reactive database issues and establishing thresholds, automated event notification and response enables organizations to make the transition from being reactive to proactive. Corrective action can take the form of advanced, event-driven automation that prevents a costly outage, or of information that identifies when additional capacity is required to meet established service levels.
...A common problem is a database that runs out of space, thereby halting business. With automated infrastructure management technology in place, database file space, database extents and server file system space are all monitored and managed. Automated self-management ensures compliance of service levels by adapting to changes when the demand occurs. Comprehensive infrastructure management, combined with self-management automation, enables IT to proactively manage increasingly complex environments and still maintain service levels.
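To make the threshold-and-response idea concrete, here is a minimal sketch - the thresholds, the path and the corrective action are invented for illustration, not taken from any particular product:

# Sketch of threshold-driven automation for the classic "database ran out of space"
# failure. Thresholds, path and corrective action are invented; a real tool would
# extend a tablespace, archive data or page the DBA automatically.
import shutil

WARN_THRESHOLD = 0.80   # warn when the filesystem holding the database is 80% full
CRIT_THRESHOLD = 0.90   # act before the shortage halts the business

def take_corrective_action(path, used_fraction):
    # Placeholder for the automated response: add an extent, archive old data,
    # or open a ticket without waiting for a human to notice.
    print(f"CRITICAL: {path} is {used_fraction:.0%} full -- extending space")

def check_filesystem(path):
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction >= CRIT_THRESHOLD:
        take_corrective_action(path, used_fraction)
    elif used_fraction >= WARN_THRESHOLD:
        print(f"WARN: {path} is {used_fraction:.0%} full")

check_filesystem("/")   # run on a schedule from a monitoring agent or cron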

Solution from IBM: DB2 automation tools for z/OS v1.3

http://www-306.ibm.com/software/data/db2imstools/db2tools/db2autotool.html


posted by OttoKee  # 7:01 PM
Take the Cover off of Mainframe Host Integration
By Jennifer Shettleroe

In the evolutionary landscape of information technology, one thing you can always count on is change. Technology is changing, computer users are changing, and business organizations have to make changes as well.
.... This calls for a re-examination of the systems and processes that businesses have held closely for years - and a good plan for how to best carry them into the future. Mainframe host systems continue to hold important business logic and data for large corporations, and are proving to have a longer life than anyone expected. Today, more and more user groups need access to information locked in these mainframe applications. You could rebuild the applications, or you can use legacy integration tools that allow the existing applications to fully participate in new business initiatives and architectures.
.... In the past, it may have seemed infeasible to access mainframe logic and data in modern ways. Older monolithic systems don't always work with newer ones, direct data access can be risky - and people with the right skill sets to unlock mainframe data are becoming a rarity. Today, there are powerful integration tools that can mask the complexity of legacy assets and make host data and applications available to a new generation of users (and programs) using the latest technologies.

What is 'legacy' anymore?
Since legacy can be defined as 'heritage', it is a misconception to assume that legacy data is outdated or unusable. To many organizations, host logic and data are still mission critical, defining assets of the business. Over the years, emulation made legacy data and applications available to PC power users. But legacy host applications were written years ago, and may not meet the needs of new business users. You can make services provided by these applications accessible to a larger user base and, in doing so, make them universally reusable. Why not recast legacy applications with new and custom Web-based applications to use them in modern ways? It is now easier than ever to extend your legacy assets.

Programmatic integration options give you choices
Programmatic integration products enable IT organizations to encapsulate legacy data and business logic as callable services that expose application programming interfaces (APIs) through various technologies, including J2EE Connector Architecture (JCA), Enterprise JavaBeans, Web Services, XML, and COM.
.... With a programmatic integration approach, you can quickly capture transactions from their native format, and reassemble them for use in new application development. You have choices - easy-to-use screen access, powerful transaction access, or direct data access. You will want to be aware of your application developers' expertise and requirements.
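To make "callable services" concrete, here is a small sketch in plain Python rather than any of the technologies listed above; the transaction name, field layout and canned host reply are invented for illustration:

# Sketch: expose a legacy transaction as a callable service. The transaction name,
# record layout and canned reply are invented; in a real product the call would go
# out over JCA, Web Services, EJB, etc.
import json

def simulate_host_transaction(tran_id: str, account_id: str) -> str:
    # Stand-in for the real host call, so the sketch runs on its own.
    return f"{account_id:<8}{123456789:011d}OK"

def account_inquiry(account_id: str) -> dict:
    """Invoke the (pretend) host transaction and map its fixed-format reply into a
    structure any modern client can consume."""
    raw = simulate_host_transaction("ACCTINQ", account_id)
    return {
        "account_id": raw[0:8].strip(),
        "balance": int(raw[8:19]) / 100,   # host sends cents, no decimal point
        "status": raw[19:21],
    }

print(json.dumps(account_inquiry("10042"), indent=2))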

Screen Access
Let's be candid about screen access. Screen access is the safest and simplest method to access and integrate legacy data and applications. In some circumstances, it is the only method. Forget about the perceived negatives of "screen scraping" and look at your project requirements. From a development timeline point of view, screen access is a quick, viable - and non-invasive - solution for reusing complex business logic. Direct screen access options blend screen access with native host installation to leverage mainframe performance, reliability, security, and scalability.
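Stripped of product packaging, screen access boils down to reading named fields from fixed row/column positions on a 3270-style screen. A toy sketch - the screen content and field coordinates are invented, and a canned buffer stands in for a live terminal session:

# Toy sketch of screen access: extract fields from fixed positions on a 24x80 host
# screen. Screen content and coordinates are invented; a real tool would drive a
# live 3270 session instead of this canned buffer.
SCREEN = [" " * 80 for _ in range(24)]
SCREEN[2] = "CUSTOMER INQUIRY".ljust(80)
SCREEN[5] = "NAME:     ACME FREIGHT LTD".ljust(80)
SCREEN[6] = "BALANCE:  0001234567".ljust(80)

# field name -> (row, starting column, length), as captured during development
FIELD_MAP = {"name": (5, 10, 30), "balance": (6, 10, 10)}

def scrape(screen, field_map):
    """Return a dict of field values read from fixed screen coordinates."""
    return {name: screen[row][col:col + length].strip()
            for name, (row, col, length) in field_map.items()}

fields = scrape(SCREEN, FIELD_MAP)
print(fields["name"], int(fields["balance"]) / 100)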

Transactional Access
Transactional access, whether installed on the mainframe or on a separate server, enables the reusability of business logic embedded in the transactions from programs written for the CICS and IMS development environments. This method executes transactions through their defined interface and integrates them, like any other service, into newly designed applications. Essentially it links mainframe and non-mainframe applications together at the client level. Good solutions will have a metadata editor to import COBOL copybooks for more rapid development. Look for extended sets of services, including two-phase commit which enables client-managed transactions to support both commits and rollbacks, and host-initiated events which allow host applications to interact asynchronously and bi-directionally with other enterprise applications. Options like these ensure data integrity and synchronization.
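The copybook point deserves a concrete picture: a COBOL copybook is essentially a fixed-width record description, and the metadata editor turns it into field offsets for you. A hand-rolled sketch of the same mapping, with an invented layout:

# Sketch: map a COBOL-copybook-style fixed-width record into named fields. The
# layout is invented; a metadata editor in a real integration product derives it
# from the copybook automatically.
LAYOUT = [("cust_id", 8), ("cust_name", 20), ("balance_cents", 9)]   # name, length

def parse_record(record: str, layout) -> dict:
    fields, offset = {}, 0
    for name, length in layout:
        fields[name] = record[offset:offset + length].strip()
        offset += length
    return fields

record = "00010042ACME FREIGHT LTD    000123456"
parsed = parse_record(record, LAYOUT)
print(parsed["cust_id"], parsed["cust_name"], int(parsed["balance_cents"]) / 100)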

Direct Data Access
Many projects start out with the goal of using mainframe data directly from data stores such as Adabas, DB2, IMS/DB, and VSAM, in new projects. Often, this starts as a batch process - replicated mainframe data is made available for use in non-mainframe applications. This approach can be very limiting - the issues of updates, timeliness, and data integrity can quickly defeat the intended purpose. Today, technology is available that turns these workhorses of data management into accessible data sources. These powerful tools allow real-time use for read and write access against live data repositories. In addition, new approaches allow non-relational data sources to be seen and used as if they were standard SQL tables. You can build federated tables, combining and merging data from multiple sources. This means a non-mainframe user can access mainframe-based data sources in real-time without having to understand them, or needing to manually collect the data.
Your organization might be looking for a solution that provides built-in connectivity to mainframe-based data sources and applications across a distributed environment. Direct access to mainframe data sources is a good method if you are re-purposing host data and want to maintain the business logic and transactions from the host.
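A minimal sketch of the federation idea, using SQLite purely as a stand-in - real federation tools expose VSAM, IMS or Adabas sources as live SQL tables, whereas here both "sources" are invented local tables:

# Sketch of federation: present host-extracted data as an ordinary SQL table and
# join it with local data in a single query. SQLite is only a stand-in; the table
# names and rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")

# Pretend these rows were surfaced from a non-relational host data source.
conn.execute("CREATE TABLE host_accounts (acct_id TEXT, balance REAL)")
conn.executemany("INSERT INTO host_accounts VALUES (?, ?)",
                 [("10042", 1234.56), ("10043", 88.00)])

# Local, distributed-side data.
conn.execute("CREATE TABLE web_customers (acct_id TEXT, email TEXT)")
conn.executemany("INSERT INTO web_customers VALUES (?, ?)",
                 [("10042", "ops@example.com")])

# One federated-style query across both "sources".
query = """SELECT w.email, h.balance
           FROM web_customers w JOIN host_accounts h USING (acct_id)"""
for row in conn.execute(query):
    print(row)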

What to look for in mainframe integration tools
If your organization is tackling integration projects, it is important to look for solutions that allow you to grow and change when you need to. Any checklist for integration tools should address the following issues:

Ease-of-use - Because programmatic legacy integration is complex, find a solution that delivers ease-of-use. For example, drag-and-drop functionality can greatly speed development time and intelligently "learn" the application. In a screen access environment, this kind of functionality allows you to navigate existing screen flows to capture legacy transactions, combining them into a single programmatically reusable component, without any knowledge or programming of the original legacy code.
Service-Oriented Architecture (SOA) and reusability - A goal of SOA is to enable new uses of existing application logic. Today, application boundaries are loosening through standards for SOA. One application may be augmented by implementing business logic from another application. Why not use your legacy investments and expose encapsulated, well-defined tasks using a service interface? Legacy host applications are prime candidates to participate in a service-oriented architecture. Immediate returns can be achieved by exposing services as simple as answering a request, or as sophisticated as a long-running business process. Exposing legacy application transactions as services reduces development and deployment costs for integration projects. At the same time, it provides a long-lived, extensible method of access that leverages the control of the host security structure.
Open standards and flexibility - Open standards should be fundamental to your legacy integration solution. Take a look at your infrastructure. Do you own your mainframe? Do you have multiple host types? What client platforms are you working with? Your integration solutions should make legacy assets consumable for your back-end and front-end products as Web Services, COM(+), .NET, XML, JavaBeans, EJB, JDBC, ODBC, JCA, OLE DB, or ADO. If applicable, look for interoperability with major application servers, integration brokers, IDEs, higher-tier BPM tools, and a range of back-end systems. Adhering to open standards allows you to find a solution today that you can work with tomorrow.
Security - Because of the inherent value of your organization's host assets, security within the existing infrastructure is essential. Determine what you will need in a front-end or back-end security system. For example, a direct transactional implementation against an IBM mainframe might require that every transaction be discretely authenticated against RACF. Or a Web-based screen access project might require authorization against a front-side ticket server or directory service. With new integration tools, developers can leverage the mainframe's trusted security mechanisms, rather than building new models.
Scalability and future needs - Is your organization growing? Do you plan to enable more customer and partner access to your host applications? To meet the scalability requirements of your organization, look for a solution that includes industry-standard load balancing and clustering methods to ensure high availability and continuity of performance.
When do you start?
Many issues drive legacy integration, such as increased ROI and real-time access. Integration tools should offer long-term flexibility while providing a fast return on investment. Reliability and ease-of-use are also important factors in any purchasing decision, because IT staffs no longer have the luxury of time or lengthy implementations. You can simplify the access and integration of legacy data and applications with new application development tools - and the sooner the better.

About the Author:
Jennifer Shettleroe is vice president of product development for Attachmate Corporation. Send email to her c/o pr@attachmate.com.

posted by OttoKee  # 6:52 PM
Virtual Mainframes
Q&A with Vince Re, Chief Architect and Technology Strategist, Computer Associates

DCTA: There's a lot of attention on the autonomic data center these days. What's CA's strategy?
Re: Both autonomic and on-demand are very important strategic initiatives for CA. We're looking at end-to-end management from the mainframe all the way down to the smallest platforms. There are a lot of components to that. Virtualization is one important piece of it. On the mainframe, VM is an obvious thing. On Intel platforms, that might be VMware, or Microsoft's virtualization technology. We're working in all of those environments.

DCTA: How can virtualization benefit customers?
Re: The Holy Grail there is that utilization and efficiency balance. Today, people tend to have a mentality of running applications with dedicated servers. The problem with that is you end up with incredibly low utilization. Leveraging virtualization technologies and autonomic provisioning changes the system to adapt to the workload, rather than being allocated in advance. I think we can get those efficiencies up dramatically. If you study a large environment, you find the utilization numbers are really horrible, 10 percent or less in a lot of cases. Even if we just improve that to 20 percent, we've still made a huge dent in the on-demand problem….
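[A toy illustration of the point Re is making - the demand curve, per-server capacity and 20-percent headroom policy below are invented, not CA's: provisioning tracks the workload instead of being sized for the peak up front.]

# Toy sketch of workload-driven provisioning: grow and shrink the pool of virtual
# servers to track demand rather than allocating for the peak in advance. The
# demand curve, capacity and headroom policy are invented for illustration.
import math

CAPACITY_PER_SERVER = 100     # workload units one virtual server can absorb
TARGET_UTILIZATION = 0.80     # provision so servers run busy, with ~20% headroom

def servers_for(demand):
    return max(1, math.ceil(demand / (CAPACITY_PER_SERVER * TARGET_UTILIZATION)))

hourly_demand = [120, 300, 650, 900, 400, 150]    # invented demand curve
allocated = 0
for hour, demand in enumerate(hourly_demand):
    wanted = servers_for(demand)
    if wanted != allocated:
        print(f"hour {hour}: demand {demand:>3} -> resize pool from {allocated} to {wanted} servers")
        allocated = wanted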

DCTA: Is this a problem at mainframe sites as well?
Re: It's less of an issue for mainframe sites. People who invest in mainframes already know how to run those environments at high utilization. They're very expensive servers, and you wouldn't buy a mainframe to run it at 10 percent utilization. It seems autonomic and on-demand computing is really something that goes beyond what traditional mainframe sites would be interested in.

DCTA: Virtualization is nothing new to the mainframe world, then.
Re: We've always had a big investment in z/VM. What we're seeing now is a kind of in-between approach. I could manage my Linux environment using z/VM as the focal point. We've seen quite a new life in the z/VM space.

DCTA: Is Computer Associates seeing a lot of Linux in its mainframe customer base?
Re: We were the first serious commercial supplier there, with a very early product line out on mainframe Linux. I don't have specific percentages or numbers, but mainframe Linux comes up in every one-on-one customer briefing we do these days.

DCTA: In your opinion, what's driving the popularity of mainframe Linux?
Re: IBM deserves some credit there. They've done a lot of marketing to make it attractive for large sites to run Linux. The new zSeries 990 computers, and the pricing of Linux CPU engines on these, has done a lot to get the TCO numbers to where the customers want to see them. That's always been the problem with the mainframe. It's a great platform, it's got all kinds of sophisticated and advanced features, but how do you get the TCO numbers to work out right?

DCTA: You're seeing a renaissance for big iron, then?
Re: Mainframe Linux hasn't been out that long yet, but already, we're seeing a few generations almost leapfrogging those technologies. When mainframe Linux first came out on the older 31-bit mainframes, there were some cost advantages there, but then blade technology offered even bigger cost advantages. Now we're seeing the pendulum swing back towards mainframes a bit, with offerings such as the zSeries 990.

DCTA: With more application choices as well?
Re: We've seen some companies do pilot projects on z/OS or OS/390 with new, leading-edge technology - such as big Web applications - and it just wasn't a good fit there. Now that Linux is an alternative, some of them are saying, 'our app fits much better in Linux than it would have in z/OS, let's bring it over and run there.'

DCTA: Are you seeing any customers adopting mainframes that are new to big iron?
Re: We've seen a few, but they'd be the exception rather than the rule. They're most often ISPs concerned about costs, and the idea of hosting huge numbers of Linux systems under z/VM on a single mainframe fits their business model very well. For many non-mainframe sites, there's a fear factor there. When you talk about mainframes, sites that have never seen one have a mental image of what that requires. They're focused on the skill set issue, and getting people to manage it, and the costs of managing.

DCTA: CA has a lot of mainframe tools, such as databases and job schedulers. Are you adding Linux support into these tools?
Re: We have customers managing Linux as a distributed platform, like any other Unix platform. Others are managing it as an extension of z/OS. We've ported our traditional Unix family of products - backup, databases - up to mainframe Linux. We also provide solutions to manage Linux from existing z/OS tools. With our job scheduling products, for example, we can put a small scheduling agent on a Linux system, and manage it from our z/OS products.

DCTA: What's the biggest challenge in moving sites to mainframe Linux?
Re: There's an uneasiness with the availability of mainframe skills. To really make the Linux equation work well on the mainframe, you need z/VM in the picture some place. But that's a new set of skills in most sites as well. There are certainly not major pools of z/VM talent out there in the market today.

DCTA: Why is z/VM so critical?
Re: z/VM is another important mainframe operating system that also provides important virtualization capabilities that in fact benefit Linux users tremendously by enabling hundreds of Linux instances to coexist on a single computer. Linux, in fact, has been responsible for a resurgence in z/VM popularity. Where Linux comes into play is that it provides important new capabilities to z/OS - access to the vast body of open source applications, for instance - while allowing for important new server consolidation efforts alongside existing mainframe workloads.

DCTA: Any other obstacles to mainframe Linux adoption?
Re: Software availability is another issue. This is the case especially at big, complex sites, with a lot of applications and layers and layers of middleware that come from lots of different vendors. A lot of those vendors have inconsistent commitments to mainframe Linux. Depending on what you choose as your middleware stack, you might have some work to do to convince your vendor to support mainframe Linux before you can get your application there. Oracle, for instance, took about a year before they decided to go ahead and do it.

DCTA: How do you see Linux changing the shape of large data centers?
Re: Linux provides new options to save money through server consolidation - such as re-hosting many existing Unix and Linux/Intel workloads on single mainframes. It's causing some rethinking of specific hardware choices and policies. For instance, in some situations, it's less expensive to have Linux on small, standalone mainframes - such as the IBM z800 models. Linux is also causing a growth in basic Unix/Linux skills in many sites.

DCTA: Are mainframe centers becoming big boosters of open-source, then?
Re: Linux is triggering sites to become involved with the open source community. This encourages integration of innovative open source tools within traditional data centers. It's also causing sites to rethink commitments to single-platform proprietary infrastructures, such as .NET.

DCTA: How about the impact on mainframe storage?
Re: There's a rethinking of TCO numbers around storage devices. Linux on the mainframe includes some basic SCSI FC capabilities - for the first time, this is enabling mainframes to participate in advanced SAN setups. However, there is also a hidden problem. If you have a big workload running on HP-UX, or some other Unix platform, odds are good that the data that goes with that application is tied to that platform through some type of SAN infrastructure. If you move a lot of that work to mainframe Linux, you've got a conceptual problem: how do you get all the data there? A lot of sites don't want to be in the situation where they have to duplicate their investment in disk just to move transactions over to Linux. They'd like to find some way of just unplugging all that data from one platform and plugging it into the mainframe. That's where IBM needs to do a better job supporting the SAN environment today.

DCTA: Are you seeing migrations off traditional mainframe operating systems to Linux?
Re: I don't know if there's a lot of OS/390 or z/OS work going to Linux at this point. There is more going from other Unix platforms to Linux. We've worked with a few customers who had heavy investments in Unix, and are pushing some of that work to Linux on mainframe. Though Linux is gradually becoming established in more and more sites, sites needing absolute peak performance, scalability, security and fault tolerance will generally continue to rely on z/OS in addition to Linux.

DCTA: Where is it best to keep running z/OS?
Re: Large mainframe sites with databases such as CA Datacom rely on z/OS and Parallel Sysplex to achieve transaction rates in the vicinity of 100,000 transactions per second with complete fault tolerance and security. This level of performance requires a database engine that's finely tuned to the underlying hardware architecture - which today means z/OS. Mainframe database and transaction processing systems also contain untold terabytes of data and billions of lines of application code; migration to Linux or any other platform is a major undertaking, and given the rich functionality of z/OS, most sites are content to leave existing systems there.

posted by OttoKee  # 6:50 PM
DB2 for z/OS V8 Unveiled

Almost three years to the day after it released DB2 UDB Version 7, IBM announced general availability of Version 8 for IBM's eServer zSeries mainframes. "Because of the pause, this is the largest set of enhancements in our history," Jeff Jones, director of strategy at IBM DB2 Information Management Software, told DBTA in an exclusive interview. The new database software delivers over 100 new features and functions.
.... According to Jones, the enhancements for Version 8 can be clustered into several categories. It takes fuller advantage of the 64-bit technology found in the zSeries, he said. Moreover, it offers fine-grain security control at the row level, the ability to use longer table and column names, and new high-availability features. "We have a loyal and vocal customer base. These are all features they want," Jones said. The new column and table name scheme, for example, will make it easier to integrate mainframe data with packaged applications and other data found on other platforms.
.... DB2 UDB Version 8 for z/OS has new technology - which eventually will be incorporated into DB2 for distributed platforms - that allows administrators to make changes to the underlying schema while keeping DB2 online. "This is in keeping with the idea that the database has to be up 24/7," Jones said. In related news, BMC Software, Inc., a leading provider of enterprise management solutions, has announced its plans for support of current and future versions of DB2 Universal Database (UDB) for z/OS. The company has unveiled a nine-month roadmap of DB2 UDB for z/OS V8-supported SmartDBA data management solutions that span the areas of performance, administration and recovery. "We have had a copy in-house since November 2002," Rick Weaver, product manager at BMC, who plans strategic efforts for the mainframe sector, told DBTA in a private interview. "We will have a phased release of products as customers migrate to Version 8."
.... According to Weaver, BMC customer surveys indicate that many companies will begin their migration efforts in the next six to 18 months and will require an additional six to 18 months to begin to use Version 8's new features. Based on that schedule, the 20 SmartDBA data management solutions deemed most critical by customers to support DB2 V8 will be released in approximately 90 days. A second and third wave of products will be delivered within six months and nine months, respectively. Additional functionality, automation and integration into the SmartDBA family of data management solutions will be added in parallel with this support. "Supporting Version 8 has been our top priority," Weaver said.


posted by OttoKee  # 6:47 PM
Mainframe Share of the DB Market
Mainframes will serve as the platform for only 15 percent of the world's databases in 2007, compared to 25 percent two years ago and 35 percent in 1997, according to a new study by the Aberdeen Group. Databases running on Microsoft operating systems will be the big gainer, jumping from 10 percent of the market in 1997 to 35 percent in 2007. Nonetheless, the biggest, most powerful, most mission critical databases should continue to be associated with mainframe technology. Moreover, the report noted, the cost of replacing legacy databases is often prohibitive. While holding a smaller share, mainframe database technology will continue to play a critical role in many enterprises.

posted by OttoKee  # 6:44 PM
The Mainframe Turns 40
Last week, the Computer History Museum in Mountain View, CA, celebrated the mainframe computer's fortieth birthday. In the early 1960s, IBM took $5 billion (the equivalent of $30 billion today) and bet the company on new computer technology. And on April 7, 1964, legendary IBM leader Tom Watson, Jr. announced the launch of the System/360, hailed as the largest privately financed commercial project ever. Of course, it changed the world.
.... At the celebration hosted by IBM's Nicholas Donofrio, senior vice president of technology and manufacturing, System/360 pioneers Bob Evans and Fred Brooks provided a behind-the-scenes view of the bold decisions that led to the new mainframe technology. And both credited the hundreds of IBM team members that made the 360 a success. Evans noted in his presentation that for the first few months after the announcement, sales lagged. "Damn, we've made the equivalent of the 1935 Chrysler Airstream. It's too modern to sell," Evans said. It wasn't until July that they started selling and by year's end, sales more than doubled projections. By 1966, they were shipping 1000 System/360s per month. Revenue grew from $3.2 billion in 1964 to $4.2 billion in 1966, to $7.5 billion in 1970.
.... Brooks noted just a few of the business innovations originating from the System/360: SABRE, the American Airlines reservation system; Medicare, the processing of 19 million ID cards for the Social Security Administration; NASA's Vanguard, Mercury, Gemini and Apollo space missions; and the Universal Product Code (UPC), now known as the "bar code." Moreover, the IMS database was built for NASA as part of the Apollo program that put the first man on the moon.

posted by OttoKee  # 6:37 PM
