Information Systems Management, Summer 2001, Vol. 18, Issue 3, p. 30 (ISSN 1058-0530). Auerbach Publications Inc.
WAP AND EMERGING TECHNOLOGIES

INTERNET TECHNOLOGIES FOR IMPROVING DATA ACCESS

The term "Internet technologies" has recently been coined to describe a set of software capabilities aimed at improving data communications and data access in contemporary enterprise information systems. This article presents a framework for implementing Internet technologies to improve enterprisewide data access, particularly in terms of the user interface, communications networking, the application platform, and the hardware-operating system platform. Examples of software products are given to illustrate how Internet technologies affect these platforms.

CONTEMPORARY CORPORATIONS TODAY, more than ever, are faced with tremendous competition in a rapidly changing environment. To cope with increasing market demands, many corporations are turning to information technology (IT) in order to improve the efficiency and effectiveness of data processing and to use it as a competitive advantage. Since the emergence of the Internet (whose technology has often been seen as the most significant technological enabler of the latest decade), researchers and IT professionals consider the application of this technology in business to be one of the Internet's most valuable contributions. The term "Internet Technologies" (InT) has recently been coined, and it refers to a set of software capabilities aimed mainly at improving data communications and data access in contemporary enterprise information systems. In addition to standard Internet services (telnet, ftp), InT's most prominent component today is Web technology. Several protocols and languages have been developed, such as HTTP, HTML, DHTML, Java, ASP, and XML, and consequently, new approaches and paradigms in application development have emerged.

The job of IT, including both IT vendors and IT staff in organizations, is to make access to information seamless and easy, especially for managers. In contemporary conditions, it is not reasonable to expect decision makers to spend their time in IT training simply to be able to use software. Computer resistance among end-users is still one of the biggest problems that IT personnel must cope with in organizations. From the perspective of ease of use, Web technology can help by easing the process of connecting to several types of information: data from business-critical applications, e-mails, faxes, documents, etc.

ENTERPRISE INFORMATION SYSTEM'S REQUIREMENTS FOR INT

Organizational requirements for Internet technologies depend on the type and complexity of the business. Therefore, a contingency approach should be applied when considering possible ways to implement InT. From an enterprisewide data access perspective, in a generic case, the following requirements are expected to be fulfilled by InT:

- Efficient access to all kinds of data, including remote access for telecommuters
- Secure data transfer
- Support for multiple hardware and software communication protocols
- Support for high-speed LAN and WAN technologies
- Data and application integration support (support for different data and application formats)
- Multiple hardware-operating system platform support

These requirements can be grouped into the following IT dimensions, or IT-related contingency factors, as the most important factors for establishing efficient and effective enterprisewide data access (Exhibit 1):

- User interface platform
- Communications-networking platform
- Application platform
- Hardware/operating system platform

USER INTERFACE PLATFORM

The User Interface Platform provides efficient and user-friendly data access.

Traditional Data Access

Access to corporate data has always been determined by the type of information architecture on which an information system is built (Exhibit 2). In the mainframe environment, the processing is done by a mainframe computer, while the users work with character-based or "dumb" terminals. The terminals are used to enter or change data and access information from the mainframe. This was the dominant architecture until the late 1980s. A version of this computing environment is an architecture in which PCs are used to connect to host machines through so-called PC-terminal emulation programs.

A client/server-based information architecture divides processing into two major categories: (1) clients and (2) servers. The client is a computer such as a PC or a workstation attached to a computer network consisting of several dozen (hundreds or even thousands) clients and one or more servers. The server is a machine that provides clients with services. Examples of servers are the database server that provides a database and the SMTP server that provides e-mail services. Client/server applications have their own client programs that need to be installed on all client machines. End-users access data by using these client programs.
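The division of labor described above -- a client program sending requests over the network and a server answering them -- can be illustrated with a minimal Python sketch. All names and the canned reply are invented for illustration; a real database server would, of course, parse the query and consult actual data:

```python
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    """Start a minimal single-request server on a free port; return the port.
    Stands in for a database or SMTP server from the text."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        request = conn.recv(1024).decode()
        # A real server would execute the query here; we echo a canned reply.
        conn.sendall(f"RESULT for {request}".encode())
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]

def client_query(port, query):
    """Client program: connect to the server, send a query, read the reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(query.encode())
        return sock.recv(1024).decode()
```

In a real deployment, the client program above is exactly what must be installed on every client machine -- which is the administrative burden Web-based access later removes.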

Another method of access within the traditional data access framework is based on PC-X Windows technology, which emerged after the introduction of graphical user interfaces on host machines (e.g., X/Motif on UNIX).

Multiplatform Data Access

Access to corporate data in contemporary conditions is first determined by a device that is used by end users (Exhibit 3). From that viewpoint, the types of data access are:

- Terminal-based
- PC-based
- Portable devices-based

PC-Based Access. In information technology (IT) history, the invention of the graphical user interface (GUI) was a revolutionary step in improving both the efficiency and effectiveness of IT end-users. The GUI has become dominant both in operating systems (e.g., MacOS, Windows 9x, OS/2 Warp, and Motif-CDE on UNIX) and in application software. Therefore, character-based terminals have today largely been replaced by PCs and PC-terminal emulation programs. The vast majority of end-users access corporate data from desktop computers by using the following:

- PC-terminal emulation programs
- PC-X Windows programs
- Standard client programs within client/server application platforms
- Web-to-host access tools
- Web-enabled client programs within client/server application platforms
- Enterprise Information Portal applications

All of these programs represent different forms of user interface through which end-users access applications. The primary design goal of such a program (which is often part of the application) is that the information it contains be easily accessible and retrievable by end-users at the time they need it. In fact, such a program does not have to contain information; rather, it should provide a way of accessing it, no matter where that information is stored or which device a user connects from. As mentioned above, early efforts to develop such systems were limited to the implementation of terminal emulation access tools and PC-X Windows emulation programs. The move toward more efficient data access tools began with the advent of Internet and Web technologies. The structure of the user interface evolves with the state of the art of information technology: the lowest level uses only PC-terminal emulation tools, whereas the most sophisticated solutions include Web-based enterprise portals. After the introduction of Web technology in 1994, the Web browser turned out to be the most convenient way for end-users to work with computers because it is based almost entirely on "mouse-click" operation. This became possible thanks to the HTTP protocol, the HTML language, and other Internet/Web advances.

Web-to-host access tools, as a specific subset of Internet technology, are used to improve and ease access to several types of information: legacy data, messaging systems, electronic documents, business intelligence, and so on. Access to legacy data through user-friendly applications (standard client/server applications and Web-based applications for intranets and the Internet) requires a processing layer between the applications and the data. Web-to-host technology makes it possible for users to access data stored on legacy hosts just by clicking a Web link. What's more, it cuts the cost of software ownership through centralized management.

Enterprise portal is a new approach in intranet-based applications, and therefore it is often referred to as next-generation intranet. It goes a step further in the "Webification" of applications and integration of corporate data. There have already been several "portal-based" products, particularly in the business intelligence area.

The concept has been extended to the "enterprise information portal" (EIP), which describes a system that combines the company's internal data with external information. An integrated portal solution on an enterprise level provides an efficient Web-based interface to all kinds of data coming from all relevant business applications (TPS, messaging system, document management system, and business intelligence system). It also adds access to external information such as news services and customers' or suppliers' Web sites. The Gartner Group (www.gartner.com) lists eight components that it identifies as critical for a complete EIP solution: (1) security, (2) caching, (3) taxonomy, (4) multirepository support, (5) search, (6) personalization, (7) application integration, and (8) a metadata dictionary. The Hummingbird Enterprise Information Portal (EIP) is an example of an integrated enterprisewide portal solution (www.hummingbird.com). It provides companies with a Web-based interface to structured and unstructured data sources and applications.

Portable Devices-Based Access. The ultimate goal of using portable computing devices that are designed as companion products to personal computers is again to improve information access for mobile users or teleworkers, first just to access and download data, but later to upload data as well. Currently, these devices are mainly used by managers and service workers to manage their schedules, contacts, and other business information, and they can synchronize this information with a PC. In addition to standard office scheduling needs, it is customer interaction software and customer relationship management (CIS-CRM) that drive the personal digital assistant (PDA) market, with applications such as sales force automation, customer support, service support, and maintenance. At the same time, both ERP and CIS-CRM vendors are already working on introducing non-PC links to their sites (PDAs, Windows CE-based hand-held PCs, GSM). There are three different forms of portable devices:

1. Standard hand-held devices or hand-held PCs (H/PCs) -- These provide the user with a screen and a small but usable keyboard. Data entry and access are provided via keyboard, function buttons, and even a mouse. These devices mainly run the Windows CE operating system.

Windows CE incorporates many elements of the well-known Windows 9x OS platform. Basic Windows CE programs for hand-held PCs include pocket versions of Microsoft Office suite. By using Microsoft ActiveSync(TM) technology, the Windows CE Services component automatically synchronizes information between a handheld PC and the desktop.

2. Palm-held devices or personal digital assistants (PDAs) -- These are keyboard-less devices that rely on function buttons to activate applications and access or enter information. They run either Windows CE or 3Com's PalmOS.

3. Cellular telephone-based devices -- Even these standard phone communication devices are being enhanced from a visual information access perspective, equipping users with keyboards and small screens. Several GSM vendors (Nokia, Ericsson, Motorola, and Psion) announced the formation of a joint venture called Symbian that will standardize the creation of wireless information devices, such as smartphones and communicators. These devices will run the EPOC operating system for mobile wireless information devices and applications designed by Starfish Software (www.starfish.com).

COMMUNICATIONS-NETWORKING PLATFORM

With the enormous growth of computer networking, business data communications, and particularly the Internet, the demand for a fast, reliable, and cost-effective data communications backbone has also been growing. Several high-speed communication technologies are available, such as Fast Ethernet, FDDI, leased lines, ISDN, ATM, xDSL, cable modems, wireless connections, etc. Most of these technologies are costly to implement and maintain; therefore the selection of the appropriate one is a critical point in creating an IS architecture. In the section that follows, these high-speed networking technologies are briefly explained.

Technologies for WAN Infrastructure

Leased Lines. These have traditionally been used as a WAN backbone for establishing an intra-company network infrastructure. From a technological perspective, they are telephone lines that are leased for private use, forming a dedicated telephone line between two points. Leased lines are capable of carrying data at several rates, ranging from 56 Kbps, up to 1, 2, or more Mbps. T1 lines are widely used as the major data transfer backbone in the U.S. and have a capacity of 1.544 Mbps. T3 lines transmit data at 30 times that rate. For small businesses with a few users who rely on standard utilization of an e-mail messaging system and the Internet, a 56-Kbps leased line would be sufficient. Businesses that rely on heavy e-mail messaging traffic and heavy use of Web technologies should select a T1 or T3.
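To make these line rates concrete, here is a small back-of-the-envelope calculator. It is illustrative only: it ignores protocol overhead and assumes the T3 rate is roughly 30 times T1, as stated above:

```python
def transfer_time_seconds(size_megabytes, rate_kbps):
    """Time to move a file of the given size over a line of the given rate.
    Ignores protocol overhead, so real transfers take somewhat longer."""
    size_bits = size_megabytes * 8 * 1_000_000   # decimal megabytes
    return size_bits / (rate_kbps * 1000)

# Rates from the text: a 56-Kbps leased line, a T1 (1.544 Mbps),
# and a T3 (about 30 times the T1 rate).
for name, kbps in [("56-Kbps line", 56), ("T1", 1544), ("T3", 1544 * 30)]:
    print(f"10 MB over a {name}: {transfer_time_seconds(10, kbps):.1f} s")
```

Run for a 10-MB file, the calculation shows why heavy Web and e-mail traffic pushes a business from a 56-Kbps line (roughly 24 minutes) to a T1 (under a minute).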

Fiber Distributed Data Interface (FDDI). FDDI is both a LAN and a WAN technology. It is mainly used as a network backbone connecting two or more LAN segments. A simple backbone might connect two servers through a high-speed link consisting of network adapter cards and cable. Fiber channel refers to a relatively new technology, with the most common usage being the connection of clustered servers in a distributed computing environment. FDDI and fiber channel support data transmission speeds of 100 Mbps.

X.25 and Frame Relay Packet-Switched Network Protocols. X.25 is a simple, commonly used, and inexpensive WAN technology. Although it is widely available, X.25 is slow when compared to newer technologies. Frame relay works at the data-link layer of the OSI model and provides data transfer rates from 56 Kbps to 1.544 Mbps. Frame relay services are typically provided by telecommunications carriers. This technology is less expensive than other WAN technologies because it provides bandwidth on demand, rather than dedicating lines whether data are being transmitted or not. A version of frame relay called International Frame Relay is suitable as a WAN backbone for international corporations.

Cell Relay or Asynchronous Transfer Mode (ATM). ATM is also both a LAN and a WAN technology that is usually implemented as a backbone technology. ATM is a very scalable networking platform, with data transfer rates ranging from 25 Mbps to 2.4 Gbps.

Synchronous Optical Network. Synchronous Optical Network (SONET) is a WAN technology that works at the physical layer of the OSI model. It provides data transfer rates from 51.8 Mbps to 2.48 Gbps.

Virtual Private Network (VPN). A VPN is a way of organizing a WAN infrastructure by using public switched lines with secure messaging protocols; in effect, the public Internet infrastructure is used for business data communications. It should be noted that the attribute "virtual" does not mean the platform is dedicated to virtual business: a VPN infrastructure can be used in any type of business. With a VPN, users at remote locations (branch offices) can not only access the company messaging system (e-mail and faxing) and its intranet, but also use applications running on servers. WAN-VPN platforms are usually established, maintained, and managed by telecom companies or ISPs and then outsourced to companies willing to use this type of WAN. If a company wants to maintain control over its WAN-VPN infrastructure, it may choose to build its own VPN instead of outsourcing it. This approach is cost-effective for companies with a number of remote offices, not only because it provides an efficient network connection, but also because it allows centralized network management.

The VPN-based WAN usually includes:

- A gateway that encrypts data packets and authenticates users. VPN gateways sit behind firewalls that at most sites are incorporated into the routers.
- VPN management software that lets network managers configure and manage VPNs from a single computer. This software is usually sold as an integrated suite combining hardware, software, and services in order to simplify deployment of VPNs. A VPN system requires firewall and tunneling software with an LZO compression utility that improves dial-up connections.
- Client software for users to connect remotely. This allows telecommuters, mobile workers, and other remote users to take advantage of dialed Internet connections for convenient, low-cost, secure remote access.
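The gateway's user-authentication role can be sketched with a toy message-authentication scheme. This is not a real VPN protocol (production VPNs use standards such as IPsec or TLS-based tunnels); it only illustrates how a shared secret lets a gateway reject packets that were forged or tampered with in transit:

```python
import hashlib
import hmac
import os

# In practice, the key is provisioned out of band (e.g., by the VPN
# management software); here we just generate one for the demo.
SHARED_KEY = os.urandom(32)

def sign_packet(payload: bytes) -> bytes:
    """Client side: append an HMAC tag so the gateway can
    authenticate the sender of this packet."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def gateway_accepts(packet: bytes) -> bool:
    """Gateway side: recompute the tag over the payload and compare
    in constant time; reject anything that does not verify."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

A real gateway would also encrypt the payload; authentication alone, as here, only guarantees integrity and origin, not confidentiality.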

However, the VPN concept has some disadvantages: VPN-based WANs can be slower because of the overhead of compression and encryption, less robust, and more vulnerable to hackers.

Technologies for Remote Access

56-Kbps Dial-Up Modem Connection. A technology called x2, introduced by U.S. Robotics, together with V.90, a data transmission recommendation developed by the ITU (International Telecommunication Union), provides a specification for achieving data transfer speeds of up to 56 Kbps over standard public telephone lines. With standard V.42 compression, 56K modem technology can download at speeds up to 115 Kbps.

ISDN. ISDN is a set of protocols that integrate data, voice, and video signals over digital telephone lines. ISDN offers data transfer rates between 56 Kbps and either 1.544 or 2.048 Mbps, depending on the telecom infrastructure of the country. It requires special equipment at the user's site, but the user can talk on the telephone and transfer files at the same time. In addition to remote access, ISDN can also be used as a WAN backbone.

Cable Modem. Cable modem technology makes use of cable-television (CATV) infrastructure by hooking up a computer to a local CATV line. It is expected to replace standard dial-up and ISDN connections very soon because data transfer speed can reach 1.5 Mbps. The cable system is a shared medium, a fact that should be taken into consideration when considering this type of connection. The cable modem usually has two ports: (1) an Ethernet port for attaching a standard Ethernet card in the computer and (2) a coaxial port that is used for the incoming CATV wire.

ADSL. In recent years, several versions of DSL (digital subscriber line) technology have emerged. Because several variants exist, the technology is often referred to as xDSL. Like ISDN, xDSL is a digital packet technology, but it usually uses a dedicated rather than a switched connection. With the appropriate devices, it can deliver signals at speeds in the range of 1.5 to 6 Mbps over the existing telephone wiring system. Therefore, Asymmetric DSL (ADSL) is often considered an alternative to dial-up or even ISDN.

Wireless LAN and Wireless Internet. With a wireless LAN technology, mobile users can connect to a LAN through a radio connection. A wireless LAN is a data communication system that uses electromagnetic waves for transmitting data over the air. It can be implemented either as an extension to the existing standard LAN or as an alternative to it. It has gained public interest with the emergence of remote access computing devices such as notebooks, hand-held computers, and PDAs. These devices can be used for more efficient and effective communication among users, as well as for data exchange with host systems. Wireless Internet access is also supported. For example, hand-held PCs running the Windows CE operating system include the Pocket Internet Explorer browser for remotely accessing the Web or a company's intranet.

LAN Technology: Ethernet and Fast Ethernet

Ethernet is a typical LAN technology. Standard Ethernet-based LANs transmit data at speeds up to 10 Mbps. Newer Ethernet cards known as Fast Ethernet represent a high-speed LAN technology because they can provide data transfer rates as high as 100 Mbps. Two new Ethernet standards currently being developed are Gigabit Ethernet (up to 1000 Mbps) and 10 Gigabit Ethernet (with a data transfer rate of 10,000 Mbps).

When Ethernet cards are used to connect computers to LANs, they serve at the same time as an entry point for establishing connections to a WAN and the Internet -- hence their importance from a data access perspective. Additionally, Ethernet cards are used today in combination with other communication devices for remote access, e.g., cable-modem technology.

Communication Protocols and Applications

Communication protocols and applications are in fact what drives data access. In short, protocols are sets of hardware and software rules that communication end-points must follow in order to exchange some sort of information. From e-mail and other Internet services (telnet and ftp) to Web-based technologies, videoconferencing, and transaction-oriented applications such as EDI and E-commerce, these applications -- together with both hardware and software communication protocols -- enable companies to organize more efficient data communications. Communication applications usually come in pairs with the software protocols that enable them. For enterprisewide data access, the following combinations of Internet technologies are the most important: TCP/IP-Internet, e-mail/SMTP, Web/HTTP-HTML-XML, and GSM/WAP (Wireless Application Protocol).
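The pairing of an application with its protocol can be seen in the e-mail/SMTP combination. The sketch below builds a standards-compliant message with Python's standard library; the addresses are placeholders, and the actual hand-off to an SMTP server is shown only as a comment:

```python
from email.message import EmailMessage

# Compose a message in the RFC 822/MIME format that SMTP servers relay.
msg = EmailMessage()
msg["From"] = "manager@example.com"
msg["To"] = "team@example.com"
msg["Subject"] = "Quarterly figures"
msg.set_content("The Q3 report is available on the intranet portal.")

# smtplib.SMTP("mail.example.com").send_message(msg) would hand this
# message to an SMTP server; here we only inspect the wire format.
print(msg.as_string())
```

The point is the separation of concerns: the application builds a structured message, and the protocol (SMTP) only defines how that message moves between servers.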

As a supplement to standard e-mail messaging technology, videoconferencing technology enables remote users to not only communicate with each other to exchange standard data, but also to organize virtual meetings, exchange video and audio data, and share data and applications as well. In order to be able to use this technology, in addition to a standard PC, an additional set of hardware-software facilities is needed: this includes a camera that is usually installed on top of a PC, speakers, a microphone, and videoconferencing software. Using videoconferencing technology in contemporary business is on the rise as prices for equipment and communications fall.

B2B E-commerce may take many forms, depending on the technology that is used. Over the last decade, Electronic Data Interchange (EDI) has been used as a means of exchanging business documents over private networking infrastructures, using a predefined data-document format. With the explosion of Internet and Web technology, EDI has been partially replaced by Internet-based E-commerce applications, which include several forms such as online catalogues, virtual malls, online buying and selling, etc. The main advantage of E-commerce over EDI is that no additional equipment is needed and transactions can be made over public network infrastructure; however, the primary concern with E-commerce is still security. Currently, EDI is expected to be replaced by XML, which is emerging as the standard for E-commerce.
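A hint of why XML suits document exchange better than rigid EDI formats: an XML business document is self-describing, so a trading partner can parse it without agreeing on fixed field positions in advance. The element names below are invented for illustration; real B2B schemas are standardized and far richer:

```python
import xml.etree.ElementTree as ET

# Build a small, self-describing purchase order (hypothetical structure).
order = ET.Element("purchaseOrder", number="1001")
item = ET.SubElement(order, "item", sku="A-42")
ET.SubElement(item, "quantity").text = "3"
ET.SubElement(item, "unitPrice", currency="USD").text = "19.95"

xml_text = ET.tostring(order, encoding="unicode")
print(xml_text)

# The receiving partner parses the document back into a tree and reads
# fields by name, not by byte offset as in a fixed EDI record.
parsed = ET.fromstring(xml_text)
```

Because both sides navigate the document by element names, either party can add fields without breaking the other's parser -- the flexibility that fixed-position EDI formats lack.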

APPLICATION PLATFORM

The application platform improves efficiency in legacy data access, application development, and data and application integration. It also reduces the applications' TCO. In contemporary business, the following are most commonly in use:

- Transaction processing applications -- applications that capture business data in the course of doing all business operations
- Business intelligence applications -- aimed at improving decision-making performance
- Messaging and collaboration applications
- Document management applications
- E-commerce applications

Even though these systems can be implemented separately, they are interrelated, and some sort of integration is a prerequisite. Therefore, enterprise application integration tools play a critical role in every information system. Consequently, in a contemporary information system, the application platform consists of several applications such as the following (see also Exhibit 4):

- Transaction Processing System (TPS)
- Messaging and Collaboration System
- Document Management System
- Business Intelligence System
- CRM and SCM systems (Customer Relationship Management and Supply Chain Management), as special cases of TPS for managing relations with customers and suppliers
- Electronic Commerce System with links to the electronic marketplace

Internet technology -- actually, its most prominent component, Web technology -- has particularly influenced corporate application platforms by:

- The so-called "Webification" of data access to legacy systems (Web-to-host connectivity tools)
- Introduction of new approaches to application development (Web-enabled applications) and application integration (middleware, Enterprise Application Integration tools)
- Invention of the ASP (application service provider) model of a corporate application platform

Webification of Data Access to Legacy Systems

With the emergence of Web technology and the Web browser as a unique GUI interface, independent software vendors (ISV) started working on Web-based gateway or middleware products that should provide browser-based access to corporate legacy data. Among others, the following Web-to-host products are available:

- WRQ Reflection EnterView (www.wrq.com)
- Cyberprise Server and Cyberprise Host (www.walldata.com)
- Host On-Demand (www.ibm.com)
- HostView Server (www.attachmate.com)
- OC://WebConnect Pro (www.openconnect.com)
- On Web Host (www.netmanage.com)
- Persona (www.persoft.com)
- Salvo (www.simware.com)
- ClientSoft (www.clientsoft.com)
- ACUCOBOL-GT (www.acucorp.com)
- ISG Navigator (www.isg.com)

All of these programs are created for different host/OS platforms, such as IBM OS/390, IBM OS/400, and Digital/Compaq OpenVMS, or for specific application platforms, e.g., COBOL applications, RMS-based applications, etc. Some of them provide only access to host data, whereas others function as middleware or gateways that add GUI capabilities, integrate with client/server applications, or even convert non-DBMS data into DBMS format. These programs are mainly based on a host emulation server -- software that runs on any Web server platform. The emulation server downloads Java or ActiveX applets to the browser, and the applets let the browser establish a connection to the host using the appropriate terminal emulation protocol: TN3270 for IBM mainframes, TN5250 for IBM AS/400 systems, and VT100-400 for Digital VAX/Alpha systems. A recent report by International Data Corp. (www.idc.com) found that the worldwide market for Web-to-host browser license shipments was exploding: from 67,000 desktop licenses in 1996 to an estimated 17 million in 2002. IDC also predicts that shipments of Web-to-host gateways will surge from 2,200 units in 1996 to more than 330,000 in the year 2001, representing sales in excess of $1 billion.

According to The Gartner Group (www.gartner.com), 74 percent of all corporate data still resides on legacy mainframes. Meta Group also estimates that more than 70 percent of corporate data in the world is still on mainframe systems (www.metagroup.com).

Legacy Systems (Legacy Data or Legacy Applications) refer to older or mature applications which were developed from the late 1950s to the early 1990s. Such systems are mainly mainframe or proprietary systems (e.g., IBM MVS, Digital OpenVMS, HP MPE) or distributed systems in which the mainframe plays the major processing role and the terminals or PCs are used for application running and data uploading-downloading. Most companies still rely on these platforms because they are more secure, available, reliable, and scalable than UNIX and Windows NT systems.

Web-Enabled Applications

Web technology has been used in contemporary information systems (IS) mainly in three different ways:

- For establishing Internet presence and building intranet and extranet infrastructures
- For improving access to corporate data, in both legacy and client/server applications
- For rapid application development

The role of Web technology in improving data access can be considered from the following perspectives:

- End-user's perspective -- with the main objective of "how to provide end-users with easy and efficient access to corporate data"
- Application developer's perspective -- "how to improve applications' efficiency" by using Web technology in:

-- Creating middleware and gateway applications that provide more efficient access to the existing applications (legacy data and client/server applications)

-- Developing Web-enabled client/server applications with the primary aim of providing a "thinner" client side (based only on Web browser)

-- Designing dynamic Web pages for corporate intranet and extranet infrastructures (Dynamic HTML, XML, Java, ASP)
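A dynamic page, in contrast to a static HTML file, is generated per request from live data, which is the common thread behind ASP, DHTML, and similar techniques listed above. A minimal server-side rendering sketch (the function name and report data are invented for illustration):

```python
def render_report(title, rows):
    """Build an HTML page from live data at request time, in the spirit
    of ASP-style server-side page generation: nothing is stored as a
    static HTML file."""
    body = "".join(f"<tr><td>{name}</td><td>{value}</td></tr>"
                   for name, value in rows)
    return (f"<html><head><title>{title}</title></head>"
            f"<body><table>{body}</table></body></html>")

# Each request could pull fresh figures from a database; here the
# data is inlined for the demo.
page = render_report("Sales", [("East", 120), ("West", 95)])
```

Because the page is assembled on the server, the user's browser needs no plug-in or client software to see up-to-date figures.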

With the increased deployment of client/server applications, system administration and version control have become significant problems for IT professionals. One of the main advantages of legacy mainframe systems was that the application resided on a single system. There was no need for any software on the client side. This made the process of software upgrades much easier to manage. Web applications bring in a very similar advantage. In such a platform, the only software needed on the user's PC is the Web browser. A Web-based application development paradigm can be considered in some sense as an environment very similar to a mainframe paradigm. The Web server plays the role of a legacy system whereas the Web browser replaces a text-based terminal.
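The thin-client parallel with the mainframe can be demonstrated end to end with Python's standard library: all application logic sits in the Web server, and the "client" below is just an HTTP fetch standing in for a browser. The page content is, of course, invented:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    """All application logic lives on the server; any browser can be
    the client, with no software to install or upgrade."""
    def do_GET(self):
        body = b"<html><body>Order status: shipped</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AppHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The 'browser': a plain HTTP request to the server.
url = f"http://127.0.0.1:{server.server_port}/"
html = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

Upgrading the application means changing only `AppHandler` on the server, exactly as a mainframe upgrade touched only the host -- which is the version-control advantage the paragraph describes.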

In addition to the Webification of legacy systems, several other applications are also "Webified" so that no client software is needed. Some of them are:

- Web-to-Reporting programs that go a step further than standard Web-to-host connectivity tools and provide reporting facilities applied to host data. For example, Report.Web (www.nsainc.com) is a Web-to-Reporting program -- actually an intranet report distribution tool.
- The Web-to-Mail or Mail-to-Web (www.mail2Web.com) program is a service that lets users use their POP3 e-mail accounts through an easy Web interface.
- The Web-to-Fax (www-usa.tpc.int) program, which is very similar to Web-to-Mail, provides an opportunity to send and receive faxes from Web browsers with no additional software.
- Web-to-GSM software allows users to send GSM messages through a Web-browser interface (www.mtnsms.com). This capability is implemented today by most GSM operators.
- The newest versions of Document Management applications support Web access as well. Some examples include Keyfile (www.keyfile.com) and FileNET Panagon (www.filenet.com).
- Decision support by using desktop DSS tools is also available via a Web browser. For example, the Web version of DecisionPro (www.vanguardsw.com) allows decision makers to run DecisionPro models remotely. In addition to Web access to desktop DSS tools, this type of GUI interface is supported by enterprisewide decision support systems as well. Some products are Business Objects' WebIntelligence (www.businessobjects.com), MicroStrategy's DSS Web (www.strategy.com), and Cognos' Impromptu Web Reports (www.cognos.com).
- Business intelligence portal is the next trend in enterprisewide decision support. Examples include:

-- Information Advantage's MyEureka business intelligence suite, which was the industry's first business intelligence portal (now Sterling Software, www.sterling.com).

-- WebIntelligence from Business Objects (www.businessobjects.com), which includes a business-intelligence portal that gives users a single, Web entry point for both WebIntelligence and BusinessObjects, the company's client-server reporting and OLAP system.

-- Brio.Portal from Brio Technology (www.brio.com), which is another example of an integrated business intelligence portal software capable of retrieving, analyzing, and reporting information over the Internet.

Enterprise Resource Planning (ERP) software vendors are also re-architecting their applications for use over the Internet through a Web-browser interface. SAP, for example, has introduced mySAP.com, its Web portal product launched in May 1999.

Middleware and Web-Enabled Middleware

Efficient data access is important not only on the end-user's side, but from the application developer's perspective as well. Developing new client/server applications that exchange data with existing legacy systems requires a class of software called middleware, which overcomes differences in data formats. Various data access middleware products exist, each specific to a single platform, e.g., RMS files on OpenVMS machines, IBM mainframes, or different UNIX machines. Three such tools are described below as examples.

ISG Navigator (www.isg.co.il) is a data access middleware tool that provides efficient data exchange between the Windows platform and several host platforms, such as OpenVMS on Digital Alpha and VAX, Digital UNIX, HP-UX, Sun Solaris, and IBM AIX. ISG Navigator enables access to nonrelational data in almost the same way that relational data is accessed. More important, application developers can build new Internet-based applications that use data from legacy systems through data integration standards such as OLE DB, ADO, COM, DCOM, and CORBA. The newest version of this product, called Attunity Connect, can be used to integrate E-business and B2B applications with legacy systems. Its interface component is based on XML and JDBC technologies.
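The core idea of such middleware, presenting nonrelational host data through a relational-style interface, can be sketched as follows. This is a conceptual illustration only, not ISG Navigator's actual API; the class, records, and field names are invented.

```python
# Conceptual sketch (NOT ISG Navigator's real API): data access middleware
# of this kind exposes nonrelational host data through a relational-style
# interface, so a client can query legacy records as if they were rows.

class LegacyFileAdapter:
    """Wraps a nonrelational record source behind a SQL-like select()."""

    def __init__(self, records):
        # 'records' stands in for data read from, e.g., RMS indexed files.
        self.records = records

    def select(self, columns, where=lambda row: True):
        # Project the requested columns from rows matching the predicate,
        # much as OLE DB/ADO presents heterogeneous data as uniform rowsets.
        return [{c: row[c] for c in columns}
                for row in self.records if where(row)]

# Hypothetical legacy records, as a middleware layer might surface them.
orders = LegacyFileAdapter([
    {"order_id": 1, "customer": "ACME", "total": 1200.0},
    {"order_id": 2, "customer": "Globex", "total": 300.0},
])

rows = orders.select(["order_id", "total"], where=lambda r: r["total"] > 500)
```

The point of the design is that the client issues the same style of query regardless of whether the underlying store is relational or an indexed file system.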

Another solution, ACUCOBOL, a product of Acucorp (www.acucorp.com), provides data integration capabilities between COBOL legacy applications on one side and RDBMSs and Windows applications on the other, including:

-- Interfaces from COBOL to the most commonly used relational databases (Oracle, Informix, Sybase, SQL Server, and ODBC-compliant databases)

-- Access to COBOL data from standard Office applications such as Excel

-- Access to remote data in indexed file systems or relational databases

-- Access to remote files across a client/server environment

-- Access to applications or data over the Internet

In addition to these standard data integration capabilities, ACUCOBOL allows developers to Web-enable existing COBOL legacy applications.

ClientSoft's ClientBuilder Enterprise (www.clientsoft.com) and similar solutions provide important capabilities for developing Web-enabled client/server applications and integrating them with legacy data:

-- Data integration between desktop Windows applications and legacy data on IBM S/390 and AS/400 machines

-- Development of GUI interfaces to existing host-based legacy applications

-- ODBC support for relational databases

-- Access to applications residing on IBM systems through wireless communications technologies

-- Access to IBM host machines through Web technologies within E-commerce systems

In the last couple of years, in addition to middleware tools, application integration has been boosted significantly by another technology: distributed computing. Distributed computing is a framework within the object-oriented software engineering paradigm in which different parts of an application may run on separate computers on a LAN, a WAN, or even the Internet. Two standards are used to provide such application environments:

-- CORBA (Common Object Request Broker Architecture) is an architecture and specification for creating, distributing, and managing distributed program objects in a network. CORBA allows programs at different locations, developed by different vendors, to communicate in a network through an "interface broker." CORBA was developed by a consortium of vendors through the Object Management Group (www.omg.org).

-- DCOM (Distributed Component Object Model) is a set of Microsoft concepts and program interfaces in which client program objects can request services from server program objects on other computers in a network. DCOM is based on the Component Object Model (COM), which provides a set of interfaces allowing clients and servers to communicate within the same computer.
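The "interface broker" idea common to both standards can be sketched in a few lines. This is a minimal, in-process illustration of the pattern, not CORBA's or DCOM's real API; a real ORB would also handle network transport, marshalling, and interface definitions, and all names below are invented.

```python
# Sketch of the "interface broker" pattern behind CORBA and DCOM (NOT
# either standard's real API): a client locates a server object by
# interface name through a broker and invokes methods on it without
# knowing where or how the object is implemented.

class Broker:
    def __init__(self):
        self._registry = {}

    def register(self, interface_name, obj):
        # A real ORB would also publish the object for remote access.
        self._registry[interface_name] = obj

    def resolve(self, interface_name):
        # Return a reference the client can call like a local object.
        return self._registry[interface_name]

class CurrencyServer:
    def to_eur(self, usd):
        # Hypothetical fixed exchange rate, for illustration only.
        return round(usd * 0.9, 2)

broker = Broker()
broker.register("Finance::CurrencyConverter", CurrencyServer())

# The client resolves the interface by name and calls it transparently.
converter = broker.resolve("Finance::CurrencyConverter")
amount = converter.to_eur(100.0)  # → 90.0
```

The gateway approach mentioned below fits naturally into this picture: a bridge object registered with one broker forwards calls to an object managed by the other.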

The creators of CORBA and Microsoft have agreed on a gateway approach so that a client object developed with DCOM can communicate with a CORBA server, and vice versa.

While middleware products serve as a data gateway between legacy systems and Windows-based client/server and desktop applications, Web-based application development products support building Web-enabled client/server applications (e.g., Microsoft's Visual Studio and Inprise-Borland's Delphi, C++Builder, and JBuilder). Many Web-to-host products offer APIs to host systems that developers can use to build custom intranet applications, especially for reporting. This model usually involves taking data from host systems, converting it into HTML, and placing it on a Windows NT IIS machine that acts as an intranet server.
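The host-to-HTML reporting model just described can be sketched as below. The data, report name, and server path are all hypothetical; the point is only the shape of the pipeline (host records in, static HTML out).

```python
# Minimal sketch of the Web-to-host reporting model: pull records from a
# host system, render them as an HTML table, and write the result where
# an intranet Web server (e.g., IIS) would serve it. All data and paths
# here are hypothetical.

def render_report(title, rows):
    # Build a simple HTML report from a list of (name, value) records.
    body = "".join(f"<tr><td>{n}</td><td>{v}</td></tr>" for n, v in rows)
    return (f"<html><head><title>{title}</title></head>"
            f"<body><h1>{title}</h1><table>{body}</table></body></html>")

host_rows = [("Branch A", 1520), ("Branch B", 980)]  # stand-in host data
html = render_report("Daily Sales", host_rows)

# In practice the string would be written into the server's document
# root, e.g. open(r"C:\inetpub\wwwroot\report.html", "w").write(html),
# so that any browser on the intranet can fetch the report.
```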

Another important area of business applications in which integration is needed is the implementation of ERP systems, which usually requires considerable customization effort. Many companies decide to keep some legacy code and seek a way to connect ERP with legacy systems. A new generation of enterprise integration application (EIA) or enterprise application integration (EAI) products provides that kind of integration (e.g., Prospero, www.oberon.com; CrossWorlds, www.crossworlds.com; Level8 EAI products, www.level8.com). ERP vendors such as SAP, Oracle, Baan, and others are developing their own compatible front-office solutions in order to achieve a higher level of integration. There is also a need for efficient and effective integration of ERP systems with messaging, document management, and business intelligence systems; vendors of these applications therefore provide ERP gateways to integrate their programs with ERP systems. In addition, after successfully implementing ERP software, companies add business-intelligence tools to their ERP systems to enhance access to data and improve organizational decision making, and ERP vendors offer such business-intelligence products, the core of which is always a data warehouse (SAP Business Information Warehouse, Oracle Business Warehouse, PeopleSoft Enterprise Warehouse, etc.).

ASP Model of Application Platform

Internet technology has led to the emergence of a new application platform called the ASP (Application Service Provider). In the beginning, there was the ISP (Internet Service Provider) concept, a model used by companies to provide Internet access and standard Web hosting. This approach has now been extended to the whole application platform, not just standard Internet home pages. Companies preferring to focus only on their core business may decide to outsource the running of business applications to another company that specializes in it -- an ASP.

The so-called "rent, don't buy" model is a new approach to deploying corporate-wide applications, not only business-critical applications but desktop applications as well. ASPs rent out application platforms, mostly for applications that are very complex and hard to implement (ERP, data warehousing, electronic commerce, and customer relationship management). ASPs actually emerged from an effort to make the ERP suite an application platform for small- and mid-size companies by reducing the costs of implementation, software upgrades, and maintenance. The traditional approach to implementing ERP packages was based on a per-seat license that could cost thousands of dollars (between $2000 and $4000), but the real expense was in implementing these programs (consulting, process rework, customization, integration, and testing): ERP implementation costs typically fall in the range of $3 to $10 per $1 spent on the software itself. Unlike ISPs, ASPs install, implement, and manage complex applications at the ASP's site and bill for these services, usually on a monthly basis. ASPs provide application hosting services mostly by partnering with software vendors and networking companies. Rental fees may include software customization, integration with other back-end systems, and ongoing maintenance of the applications at fault-tolerant data centers. Application and data servers are usually located at the ASP, and applications and data are accessed remotely. The main prerequisite for this model of application platform is a high-speed and reliable communications backbone; the main concerns are data being held by another company, the viability of ASPs, security, etc.
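The license and implementation figures quoted above can be illustrated with a back-of-the-envelope calculation. The seat count and per-seat price below are hypothetical values chosen within the ranges given in the text.

```python
# Back-of-the-envelope illustration of the ERP cost figures quoted above:
# a license cost of $2000-$4000 per seat, plus implementation costs of
# $3-$10 per $1 spent on software. The inputs below are hypothetical.

def erp_cost_range(seats, license_per_seat, impl_low=3, impl_high=10):
    license_total = seats * license_per_seat
    # Total project cost = license + implementation (3x-10x the license).
    return license_total * (1 + impl_low), license_total * (1 + impl_high)

low, high = erp_cost_range(seats=50, license_per_seat=3000)
# license_total = $150,000; total project cost $600,000 to $1,650,000
```

Numbers like these are what make the monthly-fee ASP model attractive to small- and mid-size companies.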

HARDWARE/OS PLATFORM

The hardware/OS platform provides stability, reliability, and scalability. Today's computer platforms are classified mainly by the type of computer system configuration and the operating system running on it. Apart from desktop computing, mission-critical applications are installed either on old "big-iron" mainframe computers, on proprietary minicomputers, or on contemporary enterprise server systems. The following platforms are most commonly used today:

-- Mainframes running IBM MVS/OS390

-- UNIX servers (HP HP-UX, Sun Solaris, IBM AIX, Compaq Tru64, SGI IRIX, etc.)

-- Proprietary systems running single-vendor operating systems (VAX OpenVMS, Alpha OpenVMS, AS/400, HP MPE, etc.)

-- Intel servers running Windows NT Server, Novell NetWare, OS/2 Warp Server, or Linux

Over the last decade there have been many discussions about which platform is most suitable for running business-critical applications. Although many requirements should be taken into account (hardware platform support, application support, application development tool support, network support, systems management, etc.), a major issue is almost always the so-called RAS model (Reliability, Availability, and Scalability) of a specific platform. From that viewpoint, it is well known that mainframes have been delivering exceptional system availability and reliability for a long time: end-users can continue to use data and applications, and the system continues to run, even while system or application software is being upgraded or backed up. In addition, mainframes can support parallel databases, a capability that is very important for fast and efficient access to large amounts of data (data warehousing applications, OLAP systems, integrated decision support applications, ERP systems).

With new approaches in multiprocessing technologies such as SMP (Symmetric Multiprocessing), clustering, and NUMA (Nonuniform Memory Access), UNIX vendors and vendors of some other proprietary systems can also provide mainframe-like uptime. Examples of such platforms include the well-known OpenVMS clustering system, in use for 20 years; Silicon Graphics' servers with NUMA ("cluster in a box") technology; Sun's new High Performance Computing ClusterTools, which allows connecting up to 16 Sun UE10000 servers, each working with up to 64 UltraSPARC II RISC processors; and HP's Enterprise Parallel Server, which supports several dozen 64-bit V-Class servers, each with up to 32 processors. Also, with a new facility called the dynamically loadable kernel, the UNIX operating system can now be upgraded without being shut down. In short, UNIX advantages include performance, clustering, robust systems management tools, widespread application support, the ability to handle very large databases, and scalability.

On the other hand, Windows NT Server has many advantages at the workgroup and small-business level. NT's key advantages over UNIX are ease of use and administration, price, support, and integration with Windows application development environments and APIs. Most major application vendors have written NT versions of their packages; for example, some 45 percent of implementations of SAP's R/3 ERP package these days are on NT, according to SAP. Datamation/Cowen survey findings indicate that NT leads in the Web server market: 50 percent of survey respondents use NT as a Web server today, and that number will climb to 62 percent in the next one to two years. NT has also made a big impact on file-and-print: some 43 percent of those surveyed use NT as a file-and-print server today, and 56 percent will use it in the next year or two (www.datamation.com).

In the context of the hardware/OS platform, some results from a survey by the Gallup Organization are noteworthy. According to Gallup, the availability of mainframes can be as high as 99.999 percent, which corresponds to a downtime of less than 5 minutes per year. The average downtime for a PC server is measured at 1.6 hours per week, while UNIX systems with RAID devices and clustering technologies can achieve close to 99.99 percent availability, which in terms of system downtime is about an hour per year (www.datamation.com). It should be noted that the PC-server results are based on Windows NT Server 4.0; better results are expected with the Windows 2000 versions (Advanced Server and Datacenter Server).
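The availability percentages above convert to yearly downtime with simple arithmetic, which is worth making explicit since "five nines" and "four nines" differ by an order of magnitude:

```python
# Converting availability percentages into expected downtime per year:
# "five nines" (99.999%) allows about 5.3 minutes of downtime per year,
# while 99.99% allows roughly 53 minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability):
    # Downtime is simply the unavailable fraction of the year.
    return (1 - availability) * MINUTES_PER_YEAR

mainframe = downtime_minutes_per_year(0.99999)     # ≈ 5.3 minutes/year
unix_cluster = downtime_minutes_per_year(0.9999)   # ≈ 52.6 minutes/year
```

By the same formula, a PC server down 1.6 hours per week (about 83 hours per year) corresponds to roughly 99 percent availability, which puts the survey's three tiers in perspective.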

For enterprisewide data access, the hardware/OS platform is important in the following respects:

-- Application servers, data servers, e-mail servers, and Web servers should run on reliable machines with a high availability ratio.

-- Servers must be sufficiently scalable.

-- Servers must support open hardware/software communication protocols in order to exchange data.

-- Servers must support remote-access capabilities.

Web technology has also influenced today's server platforms and server operating systems. Most IT vendors offer so-called E-business servers by providing preinstalled Web technologies such as Web servers, Web-related development tools and protocols, enhanced networking capabilities, etc.

CONCLUSIONS

This article has presented a framework for implementing Internet technologies to improve enterprisewide data access. User interfaces, communications networking, application platforms, and hardware/operating-system platforms are identified as the areas of an information system in which Internet technologies can provide more efficient and more effective data access. The ways Internet technologies affect these platforms are explained, together with examples of software products available on the market.

DIAGRAM: EXHIBIT 1 IT-Related Contingency Factors for Data Access

DIAGRAM: EXHIBIT 2 Traditional Data Access

DIAGRAM: EXHIBIT 3 Multiplatform Data Access

DIAGRAM: EXHIBIT 4 Contents of the Application Platform

By Nijaz Bajgoric

NIJAZ BAJGORIC is a member of the faculty at Bogazici University, Istanbul, Turkey. He may be reached at nijaz@boun.edu.tr.