
Adaptive Middleware For Autonomic Systems


Steve Neely, Simon Dobson and Paddy Nixon

Systems Research Group, School of Computer Science and Informatics,

UCD Dublin, Ireland

steve.neely@ucd.ie

ABSTRACT

The increasingly dynamic nature of resource discovery and binding in modern large-scale distributed and mobile systems poses significant challenges for existing middleware platforms. Future platforms must provide strong support for adaptive behaviour in order both to maintain and optimise services in the face of changing context. We use a survey of existing middleware systems to develop some core themes that characterise and constrain the ability of these approaches to support the development of adaptive and autonomic systems, and draw some possible trends for developing future platforms more appropriate to these domains.


I. INTRODUCTION

Traditional middleware platforms arose as a response to the complexity of constructing multi-vendor enterprise systems. A typical application is to integrate a database, back-office, front-office and point-of-sale systems using a commonly-agreed object model or messaging framework. Most such frameworks focus on providing the tools and techniques to allow developers to perform integration across diverse products and platforms while maintaining the integrity of the overall solution in terms of robustness and reliability.

Recent trends in mobile systems, component-based frameworks, pervasive and autonomic computing have significantly increased the level of dynamism found in systems architectures. Most mobile systems will by definition “encounter” new resources over their lifetime; supporting a component view allows different services to fulfil a given system rôle at different times; while pervasive and autonomic computing seek to maintain the provision of services despite changes in their users' environments or system characteristics, and indeed often aim to optimise some facet of service delivery against some aspect(s) of this evolving context.

The result of these trends is to intensify the need for middleware integration whilst introducing a range of new challenges not encountered in traditional enterprise systems. Specifically, middleware platforms need to be able to adapt more or less autonomously to changing contexts and situations of use, rather than having their behaviours fixed by human-mediated decisions at design- or configuration-time. Without such adaptation, applications and systems built on the middleware platforms will not be able to exhibit the levels of service demanded of them.

This raises three important questions:

1. To what extent are current middleware platforms suitable for building this next generation of adaptive systems?

2. What deficiencies exist in their support for adaptation?

3. What approaches are emerging to deal with these deficiencies?

In this paper we seek to address these questions by surveying the most important current classes of middleware together with some illustrative systems. We examine each class of middleware with respect to its support for adaptation to changing circumstances, and develop some core themes that both characterise and constrain the ability of middleware to deliver dynamically adaptive behaviour. Finally, we bring together some emerging trends that will shape the development of future middleware systems.

II. THE DEFINING CHARACTERISTICS OF MIDDLEWARE

It is common to think of middleware as synonymous with support for communications between distributed processes across devices. Despite being the main use of the term today, this definition is slightly misleading: the term middleware was used in the computing industry before networks of machines were realised. Perhaps the earliest use of the term in the literature can be found in the report of the 1968 NATO Software Engineering Conference [1]: during a discussion on the nature of software engineering D’Agapeyeff described a layer between application programs and service routines, performing operations such as managing a file system, which he identified as middleware. D’Agapeyeff’s argument was that generic file system functionality was either inefficient or inappropriate for all instances of applications.

In this context middleware is defined as a layer between application software and system software. It abstracts the complexities of the system, allowing the application developer to focus all efforts on the task to be solved without the distraction of orthogonal concerns at the system level. For example, the builder of hospital process software does not want to think about initial message analysis and real-time schedulers but patient records, health monitoring equipment, staff rotation schedules and so forth.

With the dawning of the networked era, middleware tools have expanded to address the added complexity encountered in networked applications. In the network-aware setting, middleware can be described as software layered between applications, the operating system and the network communications layers, the purpose of which is to facilitate and coordinate some aspect of cooperative processing. The middleware provides a virtual platform spanning the nodes in the system, offering a set of communication and co-ordination services that behave (to some extent) as a single system (Figure 1). Different approaches make different concerns transparent to programmers [2], and these variations cause major differences in terms of scalability, expressiveness and adaptability.

We now think of middleware as the “plumbing” within information systems. It routes data and information transparently between different back-end data sources and end-user applications. The common theme throughout all the definitions of middleware is that it is software that mediates between varying hardware and software platforms on a network, affording group functionality. The core function of middleware is to hide complexity by automating much of the repetitive work common in enterprise systems. In complex distributed environments with arrays of hardware platforms, where intermittent failure is expected and security is paramount, the cost of manual integration can spiral beyond reasonable limits.

Figure 1 The generic middleware architecture


Middleware must typically provide support for naming, service location, service discovery, transport and binding. Services within the system must be resolvable through some symbolic name, allowing the other entities in the system to get a handle to those services to request operations. Service location and discovery mechanisms provide decoupling between entities in the system, dispensing with inflexible hard-coded references to services. A transport mechanism provides a well-defined protocol and data format for exchanging information between network nodes regardless of the operating system and language details of each node. A binding scheme must exist to allow a linking from locally-executing code to some external code. These bindings may take the form of object narrowing in systems such as CORBA, being supplied with an event queue in systems such as IBM’s MQ-Series, or some other mechanism.
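The naming and late-rebinding behaviour described above can be sketched as a toy registry. This is a minimal illustration in Python, not any particular platform's API; the class and method names are our own:

```python
class ServiceRegistry:
    """Toy name service: maps symbolic names to service handles."""
    def __init__(self):
        self._bindings = {}

    def bind(self, name, service):
        # Rebinding a name transparently redirects all future lookups,
        # so clients never hold hard-coded references to providers.
        self._bindings[name] = service

    def resolve(self, name):
        try:
            return self._bindings[name]
        except KeyError:
            raise LookupError(f"no service bound to {name!r}")

# A client holds only the symbolic name; the provider behind it can change.
registry = ServiceRegistry()
registry.bind("printer", lambda doc: f"printed {doc} on old printer")
old = registry.resolve("printer")("report")
registry.bind("printer", lambda doc: f"printed {doc} on new printer")
new = registry.resolve("printer")("report")
```

The point of the sketch is that the client's code is identical before and after the rebinding; only the registry's state changes.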

As well as these basic communication and location services, middleware systems will typically support replication, stable storage, concurrency control and failure handling for machine and communication faults. They may transparently deal with these issues, or may provide the programmer with an abstraction above the low-level details. Some systems, for example, provide transactional schemes that will automatically either schedule commit procedures to execute once the system has reached a stable state or run rollback procedures if a failure occurs at any point in the (possibly nested) transaction structure.

Different middleware platforms place different emphasis on security and authentication, ranging from assumptions of trustworthiness (for systems targeting a single LAN), through static rule- and policy-based systems, up to open trust management infrastructures intended for the open mobile internet. Placing such features into the middleware provides common guarantees across a system, but perhaps at the cost of unacceptable performance overheads.

The communication mechanisms of middleware can vary from request/reply to asynchronous messaging; some implementations provide both. The systems can be language-specific or language-independent: Microsoft’s .NET framework is mostly language-independent yet platform-specific, while CORBA and most peer-to-peer systems are both language- and platform-neutral.

III. EXISTING MIDDLEWARE STYLES AND ARCHITECTURES

III.1. Object broker systems

Object-oriented programming has become the dominant approach to constructing distributed systems, as with many other fields. Unlike traditional (single-address-space) object-oriented programs, distributed object systems must answer fundamental questions about the impact that distribution has on semantics: if a call is made from a node “a” to a node “b”, including as an argument another object on node “a”, is the argument copied to the remote node, or is a remote reference generated to refer back to the original object? How are different sorts of failures managed, and how are they exposed to programmers? The answers to these questions subtly change the nature of the object system.

A typical object middleware is structured logically as a “container” sitting on each node, which hosts a number of objects. The container provides core management and communications services, typically including object lifecycle management, concurrency control and one or more communications protocols for interaction with objects on other containers. Calls to objects’ methods are therefore replaced, at a lower level, by network exchanges. In many cases it is possible for advanced applications to intercept this network traffic and manipulate it in some way, introducing a second mode of interaction into the middleware (call or interception) which can provide great flexibility at the cost of greater potential complexity.

III.1.1. ORBs

Object Request Brokers (ORBs) are perhaps the simplest container architecture, limited to providing a platform on which to build loosely coupled distributed systems with few in-built services. ORBs were envisaged at a time when programmers had little support for building server applications. The early 1990s saw the introduction of the Common Object Request Broker Architecture (CORBA) [3] standard. CORBA was designed as a general-purpose integration technology to help programmers produce client-server applications. The commercial success of CORBA made it the de facto benchmark for further ORB developments. Most ORBs are implemented as libraries rather than as dedicated container processes.

The initial incarnations of ORBs can be described as Distributed Object Architectures (DOA), which represent an extension of object-oriented programming into middleware. Despite having good internal integration, the observed problem of interoperation between DOA systems led to the advent of Service Oriented Architectures (SOA), based on open standards such as XML and web services. DOA and SOA are subtly different in terms of the way interfaces are designed, as discussed in [4]; however, in the context of this discussion we can view them as having the same functionality.

Figure 2 CORBA architecture


The CORBA specifications were drawn up by the Object Management Group (OMG) with the goal of improving interoperability between distributed applications on various platforms. (For the remainder of this section we will use CORBA as a prototypical example of an ORB. This is to provide a basis for the discussion, and is not intended to suggest that CORBA is the only, or even the best, example.) At the core of the middleware platform is the object request broker (ORB), representing that part of the system that handles distribution and heterogeneity issues, including all communications between objects and clients. The ORB is typically implemented as libraries that the application programmer builds their code against, facilitating the composition of CORBA services.

To enable the calling of services on remote objects their interfaces must be specified. CORBA objects and services are specified using an Interface Definition Language, CORBA IDL, based on a C-style syntax. IDL documents provide the equivalent of a distributed API for accessing published services. To publish a CORBA service a developer publishes the IDL file in some public repository. A client developer passes the IDL file through an interface compiler that outputs “stub” (for client-side) and “skeleton” (for server-side) code in any desired CORBA-compliant language. Before data moves into the ORB the stubs package or marshal it into a network-ready format. As data arrives at the other side the skeletons unmarshal it for use by the executing process (Figure 2).
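The stub/skeleton division of labour can be sketched in a few lines. In this toy Python illustration, JSON stands in for CORBA's actual wire encoding (CDR over IIOP), an in-process function stands in for the network, and the `Calculator` servant is hypothetical:

```python
import json

def stub_call(transport, operation, *args):
    """Client-side 'stub': marshal the request into a network-ready format."""
    request = json.dumps({"op": operation, "args": list(args)}).encode()
    reply = transport(request)                     # the network exchange
    return json.loads(reply.decode())["result"]

def make_skeleton(servant):
    """Server-side 'skeleton': unmarshal, dispatch to the servant, marshal the reply."""
    def handle(request):
        msg = json.loads(request.decode())
        result = getattr(servant, msg["op"])(*msg["args"])
        return json.dumps({"result": result}).encode()
    return handle

class Calculator:                                  # the remote servant
    def add(self, a, b):
        return a + b

transport = make_skeleton(Calculator())            # in-process stand-in for the wire
total = stub_call(transport, "add", 2, 3)          # reads like a local call
```

To the client, `stub_call` looks like an ordinary invocation; all the packaging and unpackaging happens in generated code on either side of the wire.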

Static definitions via IDL are commonplace in CORBA. However, dynamic CORBA has recently received increased interest, due in part to the OMG standardising two scripting languages, CORBAscript [5] and Python [6]. An example of dynamism is the invocation of objects at runtime with the Dynamic Invocation Interface (DII). The DII is a generic invoke operation which takes an object reference as input and builds request-handling instances on-the-fly. It allows applications to invoke operations on target objects without having compile-time knowledge of their interfaces. This is less efficient than the static stub version, as much of the information needed to execute the call must be provided at runtime. For example, the names and types of all arguments and return values must be supplied and dynamically checked.
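The runtime cost the DII pays can be made concrete with a small analogue: the operation name and argument types are supplied with the call and checked dynamically, where a static stub would have checked them at compile time. This is an illustrative Python sketch, not the CORBA DII API; `StockService` and its `price` operation are hypothetical:

```python
def dynamic_invoke(target, operation, typed_args, result_type):
    """Assemble and check a request on the fly, then invoke it."""
    for value, expected in typed_args:
        if not isinstance(value, expected):        # argument check, at runtime
            raise TypeError(f"{value!r} is not a {expected.__name__}")
    result = getattr(target, operation)(*(v for v, _ in typed_args))
    if not isinstance(result, result_type):        # result check, at runtime
        raise TypeError("unexpected result type")
    return result

class StockService:
    def price(self, symbol):
        return 42.0

# The caller names the operation and supplies full type information itself.
quote = dynamic_invoke(StockService(), "price", [("ACME", str)], float)
```

Every call repeats work that generated stubs do once, which is exactly why the dynamic route is slower than the static one.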

CORBA applications have three invocation models for method invocation: synchronous, one-way and deferred synchronous. Synchronous requests have at-most-once semantics, in which the caller blocks until a response is returned or an exception is raised. This means that an application must either wait inactive for the duration of the remote call, or must execute remote operations in their own local threads and then integrate the results. One-way requests employ a best-effort delivery scheme in which the caller continues immediately without waiting for any response from the server. This is equivalent to a simple point-to-point event service. Deferred synchronous requests rely on the dynamic invocation interface and also have at-most-once failure semantics. The caller continues immediately and can later block until a response is delivered.
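The three invocation models can be contrasted in a few lines, using a Python thread pool as a stand-in for the ORB (an illustrative analogy only; `remote_op` is a hypothetical remote operation):

```python
from concurrent.futures import ThreadPoolExecutor

def remote_op(x):          # stand-in for a remote method
    return x * 2

executor = ThreadPoolExecutor(max_workers=2)

# Synchronous: the caller blocks until the response arrives.
sync_result = remote_op(21)

# One-way: fire and forget, best effort; no reply is ever awaited.
executor.submit(remote_op, 21)

# Deferred synchronous: the caller continues immediately...
future = executor.submit(remote_op, 21)
# ...does other useful work here...
deferred_result = future.result()   # ...and blocks only when the reply is needed
executor.shutdown()
```

The deferred model gives the overlap of computation and communication that the synchronous model forfeits, without abandoning the reply as the one-way model does.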

In addition it is possible in most ORBs for applications to define interceptors and attach them to invocations of methods on particular objects. The CORBA interceptor definition is expressed in terms of DII, and so remains fairly well-structured.

III.1.2. CORBA services

The CORBA standard defines a set of services designed to support the integration and interoperation of distributed objects. The CORBA services are defined as a layer residing on top of the ORB, as opposed to being part of the core container specification. They are defined by IDL interfaces providing access to their operations. Their significance is that they provide a standard extension to the middleware platform, raising the level of abstraction at which developers can work.

The event service provides facilities for asynchronous communication through events. It is complemented by the notification service, which has advanced facilities for event-based asynchronous communication. The notification service adds the ability to transmit events in the form of data structures, event subscription, discovery, quality of service and an optional event type repository. The two services encourage the decoupling of objects in the system.

The naming, property and trading services provide mechanisms for identifying, describing and locating objects. Once a named object is registered with a trader service it may move around the infrastructure and still be usable through dynamic rebinding. The security service is the central mechanism for creating secure channels, authorization, access control, policy protection and trust. It defines a very flexible – but perhaps overly complicated – security model for the overall framework.

III.1.3. Building adaptive systems with objects

The CORBA services provide a good initial example of middleware support for adaptive systems development. A CORBA application must acquire references to remote objects with which to interact. In a simple system these would be hard-wired into each application, or stored in a file – neither of which allows easy adaptation to new providers. Using the services, one might (for example) use the naming service to look up the system’s trading service, and then use this to acquire other objects. This provides a scheme for managing object acquisition: once an application has bound to the naming service (which may be hard-coded or acquired via broadcast), all other bindings are acquired through a service regime. This is appropriate for systems that change relatively slowly, and has been used to great effect in many large enterprise deployments.
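The bootstrap chain described above can be sketched as follows: only the naming service is well-known, the trader is found by name, and all further services are found by description. This is an illustrative Python sketch of the pattern, not the CORBA APIs; the property names are invented:

```python
class NamingService:
    """Well-known starting point: resolves names to object references."""
    def __init__(self):
        self._names = {}
    def bind(self, name, obj):
        self._names[name] = obj
    def resolve(self, name):
        return self._names[name]

class TradingService:
    """Matches service *descriptions* rather than fixed names."""
    def __init__(self):
        self._offers = []
    def export(self, properties, obj):
        self._offers.append((properties, obj))
    def query(self, **wanted):
        for props, obj in self._offers:
            if all(props.get(k) == v for k, v in wanted.items()):
                return obj
        raise LookupError("no matching offer")

naming = NamingService()                   # the one reference a client must obtain
trader = TradingService()
naming.bind("TradingService", trader)
trader.export({"type": "printer", "colour": True}, "colour-printer-ref")

# Application bootstrap: naming -> trader -> service.
t = naming.resolve("TradingService")
printer = t.query(type="printer", colour=True)
```

Because every binding after the first goes through a service, a replacement printer exported to the trader is picked up by clients with no code change.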

Part of Sun's Java Enterprise Edition (J2EE), Enterprise Java Beans (EJB), provides an alternative object-based view of distributed systems. EJB uses Java objects to represent both the state of the system (entity beans) and on-going interactions with clients (session beans), which may be stateful or stateless. Entity beans provide an object façade onto rows of an underlying relational database, and as such cannot be sub-classed – there can be exactly one implementation of a given entity bean within a single application. Moreover all remote interactions occur through session beans, with the entity beans remaining hidden. This simplifies transaction management but can lead to awkward implementation patterns: if an application wants to expose entity beans it must generate a corresponding value object to carry a copy of the entity bean's state across the network. EJB's model is therefore a hybrid between a true ORB and a more service- or document-oriented architecture. It provides a far richer container model, including concurrency, lifetime and persistence services as part of the core definition, and the use of Java objects and dynamic loading means that all beans can be managed by a dedicated container process. The definitions of these services are targeted at a particular class of applications exposing relational database table records (one record per object), and – while they simplify such applications considerably – can provide a poor basis for other application styles.
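The session-façade-plus-value-object pattern can be sketched briefly. This is a Python analogue of the EJB pattern, not EJB itself; the `Customer` names are invented:

```python
from dataclasses import dataclass

class CustomerEntity:                      # entity bean: one instance per table row
    def __init__(self, cid, name):
        self.cid, self.name = cid, name

@dataclass(frozen=True)
class CustomerValue:                       # value object: serialisable snapshot
    cid: int
    name: str

class CustomerSession:                     # session bean: the only remote-facing object
    def __init__(self, entities):
        self._entities = entities
    def get_customer(self, cid):
        e = self._entities[cid]
        return CustomerValue(e.cid, e.name)    # ship a copy, never the entity

entities = {1: CustomerEntity(1, "Acme Ltd")}
session = CustomerSession(entities)
snapshot = session.get_customer(1)
entities[1].name = "Acme plc"              # later entity changes...
```

The snapshot the client holds is detached: subsequent changes to the entity are invisible until another call is made, which is precisely the awkwardness the text describes.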

However, EJB's container constraints also aid some forms of adaptation by system administrators, although not autonomic adaptation. EJB programmers focus on providing fragments of functionality (state manipulation and business methods), with the platform taking care of low-level tasks. This makes it straightforward to (for example) provide a replicated server for high-volume-transaction applications, or to use a federated database for distribution: the application code is not given facilities to over-commit to such architectural choices.

Microsoft’s .NET framework provides extensive support for building distributed applications, although without some key enterprise-level services. More traditional framework approaches are typified by ACE [7], which provides abstract classes for basic functions which are then sub-classed to provide application-specific functions, and by Globe [9], which provides caching and replication services within a rich concurrency and consistency model. X-KLAIM [8] and its variants provide a language-level integration of distributed objects into languages such as Java.

An interesting hybrid is provided by Akamai [10]. Akamai is an internet-scale content management network that allows clients to place large content objects (such as movies, audio files or applications) on a global network of managed servers. The servers monitor request traffic and transparently replicate objects on servers close to the request hot-spots, for example having objects "chase the dawn" to be near to countries waking up. By adapting the location of time-consuming content the system provides more scalable and predictable download times while simultaneously moving long-term connections off the clients' networks. In the limit a company could restrict itself to serving only HTML pages locally (requiring little bandwidth) while off-loading images and other content to Akamai. While the details of the adaptation algorithms used are largely proprietary, the service demonstrates the potential for using traffic metadata to adapt underlying storage provision.

III.2. Tuple-space systems

Tuple spaces can be viewed as an implementation of the associative memory paradigm for parallel/distributed computing, based on a repository of tuples that can be accessed concurrently. Linda [12] was the first realisation of this concept. This early work on Linda stimulated the development of modern tuple space systems such as JavaSpaces [13] and Jini [14].
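The essence of the model fits in a few lines. The sketch below follows the names of Linda's classic primitives – `out` adds a tuple, `rd` reads a matching tuple, `in` reads and removes one – but is a simplified, non-blocking Python illustration (real implementations block on `rd`/`in` until a match appears); `None` in a template acts as a wildcard:

```python
class TupleSpace:
    """Toy associative tuple repository in the style of Linda."""
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        for tup in self._tuples:
            if self._match(template, tup):
                return tup
        return None                       # a real space would block here

    def in_(self, template):              # 'in' is a Python keyword
        tup = self.rd(template)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

space = TupleSpace()
space.out(("temperature", "room-1", 21))
reading = space.rd(("temperature", "room-1", None))  # consumer never names the producer
taken = space.in_(("temperature", None, None))
empty = space.rd(("temperature", None, None))
```

Matching is by content, not by address: the consumer describes the tuple it wants, which is what decouples it from whichever producer deposited it.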

Although no explicit work has been reported on tuple spaces for autonomic systems, they have developed as a middleware platform in domains with similar characteristics, such as pervasive systems. In particular, Equip [15] is a software platform which supports the development and deployment of distributed interactive systems. One key element of Equip is its shared data service, which combines ideas from tuple spaces and general event systems. This shared data service can be used as a communication infrastructure for context producers and consumers. It supports both querying of current state and receiving events matching a certain pattern. It also employs replication to improve performance.

Tuple space principles offer a number of advantages for adaptive systems. Firstly they decouple the producer and consumer of data in both time and space: the producer does not know the location of the consumer, and indeed no consumer may exist at the time the tuple is created. Secondly, the tuple space itself may be distributed and persisted to provide robustness for the evolving system, although efficient distribution and searching over wide areas are problematic.

An interesting tuple-space system that directly addresses issues of mobility is Lime [16]. Lime is a multi-space model in which tuple spaces are associated either with locations or with agents. As an agent moves, its local tuple space is dynamically merged with the tuple space at the associated location. Additional primitives allow tuples to be targeted at the tuple spaces associated with particular agents, or to associate code with the appearance in a tuple space of a tuple matching a particular pattern. In many ways this makes Lime a hybrid of tuples with events.

However, although tuple space principles are appealing they have a number of fundamental problems. Tuple spaces do not have transactional semantics, and providing fault tolerance (in particular rollback) is inconsistent with tuple semantics. This is particularly evident in autonomic systems, as the system may evolve and the producers of data may no longer exist when a rollback is initiated [17]. A second concern lies in the scalability and performance of tuple space systems. Some progress has been made here, as evidenced by some recent commercial offerings [18].

In conclusion, tuple space technologies facilitate the necessary decoupling of components in a system and thus enable evolution of the system. However, the observed shortcomings, and in particular fault tolerance, show that further developments are needed to demonstrate their long term viability.

III.3. Message- and event-oriented middleware

Message- and event-oriented systems share a common approach to co-ordination, providing point-to-point or multicast information exchange between largely independent and decoupled clients. They however exhibit significant differences in application and semantics, which tend to result in distinct technologies.

Message-Oriented Middleware (MOM) arose in the mid 1980s as a client-server architecture that decouples communications using messages. A key property of MOM is that it facilitates message passing over heterogeneous platforms, encouraging portability, flexibility and interoperability. MOM systems are typically asynchronous and peer-to-peer. Messages are passed from clients to servers and stored in message queues until ready for processing, with a guarantee that the message will arrive at the destination process. Networks of message servers are a natural mechanism for database integration, where messages have financial value and guarantees of delivery are paramount.

In MOM systems the client and server are only loosely coupled. This makes application integration simpler as there are no tight bindings to handle. The systems provide support for a reliable delivery service by keeping queues in persistent storage. The receiver can easily throttle the communications during periods of heavy load and be confident of finding all requests at a more convenient time. MOM systems also support the processing of messages by intermediate message servers, through actions such as filtering, transformation and logging.
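The queue-based decoupling and throttling described above can be sketched as follows (an illustrative Python toy: a real MOM keeps the queue in stable storage and spans machines, which an in-memory `deque` only gestures at):

```python
from collections import deque

class MessageQueue:
    """Toy MOM queue: accepted messages survive until the receiver drains them."""
    def __init__(self):
        self._queue = deque()          # stands in for persistent storage

    def send(self, message):
        self._queue.append(message)    # accepted: delivery is now the queue's problem

    def receive_all(self):
        """The receiver drains the backlog at a time of its own choosing."""
        drained = list(self._queue)
        self._queue.clear()
        return drained

orders = MessageQueue()
for i in range(3):                     # producer keeps sending during heavy load
    orders.send({"order": i})
backlog = orders.receive_all()         # receiver catches up later, nothing lost
```

The producer never blocks on the receiver and the receiver never blocks on the producer; the queue absorbs the difference in their rates.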

The MOM programming abstraction is rather unwieldy and low-level (at the same level as packets). Dealing with asynchronous communications means that programmers must write multi-threaded code: the program flow of execution must send the message, spawn a thread to deal with the reply, and continue on. In such systems a request/reply style of interaction is more difficult to achieve. This begins to push complexity onto the programmers – exactly what the use of middleware was seeking to avoid. The message formats in MOM systems are typically unknown to the middleware, which means the middleware cannot ensure that messages are coherent or type-correct. The queue abstraction only gives one-to-one communication, which limits scalability. More importantly for open systems, MOM systems tend to lead to strong vendor lock-in due to incompatible interfaces.

Despite these disadvantages, MOM systems are popular in financial services and other domains where reliability is paramount. They provide a natural framework for database integration and other forms of highly decoupled service composition, and are sufficiently well-specified to allow reconfiguration and re-targeting as the underlying communications services evolve. For more dynamic adaptive systems, however, MOM provides too little support for modelling or metadata collection to assist automated adaptation.

The notion of a message is typically fairly heavyweight, consisting of all the information needed to perform a business-level operation. Many systems require significantly lighter interactions, for example exchanging state changes on a regular basis. Such systems with small messages are typically referred to as event systems. An event is a small packet of information, typically carrying a "payload" indicating a change in the state of an object or component. Event-based middleware systems typically provide one or more event channels to which events may be sent, with the middleware controlling the storage and distribution of the event independently of the producer.

Event-based systems have three major attributes (Figure 3). Firstly, in contrast to MOM systems in which a message targets a specific component, an event system decouples event producers from event consumers. An event channel may deliver events to zero, one or many consumers, the population of which may change over time. A single event may be distributed to several points within the system, and may therefore affect the behaviour of the system in a number of different ways in a single operation.

Secondly, event systems can provide a range of semantic models which may be tailored to the needs of applications, selected on a whole-system or per-channel basis. In terms of ordering, channels may provide various guarantees on event delivery, ranging from fully ordered (events are delivered to consumers in the order they entered the queue), through producer-ordered (events from a single producer are delivered in the order they were generated) and causally ordered (an event is always delivered before any other events it gave rise to from other producers [11]), to completely unordered delivery; in terms of reliability, they may provide best-effort, at-most-once or exactly-once delivery. Thirdly, event systems have been shown to be extremely scalable. While naive implementations use a central server to host event queues, more advanced implementations may use multiple servers connected by communication spanning trees to distribute events efficiently. The potential efficiency is increased in systems which provide weaker ordering or delivery guarantees.
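To make one of these guarantees concrete, the sketch below enforces producer ordering: each producer stamps its events with a sequence number, and the channel holds back any event that arrives ahead of an earlier one from the same producer, while freely interleaving different producers. This is an illustrative Python toy, not any system's actual delivery algorithm:

```python
class ProducerOrderedChannel:
    """Deliver each producer's events in generation order, buffering early arrivals."""
    def __init__(self):
        self._next = {}      # producer -> next expected sequence number
        self._held = {}      # producer -> {seq: event} buffered out-of-order events
        self.delivered = []

    def receive(self, producer, seq, event):
        self._held.setdefault(producer, {})[seq] = event
        nxt = self._next.get(producer, 0)
        # Release every event that is now in sequence for this producer.
        while nxt in self._held[producer]:
            self.delivered.append((producer, self._held[producer].pop(nxt)))
            nxt += 1
        self._next[producer] = nxt

ch = ProducerOrderedChannel()
ch.receive("a", 1, "a1")     # arrived before a0: held back
ch.receive("b", 0, "b0")     # different producer: delivered at once
ch.receive("a", 0, "a0")     # releases a0 and then the buffered a1
order = ch.delivered
```

Note that the guarantee is strictly weaker than total ordering: "a" and "b" events may interleave differently on different consumers, which is what makes it cheaper to provide.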

Figure 3 Event delivery architecture


Events are typically regarded as asynchronous communications which are delivered to consumers as they become available – sometimes referred to as the push model. An alternative, pull model forces consumers to poll (blocking or non-blocking) for events on a channel. As well as submission and delivery interfaces, event systems must provide a means for specifying the population of producers and consumers. In multi-channel systems, each channel is typically dedicated (by the developer, not the middleware) to events of a particular "type". Event channels are then "discoverable" objects which may be acquired by name or by iteration. Systems typically allow both producers and consumers to subscribe to "interesting" channels. This general architecture is referred to as "publish-and-subscribe" (or "pub-sub") events [19]: consumers subscribe to those events that interest them, and producers publish events to the appropriate channel.
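A minimal publish-and-subscribe broker, in the push style, can be sketched as follows (an illustrative Python toy; channel names and payloads are invented):

```python
from collections import defaultdict

class EventBroker:
    """Toy pub-sub broker: named channels, zero-to-many consumers per channel."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, event):
        # The producer neither knows nor cares who is listening.
        for callback in self._subscribers[channel]:
            callback(event)

broker = EventBroker()
log_a, log_b = [], []
broker.subscribe("sensor/temp", log_a.append)
broker.subscribe("sensor/temp", log_b.append)   # two consumers, one channel
broker.publish("sensor/temp", {"value": 21})    # pushed to both
broker.publish("sensor/noise", {"value": 70})   # no subscribers: silently dropped
```

The two properties the text emphasises are both visible: a single publish fans out to every subscriber, and publishing to an unsubscribed channel is perfectly legal.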

An enormous variety of systems have been built to populate this landscape. The CORBA event service [20] represents event queues as CORBA objects exposing pull- and push-model interfaces. As with other OMG services, the event service is purely a specification that provides access to underlying implementations (for example [21], [22]). However, the interface specification provides no scope for providing metadata to adapt the event service automatically, which may limit the scope for autonomic management. The Java Message Service [23] (JMS) takes a similar approach, closely integrated with the Java type system and providing client-side filtering based on message attributes.

iQueue [24] takes a more framework-based approach, providing abstract classes to define event structures. It focuses on event filtering and composition, while supporting a range of underlying delivery mechanisms including SOAP and JMS. Siena [25] is similarly framework-based, being perhaps the canonical example of a publish-and-subscribe system whose scalability comes from constructing a spanning tree of servers for event delivery.

One other approach to adaptation in event systems is to overlay the event delivery mechanism onto a lower-level adaptive middleware. The Scribe system [26] leverages the Pastry peer-to-peer infrastructure (see below) to provide adaptive distribution in a fully decentralised fashion. Event channels are represented by keys, with events being retrieved using Pastry's distributed hash table mechanism, which provides unordered but highly reliable storage and retrieval. Similarly, Bayeux [27] builds a spanning-tree delivery system on top of Tapestry to support highly efficient delivery over a wide area. The STEAM system [28] acts as a stand-alone middleware using its own MAC-layer protocol to achieve high-volume real-time delivery. Many agent-based systems also use some form of event exchange, coupled with the mobility of individual agents around the network.

Event systems offer two main routes for adaptation. Firstly, the system may allow applications to select different delivery guarantees as appropriate to the semantics of the events being exchanged. This adaptivity is typically only available at design time, as different guarantees have different visible effects on applications and so require different design strategies. Secondly, the system may allow the underlying delivery infrastructure to adapt to changing conditions, either autonomously or via a management API. User-level management implies the collection of metadata on service use and underlying network conditions.

While there are obvious similarities between MOM and events, the differences are typically a matter of emphasis. MOM systems focus on point-to-point reliable delivery with predictable ordering, appropriate for database integration and financial exchanges in which these issues are of vital importance. Event systems typically focus on the extent to which the system can scale to handle high volumes of events being transferred quickly over a wide area.

Although event systems tend to be built bottom-up, exchanging low-level information on system state changes, they can also be used at a higher level. Typically this involves constructing composite events, combining a number of low-level occurrences into single higher-level events. A good example is described by Hayton et al [29], who provide a simple algebra for event combination that can be used to "raise the level" of events and allow higher-level composite events to be created from "workflows" of more primitive events. It is an open question whether such techniques would work well in the presence of uncertainty, such as when events are generated from inference [30].
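As a much-simplified illustration of the composite-event idea (not Hayton et al's actual algebra; the event names and the `FollowedBy` combinator are invented), a detector can raise a single higher-level event when one primitive event is followed by another:

```python
class FollowedBy:
    """Raise a higher-level composite event when an event of type `first`
    is followed by one of type `second` (a much-simplified composition
    rule, not the full algebra of Hayton et al)."""

    def __init__(self, first, second, action):
        self.first, self.second, self.action = first, second, action
        self._armed = False  # True once `first` has been seen

    def feed(self, event_type, payload=None):
        if event_type == self.first:
            self._armed = True
        elif event_type == self.second and self._armed:
            self._armed = False
            self.action((self.first, self.second, payload))  # composite event

raised = []
detector = FollowedBy("door_open", "badge_swipe", raised.append)
for ev in ["badge_swipe", "door_open", "badge_swipe"]:
    detector.feed(ev)
print(raised)  # -> [('door_open', 'badge_swipe', None)]
```

Richer algebras add disjunction, conjunction and time windows in the same style; uncertainty (events produced by inference) would require attaching confidences to each primitive event, which this sketch does not attempt.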

III.4. Peer-to-peer systems

Peer-to-peer (P2P) systems are an area of significant current interest. By removing the need for infrastructural support, P2P systems can potentially support wireless (and other) ad hoc networks extremely well, while distributing the load and costs of service provision over the node population. However, this flexibility comes at the cost of reduced reliability and weaker guarantees. Moreover, P2P systems can offer no guarantee that particular services will be available at all, since all services are provided by the nodes themselves rather than by a managed central provider.

At the protocol level, P2P systems have radically different requirements to the “standard” internet. Supporting these requirements may be accomplished in two ways: by providing a novel transport (MAC layer) protocol to replace IP, or by overlaying a higher-level protocol on top of standard protocols such as TCP/IP. The former has been explored extensively in the context of ad hoc wireless networks where there is no legacy infrastructure; the latter is more popular for wired systems, and is essentially unavoidable for systems that seek to span enterprise or the global internet. In either case the P2P network must provide at least (logical) routing, and generally other higher-level services such as node discovery and content management, to applications.

The primary issue for a P2P middleware is routing. In overlay networks, the routing system must translate P2P-level node identifiers into addresses valid for the underlying network (IP addresses, for example). There is typically no essential relationship between either the address spaces or the network topologies of the underlying and overlaid networks.

The Pastry [31] system provides a good example of an overlay P2P system. Nodes in the network are each assigned a 128-bit identifier by a central authority. The identifier space forms a ring topology, independent of the underlying IP addresses of the nodes – nodes that are "close together" in identifier space may be physically far apart, and vice versa. The address space is sparse, so that only a few of the possible addresses in the ring will be alive at any time.

To route packets, each node maintains a routing table mapping nodes that have identifiers close to its own to their IP addresses, learning new nodes by observing the history of packets it routes. Each incoming packet is routed to a node that the receiver knows about that has an identifier "closer" to the target than the receiver. This strategy is robust against quite substantial amounts of node failure [32], and packets which cannot be routed to their target will end up at a node that is "close" in identifier space (Figure 4).

Figure 4 Typical peer-to-peer ring topology for node identifiers

Topologie typique d'un anneau P2P pour l'identification nodale
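The greedy forwarding rule described above can be sketched as follows, using a toy 8-bit identifier ring rather than Pastry's 128-bit space and prefix-based routing tables; the node identifiers and routing tables are invented for illustration:

```python
SPACE = 2 ** 8  # toy 8-bit identifier ring; Pastry uses 128-bit identifiers

def ring_distance(a, b):
    """Shortest distance between two identifiers around the ring."""
    d = abs(a - b)
    return min(d, SPACE - d)

def route(start, target, tables):
    """Each hop forwards to the neighbour in its own (partial) routing
    table that is strictly closer to the target, stopping when no
    neighbour improves -- the packet ends up 'close' in identifier space."""
    current, hops = start, [start]
    while True:
        best = min(tables[current] | {current},
                   key=lambda n: ring_distance(n, target))
        if best == current:
            return hops  # no closer node known: deliver here
        current = best
        hops.append(current)

# Five live nodes, each knowing only its next two ring successors.
live = [3, 45, 99, 170, 230]
tables = {n: {live[(i + 1) % 5], live[(i + 2) % 5]} for i, n in enumerate(live)}
print(route(3, 100, tables))  # -> [3, 99]: delivery ends at the node nearest 100
```

Even though node 100 does not exist, the packet terminates at node 99, the live node closest to it in identifier space, which is exactly the behaviour the distributed hash table below relies on.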

The main service on top of Pastry is a distributed hash table providing persistent storage. To inject content into the network, a node hashes the content, maps this hash to a node identifier, and routes the content to that node. Routing will take the content either to the target node or "close" to it, with the final node replicating the content onto "nearby" nodes. Retrieval works by querying the node identified by the content's hash. If that node is present, it will have the content; if not, the request will anyway be routed "close" to the node, and this "close" node (or nodes "close" in turn) may have a replica that can satisfy the request. One may adapt this scheme to deal with different predicted frequencies of node dynamism, trading space (more replicas) for robustness (more nodes may leave or fail). It is important to realise that this scheme inherently distributes content across the globe (if run on the open internet) through the independence of node identifiers from IP addresses: even "close" nodes may be physically remote.
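The store-and-retrieve scheme can be sketched as follows (an illustrative toy, not Pastry's actual DHT implementation: the small identifier space, the hash truncation and the replication factor are all assumptions made for brevity):

```python
import hashlib

SPACE = 2 ** 8  # toy identifier ring, as before

def content_id(data):
    """Hash content into the node identifier space."""
    return int.from_bytes(hashlib.sha1(data).digest(), "big") % SPACE

def closest_nodes(key, live, k):
    """The k live nodes nearest the key in ring distance."""
    dist = lambda n: min(abs(n - key), SPACE - abs(n - key))
    return sorted(live, key=dist)[:k]

def put(store, live, data, replicas=2):
    """Replicate content onto the nodes 'closest' to its hash."""
    key = content_id(data)
    for node in closest_nodes(key, live, replicas):
        store.setdefault(node, {})[key] = data
    return key

def get(store, live, key):
    """Query live nodes in order of closeness until a replica answers."""
    for node in closest_nodes(key, live, len(live)):
        if key in store.get(node, {}):
            return store[node][key]
    return None

live = [3, 45, 99, 170, 230]
store = {}
key = put(store, live, b"some content")
print(get(store, live, key) == b"some content")  # -> True
```

Raising `replicas` trades space for robustness: with two replicas, retrieval still succeeds after the "root" node for a key leaves, because the next-closest node holds a copy.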

Sun's JXTA technology [33] provides a somewhat more structured approach to P2P systems that provides peer groups and service advertisement in the core protocol. Tapestry [34] provides similar content replication to Pastry, augmented with facilities for locating the physically closest replica. This addresses one of the key issues with P2P, that the routing algorithm can result in routes that are significantly (and often perversely) longer than required.

The Intentional Naming System (INS) [35] uses P2P technology to build an infrastructure for locating services using descriptions of their "intention" (print service, display etc), similar in intent to the CORBA trading service. INS constructs a P2P overlay of name resolvers which perform late binding of services against service descriptions. Service requests take the form of data payloads piggy-backed onto service descriptions, allowing service invocation in a single routing step. A service announcement mechanism allows clients and resolvers to discover nodes offering services with particular intentions.

P2P systems are not constrained to use ring topologies: some peer-to-peer systems have topologies targeting specific applications, and some systems (notably T-Man [36]) allow the overlay topology to be adapted dynamically to any desired form in a surprisingly small number of local steps without global co-ordination.

P2P systems may also dispense with point-to-point message routing in favour of "gossipped" interactions in which collections of nodes periodically synchronise their local states. Construct [37] uses this approach within the framework of exchanging RDF-structured content. The advantage of this scheme is that queries can be answered using local state: the state may not be completely up-to-date, but its timeliness can be bounded statistically by the properties of the gossiping protocol. This also improves redundancy and reduces communication hot-spots.
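A single round of such a gossip protocol can be sketched as follows (a toy push-gossip with invented node and key names, not Construct's actual protocol); repeated rounds spread each node's state across the whole population with high probability:

```python
import random

def gossip_round(states, fanout=2, rng=random):
    """One push-gossip round: every node sends its local key/value state to
    `fanout` random peers, which merge it (last-writer-wins by version)."""
    updates = []
    for sender, state in states.items():
        for peer in rng.sample([n for n in states if n != sender], fanout):
            updates.append((peer, dict(state)))
    for peer, state in updates:
        for key, (version, value) in state.items():
            if key not in states[peer] or states[peer][key][0] < version:
                states[peer][key] = (version, value)

# Node "a" learns a new fact; a few rounds spread it to all peers.
states = {n: {} for n in "abcdef"}
states["a"]["temperature"] = (1, 21.4)
rounds = 0
while not all("temperature" in s for s in states.values()):
    gossip_round(states)
    rounds += 1
print(f"converged after {rounds} rounds")
```

The number of rounds needed grows only logarithmically with the population size in expectation, which is what bounds the staleness of answers computed from local state.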

IV. F UTURE DIRECTIONS FOR ADAPTIVE MIDDLEWARE

Design and characterisation of middleware platforms for the future is an ongoing and active area of research. We have described a view of five main categories of existing systems – ORB-based, message-oriented, tuple-based, event-based and peer-to-peer – highlighting their strengths and weaknesses. In this section we outline the emerging issues and challenges for middleware architects, and indicate areas of work exhibiting desirable features that merit further investigation as solutions to these problems.

A distillation of the principles and platforms reviewed leads to the following high level research questions:

• Predictability and stability: How do we model the system and its properties, monitor these properties, and act to maintain the stability of the system over time?

• Fault tolerance and reliability: Autonomic behaviour fundamentally changes the notion of fault tolerance. We need to understand the fundamental trade-off between adaptation and reliability, and in doing so develop new models of fault tolerance in autonomic systems.

• Naming and binding: Naming resources and locating data in a system that evolves requires new approaches to the fundamental systems problem of binding.

• Programming: Finally, all the above problems, coupled with the evolutionary behaviour of autonomic systems, demand that new primitives, abstractions and frameworks be provided to the programmer.

Underlying these four core problems we have identified two key issues: system description and context.

IV.1. Description and meta-description for composition

A core challenge in building middleware systems relates to how we build, share and use understandable descriptions. It should be possible to describe components and their interactions in a way that explicitly prescribes their abstract roles in a system. There must be a mechanism that affords description not only of the information but also of the system and its configuration at a high level. We need to be able to reason about the semantics of these descriptions both by manual inspection and automatically. The emergent semantic web technologies are a step towards providing mechanisms that support this.

Systems need to be self-describing. Well-designed CORBA enterprise systems are often built in this manner. Components are fully documented, marked-up and registered with a trading (or other) service. Clients in the system can locate, interrogate and activate services based on their metadata descriptions. Understanding of component behaviour and dynamic composition of systems is facilitated in this way.

The essence of middleware is to abstract complexity whilst supporting the combination of distributed services. It should be possible to describe a system as a composition of independent components and connections. Given a closed set of components, existing systems use registries and traders for this purpose. The known set of components, connectors and behaviour are defined a priori by the system designers. These are documented with rules, policies, aspects and so forth, and registered with the appropriate services. Adaptive systems will additionally introduce a variety of composition rules (for the user, applications developer, system, and hardware) which will be generated dynamically. This results in an open set of components to be reasoned about. Dynamic composition impacts on the ability to support and maintain robust, predictable systems. This can be illustrated by the following scenario2. A user walks through an intelligent building issuing commands to print three documents. The middleware infrastructure in the building adapts to the user's movement and routes each request to the nearest printer – with the result that the three documents end up on different printers, possibly including devices of which the user is unaware. This example of "print to nearest printer" has appeared several times in the adaptive systems literature: the problem is that the adaptation is unstable under movement by the user, and implies that the space of adaptation is completely known to the user – neither of which may in fact be the case. Making the "correct" behavioural adaptation requires that the middleware has access to substantial metadata about the space, the users' likely tasks, and so forth.

To realise open adaptive systems, a structure for describing relationships between components, coupled with a semantic description of the rules, is required. We can use this to describe "closures" of adaptive systems and then analyse whether all the behaviours are "correct". The reverse process allows a given composition to be reused via decomposition into constituent parts. It should be possible to reuse components, connectors and architectural patterns even if they have been developed for another purpose. The ability to perform such operations is strongly related to the self-describing property discussed above. Work on the open standards RDF and RDQL, and on self-describing formats such as XML, will play an integral role.

2 Rather irreverently referred to as the "Dude! Where's my printout?" scenario…

Use of open standards will be a core aspect of the adoption of next-generation middleware systems. Traditional middleware was often susceptible to vendor lock-in, or closed to the outside world. The CORBA standard introduced IIOP to counter this, which has resulted in a highly accessible middleware platform. The move towards openness has manifested itself in the web services platforms, with XML, SOAP and the omnipresent set of web technologies.

In many ways the container architecture typical of object-broker systems provides a good platform on which to build adaptive services. The container provides a single point of control within which to manage communication and adaptation, as well as less studied but equally important services such as deployment and update. Such solutions need not be centralised, and – by off-loading complexity from application programmers – may result in more scalable and robust systems with improved potential for commercial roll-out. This implies, however, that the container architecture, semantics and service definitions are defined with proper respect for the autonomic principles they are attempting to support.

A further aspect of openness is to move towards metadata-, rule- and policy-driven adaptation. Metadata ranges from enhanced deployment descriptors to complete metadata models with complex and extensible semantics. This opening of the system architecture – away from the more prescriptive approaches of earlier middleware – has two important effects. Firstly, it makes systems “open-adaptive” [38] by allowing adaptations to be defined post-deployment in response to new techniques or available information. Secondly – and perhaps more importantly – it raises the semantic level at which adaptive behaviour is expressed, making it possible (at least in principle) to reason about the possible effects and interactions of different strategies. This is especially important in systems with minimal human oversight.
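As a minimal sketch of rule- and policy-driven adaptation (the policy conditions, metric names and actions are all invented for illustration), policies can be expressed as data evaluated against collected metadata, so that new policies can be loaded after deployment – the "open-adaptive" style described above:

```python
# Policies are plain data: a condition over observed metadata plus an
# action name. Because they are data, new policies can be added
# post-deployment without changing the middleware itself.
policies = [
    {"when": lambda m: m["latency_ms"] > 200, "then": "switch_to_nearest_replica"},
    {"when": lambda m: m["battery_pct"] < 15, "then": "reduce_polling_rate"},
]

def decide(metrics, policies):
    """Return the adaptation actions whose conditions match the current metadata."""
    return [p["then"] for p in policies if p["when"](metrics)]

print(decide({"latency_ms": 350, "battery_pct": 80}, policies))
# -> ['switch_to_nearest_replica']
```

Raising the semantic level further would mean expressing the conditions declaratively (rather than as opaque functions) so that the system could reason about conflicts and interactions between policies before enacting them.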

IV.2. Trade-offs, context and open problems

It is common to find the notion of optimising everything in a system, making everything efficient. There are, however, many examples of optimisations that pull in opposite directions: peer-to-peer overlay networks trade the robustness of random distribution against longer routes to data, which slow the system down; gossiping networks trade massive duplication of data, using up memory, for availability. The system must be aware of what it is optimising for, and of what it is trading off to achieve it.

Contextualisation of the system and its components will aid in such decisions. Adaptive systems need to be able to contextualise interactions in order to adapt the infrastructure, information or its delivery to the semantics of use. Given the current state, the goals and the context, a more informed decision can be made as to which adaptations to activate. There is a tendency to collect too little metadata on interactions, often in the name of efficiency. This is, however, a false economy: increased instrumentation will enable improved adaptation over the medium term, by providing a larger set of observations against which to evaluate adaptation decisions. Collection of context needs to be built into the foundations of any adaptive middleware: Construct, for example, treats all information sources (including information about its own actions) as sensors integrated into a common model. Without such rich and interconnected context, effective adaptation will be a forlorn dream.

As computational devices become ubiquitous and sensorised, a need for sensor-based middleware emerges. This must be lightweight with an ability to handle a large set of heterogeneous hardware. Open problems with portable devices are new to the middleware developer: for example how does a component rebind to a device that has physically moved off the network? How can such errors be recovered from? How can recovery strategies be expressed without obscuring the main line of the code? These problems will appear in all adaptive middleware systems.

Adaptive systems do not involve purely local interactions, and typically need to utilise an appropriate set of local and global resources to achieve their task. Furthermore, many local actions have global implications: for example, contacting a (global) key server to authenticate a local node allows communication to begin. Devices must have the ability to move between local environments and retain context, which also requires (at least) non-local information management. Distributed data management technologies such as distributed hash tables and gossiping architectures are starting to address these issues. Adaptation can interact badly with certain end-to-end properties such as security. The fact that a system is adaptive should not cause it to reveal information that needs to be concealed: mere observation of a system adapting may allow deductions as to its state from the way it is adapting. A home automation system may give away the absence of the householder by turning the lights off at night. It is unclear whether middleware architectures can address such indirect information flow effectively.

Finally, the latency of adaptation in the system may prove highly problematic. Adaptation requires a system to collect information before arriving at an adaptation decision – but the time required for this data collection may render the adaptation irrelevant. Indeed the same phenomenon may be observed in reverse: overly-aggressive (low-latency) adaptation may ignore some of the information needed to make a correct decision. The timeliness of adaptation needs to be considered alongside its correctness, and some of the techniques developed for real-time systems may have a role to play.

V. C ONCLUSION

We have presented a survey of the major middleware approaches in current use from the perspective of their support for the development of adaptive and autonomic systems. From this survey we have identified some emerging trends that may be used to shape the development of future middleware platforms that better address these core emerging domains.

All adaptive systems pose fundamental questions about the role of the human administrator: the desire for rapid and flexible adaptation exists in tension with the need for human intervention in deciding the most appropriate adaptation strategy. It has been suggested3 that successfully addressing the challenges of autonomic systems may require solving the "strong AI" problem – essentially replacing the human in the loop with an equivalently "intelligent" (in some sense) artificial control system. While no final decision on this conjecture is possible at present (if ever), we believe that trends such as full context modelling, scalable and peer-to-peer resource sharing, and the end-to-end treatment of security and privacy provide mechanisms to substantially simplify the construction, configuration and ongoing management of adaptive systems.

It is often forgotten that both adaptation and optimisation are relative terms: one adapts (or optimises) only with respect to some external criteria, with a corresponding impact on other facets of the system. Perhaps the most significant problems for adaptive middleware are (firstly) to allow designers to choose which parameters must be adapted to, (secondly) to understand which parameters or features must be sacrificed to accomplish this, and (thirdly) to provide the information necessary to make these choices accurately on an ongoing basis. These decisions arise externally, from the application domain, rather than internally from the middleware itself, implying that future middleware platforms (of whatever flavour) must embody an increased semantic understanding of the domain in which they reside and of their relationship to it. We believe that such well-founded and well-informed adaptive behaviour will be the critical enabler for high-performance and high-reliability middleware platforms in the future.

3 During a panel discussion at the 2nd International Conference on Autonomic Computing, 2005, Seattle WA.

R EFERENCES

[1] NAUR (P.), RANDELL (B.) (eds.), Software Engineering, Report on a conference sponsored by the NATO Science Committee, Garmisch, Germany, October 1968.

[2] WALDO (J.), WYANT (G.), WOLLRATH (A.), KENDALL (S.), A Note on Distributed Computing, Mobile object systems: towards the programmable Internet, Springer-Verlag, pp. 49-64, Jan. 1997.

[3] OBJECT MANAGEMENT GROUP, The Common Object Request Broker: Architecture and Specification, 2.0 ed., July 1995.

[4] BAKER (S.), DOBSON (S.), Comparing Service-Oriented and Distributed Object Architectures, Proceedings of the International Symposium on Distributed Objects and Applications, Volume 3760 in LNCS, Springer-Verlag, Meersman (R.), Tari (Z.), et al (eds), pp. 631-645, 2005.

[5] OBJECT MANAGEMENT GROUP, CORBA Scripting Language, v1.0, OMG Document formal/01-06-05, June 2001.

[6] OBJECT MANAGEMENT GROUP, Python Language Mapping Specification, OMG Document formal/01-02-66, February 2001.

[7] SCHMIDT (D.), SUDA (T.), An Object-Oriented Framework for Dynamically Configuring Extensible Distributed Communication Systems, IEE/BCS Distributed Systems Engineering 2, pp. 280-293, December 1994.

[8] DE NICOLA (R.), FERRARI (G.), PUGLIESE (R.), KLAIM: A Kernel Language for Agents Interaction and Mobility, IEEE Transactions on Software Engineering, 24(5), pp. 315-330, 1998.

[9] VAN STEEN (M.), HOMBURG (P.), TANENBAUM (A.S.), Globe: A Wide-Area Distributed System, IEEE Concurrency, 7(1), pp. 70-78, January-March 1999.

[10] AKAMAI INC, Home page, "http://www.akamai.com".

[11] ALAGAR (S.), VENKATESAN (S.), Causal Ordering in Distributed Mobile Systems, IEEE Transactions on Computers, Volume 46, Number 3, pp. 353-361, 1997.

中间件技术

中间件技术 定义: 中间件是一种独立的系统软件或服务程序,分布式应用软件借助这种软件在不同的技术之间共享资源。中间件位于客户机/ 服务器的操作系统之上,管理计算机资源和网络通讯,是连接两个独立应用程序或独立系统的软件。相连接的系统,即使它们具有不同的接口,但通过中间件相互之间仍能交换信息。执行中间件的一个关键途径是信息传递。通过中间件,应用程序可以工作于多平台或 OS 环境。 中间件处于操作系统软件与用户的应用软件的中间。中间件在操作系统、网络和数据库之上,应用软件的下层,总的作用是为处于自己上层的应用软件提供运行与开发的环境,帮助用户灵活、高效地开发和集成复杂的应用软件。 中间件特点: ?满足大量应用的需要; ?运行于多种硬件和OS平台; ?支持分布式计算,提供跨网络、硬件和OS平台的透明性的应用或服务的交互功能; ?支持标准的协议; ?支持标准的接口。 由于中间件需要屏蔽分布环境中异构的操作系统和网络协议,它必须能够提供分布环境下的通讯服务,我们将这种通讯服务称之为平台。基于目的和实现机

制的不同,我们将平台分为以下主要几类: ?远程过程调用中间件(Remote Procedure Call) ?面向消息的中间件(MesSAge-Oriented Middleware) ?对象请求代理中间件(object RequeST Brokers) ?事务处理监控(Transaction processing monitors) 1、远程过程调用 远程过程调用是一种广泛使用的分布式应用程序处理方法。一个应用程序使用RPC来“远程”执行一个位于不同地址空间里的过程,并且从效果上看和执行本地调用相同。事实上,一个RPC应用分为两个部分:server和client。server 提供一个或多个远程过程;client向server发出远程调用。server和client 可以位于同一台计算机,也可以位于不同的计算机,甚至运行在不同的操作系统之上。它们通过网络进行通讯。 2、面向消息的中间件 MOM指的是利用高效可靠的消息传递机制进行平台无关的数据交流,并基于数据通信来进行分布式系统的集成。通过提供消息传递和消息排队模型,它可在分布环境下扩展进程间的通信,并支持多通讯协议、语言、应用程序、硬件和软件平台。目前流行的MOM中间件产品有IBM的MQSeries、BEA的MessageQ等。 3、对象请求代理 对象请求代理(ORB)是对象总线,它在CORBA规范中处于核心地位,定义异构环境下对象透明地发送请求和接收响应的基本机制,是建立对象之间client/server关系的中间件。ORB使得对象可以透明地向其他对象发出请求或接受其他对象的响应,这些对象可以位于本地也可以位于远程机器。ORB拦截请求调用,并负责找到可以实现请求的对象、传送参数、调用相应的方法、返回结果等。client对象并不知道同server对象通讯、激活或存储server对象的机制,也不必知道server对象位于何处、它是用何种语言实现的、使用什么操作系统或其他不属于对象接口的系统成分。 4、事务处理监控 事务处理监控(TPM)最早出现在大型机上,为其提供支持大规模事务处理

为什么需要中间件

为什么要中间件? 计算机技术迅速发展。从硬件技术看,CPU速度越来越高,处理能力越来越强;从软件技术看,应用程序的规模不断扩大,特别是Internet及WWW的出现,使计算机的应用范围更为广阔,许多应用程序需在网络环境的异构平台上运行。这一切都对新一代的软件开发提出了新的需求。在这种分布异构环境中,通常存在多种硬件系统平台(如PC工作站,小型机等)在这些硬件平台上又存在各种各样的系统软件(如不同的操作系统、数据库、语言编译器等),以及多种风格各异的用户界面,这些硬件系统平台还可能采用不同的网络协议和网络体系结构连接。如何把这些系统集成起来并开发新的应用是一个非常现实而困难的问题。 中间件在实际的应用过程中,是对应用软件起到支撑作用,最终用户并不直接使用中间件,中间件不是大众消费类软件产品。因此,除非是一个行业专业人士,一般不大可能与中间件打交道,不太了解什么是中间件。 因此,在系统软件之中,操作系统、数据库、中间件的三驾马车,中间件是最“神秘”的。因为,好歹大家通过Windows基本上会了解操作系统是个什么东西,尽管不会很全面,很专业,毕竟是有感觉的。数据库,虽然没有直接见过,但基本上明白数据是要一个“仓库”来储存的,因此,也大致知道数据库管理系统是干什么的。 长期以来,中间件是一个专业化非常强的细分产业。因为中间件的技术门槛比较高,玩家也不多,无论是国外还是国内都是如此。因此,行业内对什么是中间件并不特别在意。而公司名称直接叫中间件的就更少了,“金蝶中间件”应该是国内外直接在公司名称中冠以中间件字眼最早,也是很少的公司之一。另一方面,因为中间件软件还处于发展阶段,还没有完全成熟,因此对中间件的定义也就没有深究,或者权威的说法。 但现在情况有点变化,其中一个原因在于2008年底,国家启动了“核高基”重大科技专项,在基础软件领域明确提出重点支持“操作系统、数据库、中间件、文字处理”等基础软件产业的自主创新,几乎一夜之间大大小小的软件公司都宣称是做中间件的了,只要不是做最终应用软件的,他们的产品都叫中间件了,一时间,中间件变得“蓬勃发展”起来了。 作为中间件行业内的专业化和领先企业来说,大家都重视起中间件来了,这是好事,说明社会上重视了。对行业的发展和繁荣固然重要,但这也隐含了重大的风险。中间件名字被滥用,无论是对用户,对这个产业,对政府和投资人来说,都会有负面的影响。“鱼目混珠,泥沙俱下”的局面,对中间件产业的正常发展未必就是好事情了,也可能对真正的中间件自主创新带来许多困扰,模糊了中间件的本质,可能会弱化中间件核心技术的创新和发展。 因此,在这种情况下,无论是对行业内,还是行业外,突然“什么是中间件”的问题变成了一个大问题了。

应用中间件要求

投标方必须保证本项目所需软件产品获得生产厂家的合法授权,且为最新版本,并在售后服务承诺中保证提供至少一年的免费升级服务和技术支持服务。主要的应用支撑软件要求如下: 一、数据库系统 投标方提供的数据库管理系统需满足以下具体技术要求: 1、基本功能 提供丰富的数据类型支持,提供丰富的内置函数,主要包括:数学函数、字符串函数、日期时间函数、聚集函数、大对象函数等。支持自定义存储过程/函数,支持触发器,支持视图。支持完整性约束,支持事务的4种隔离级别。 支持海量数据存储和管理,数据存储量为32T以上,单个大对象的最大容量要支持到4GB。并发控制支持表锁、行锁和页锁,具有大规模并发处理能力。 支持集中的数据库管理,提供远程跨平台数据库管理工具;提供良好的性能监控、调整手段;提供跨库、跨系统数据管理能力。 2、安全要求 支持强用户身份鉴别:为用户身份鉴别提供口令、指纹和Radius等多种身份鉴别方式,并允许系统管理员自行配置用户身份鉴别类型。 支持自主访问控制机制:利用对象的ACL列表来检查某个用户是否具有对某个对象的某种访问权限,支持强制访问控制机制:提供基于标签的访问控制方式。提供多种加密方式来保证数据存储安全,至少支持外部密钥加密套件和透明加密两种方式。提供基于证书机制的数据加密传输。提供独立的安全审计,支持系统特权审计、用户审计、语句审计和对象审计四种类型的审计,既可以审计执行成功的语句也可以审计执行失败的语句。支持三权分立的安全体系,建立系统管理员、系统审计员、系统安全员的三权分立安全模型,并将访问控制的粒度细化到行级。 3、性能要求 支持多种索引,支持多种查询优化策略,支持存储过程优化、基于代价的查询优化、基于规划的查询优化,支持高效的自动数据压缩。支持物化视图,提供并行查询能力。支持一级及二级水平分区,包括:hash分区,range分区和list

浅谈未来中间件技术发展的两大趋势

浅谈未来中间件技术发展的两大趋势 中国的ERP开发与应用经过多年的发展,已经取得了很大的成就,“普及”与“深化应用”已经成为当前ERP市场的两大主题。不论普及还是深化都意味着越来越多的人更加关注ERP的实际应用效果,更加关注ERP能否为企业带来实际的效益。对于企业和ERP厂商来讲都是非常有意义的。 ERP应当普及,但如何理解普及?是不是仅依靠某种产品低价销售,或者跑马圈地的方式来普及?这有待于商榷。 首先ERP不能用一个标准的产品销售方式来推广。ERP在管理对象上非常复杂,特别是随着市场竞争的加剧,每个企业在生存和发展过程中都形成了自己的管理理念、流程和方法,不可能用一个标准的产品来适合不同的对象。 所以把ERP的普及比喻成当年的T型车,不是十分妥当。在那个年代大家对需求比较单纯,认为汽车只是交通工具,而现在人们对汽车的要求就很高,不同的型号、配置和特殊要求,需要按订单制造和装配,所以T型在这个年代是不适合的。 大家的需求非常复杂、多变,所以想用某种产品或某一类产品做成标准去推广、普及是不行的。而同时ERP确实需要普及,但如何普及则需要按ERP的发展和应用客观规律来做。首先就是企业流程优化,如果实施ERP不做流程优化,不改变企业传统模式和管理方法,不吸收一些行之有效的管理模式,其结果就是失败。 另外,ERP普及并不意味着ERP产品低价。ERP实施需要资深管理专家和专业的实施顾问,要做流程优化、设计、系统设置和培训等方面的大量工作,这些都是实实在在实施成本。从国外ERP市场情况来看,ERP整个后期实施的费用远远大于软件产品费用。所以用一个很低的软件费吸引客户采购,这种方法是危险的。 中间件(middleware),顾名思义,是处于操作系统与应用软件的之间的基础软件,其作用是为处于自己上层的应用软件提供运行与开发的环境,帮助用户灵活、高效地开发和集成复杂的应用软件。 10年前,中间件的概念刚刚提出,而如今中间件已成为一个拥有上百亿美元市场的关键软件分类,并成为构建网络分布式异构信息系统不可缺少的关键技术,与操作系统、数据库管理系统并列为基础软件体系的三大支柱。 中间件的价值在哪?中间件如何影响产业的变化? 随着IT系统对企业发展的重要性的不断提升,信息系统也变得越来越复杂,必然也无法避免多厂商产品并存的局面。于是,如何屏蔽不同厂商产品之间的差异,如何减少应用软件开发与工作的复杂性,就成为人们不能不面对的现实问题。 显然,由一个厂商去统一众多产品之间的差异是不可能的,而单独由计算机用户在自己的应用软件中去弥补其中的大片空档,由于技术深度和技术广度的要求,必然也是勉为其难。于是,中间件应运而生。中间件试图通过屏蔽各种复杂的技术细节使技术问题简单化。

weblogic中间件介绍

w e b l o g i c中间件介绍 Company Document number:WTUT-WT88Y-W8BBGB-BWYTT-19998

目录

一、Weblogic11g概述

编写目的 ■金税三期以后的综税的产品线中间件由原来Weblogic814,全面升级为Weblogic11g,JDK统一使用及以上版本。 ■为了满足三期后运维要,全面提高运维工程师运维能力。本文档全面介绍了Weblogic11g中间件的基础操作。 功能简介 ■支持最新的 Java 平台、企业版 (Java EE) 规范及Web 服务标准,从而可简化开发并 增强互操作性,以支持面向服务的体系结构 (SOA)。 ■领先的可靠性、可用性、可扩展性和业界领先的性能。 主要优势 ■J2EE应用服务器性能记录的保持者 ■应用程序和服务的可用性和运行时间 ■更好地监视和管理生产应用程序 ■更快、更高效的开发-部署-调试周期 ■卓越的最终用户客户端可用性 ■高效快速的服务器管理 ■简化新应用程序和服务的开发 适用范围

■J2EE应用服务器 ■BS三层架构的应用服务器 Weblogic11G新特性 自调优的企业级内核 ?静态的线程池参数可以不进行设置 ?系统自动维护线程池的大小 ?自动记录系统历史的吞吐量和性能统计 ?为了达到资源的最优分配,自动优化服务器 ?没有本地代码 过载保护 ?合理的处理过量的服务–过载保护 ?根据内存与队列容量的极限值的设定拒绝请求 ?通过降低非关键业务系统的使用资源,来保证关键业务系统的正常 ?过载的时候拒绝新的请求而不是降低整个服务器的服务质量 ?优雅的意外处理 ?可以选择当发生死锁、内存溢出等关键错误时,关闭或暂停服务器动态的配置变化 ?事务式的配置变化– all or nothing! ?大部分的变化不需要重启服务器

第一章:中间件技术介绍

第一章:中间件技术介绍 1.1两层结构与三层结构 长期以来,我们一直使用着"客户端/服务器"的两层结构,这种两层的结构曾让无数人 为之兴奋和惊叹,即客户端提供用户界面、处理业务逻辑,数据库服务器接受客户端SQL 语句并对数据库进行查询,更新等操作,然后操作结果返回给客户端,如图所示。 在一个比较简单的计算机应用系统中,采用两层体系结构的确给人们带来了相当的灵活性。但随着计算机应用水平的飞速发展、企业信息化水平的不断深入、企业客户的不断增 加,以及新业务的不断出现,越来越多的用户对计算机应用系统提出了更高的要求: 1.要能够同时支持成千上万乃至更多用户的并发服务请求 2.由单一的局域网向跨多个网络协议的广域网扩展 3.不仅要支持一般的信息管理,而且还要支持关键业务的联机交易处理 4.从支持单一的系统平台和数据源转向支持异构的多系统平台和多数据源 面对用户的新需求,二层结构的应用模式由于采用客户机与服务器直接联接的方式形成了其固有的一些缺陷: 1.难以维护 clie nt/server 结构用户界面、业务逻辑和数据逻辑相互交错,通常在第一次部署的时候比较 容易,但难于升级或改进,而且经常基于某种专有的协议(通常是某种数据库协议)。它使得重 用业务逻辑和界面逻辑变得非常困难。 2 ?难以扩展 随着系统的升级,系统复杂程度大大增加,难以扩展,另外它是一个封闭的系统,很难与其他的应用系统实现互操作。 3.安全性差 客户端程序可以直接访问数据库,可通过编程语言或数据库提供的工具直接对数据库进行操作,不安全

4?性能不好 客户端直接与数据库建立连接,当有大量的并发用户存在时,会使数据库不堪重负,性能迅速下降,甚至当机。 三层结构 为解决传统二层模式与应用需求日益突出的矛盾,以交易中间件为基础框架的三层应用模式应运而生,三层结构以中间层管理大量的客户端并为其联接、集成多种异构的服务器平台,通过有效的组织和管理,在极为宽广的范围内将客户机与服务器进行高效组合。同时中间件开创的以负载平衡、动态伸缩等功能为代表的管理模式,已被广泛证实为建立关键业务应用系统的最佳环境,使在二层模式下不可能实现的应用成为可能,并为应用提供了充分的扩展余地。这种模式的成功应用已为许多国际大型企业在应用的开发和部署方面节省了大量的时间和金钱。由此促使越来越多的系统开发商和用户采用三层结构模式开发和实施其应用。 三层客户机/服务器模式的核心概念是利用中间件将应用的用户界面、业务逻辑和数据逻辑 分为三个不同的处理层,如图所示? 1.表示层(用户界面):它的主要功能是实现用户交互和数据表示,为以后的处理收集数据, 向第二层的业务逻辑请求调用核心服务处理,并显示处理结果。这一层通常采用VB, PB DELPHI等语言编写,或采用浏览器实现 2.中间层(业务逻辑):实现整个应用系统核心业务逻辑,通常把业务逻辑划分成一个个独立 的模块,用中间件提供的API结合数据库提供的编程接口实现。客户端通过调用这些模块 实现相应的业务操作。 3.数据层(数据逻辑):数据库负责管理整个应用系统的数据资源,完成数据操作。中间层上应用程序 在处理客户端的请求时,通常要存取数据库。 随着市场竞争的日益加剧和企业电子信息化建设的不断深入,高度灵活、能快速部署新服务和新应用的三层结构应用系统将成为企业信息化的必由之路。采用以中间件为基础的三层结构来架构的应用系统不但具备了大型机系统稳定、安全和处理能力高等特性,同时拥有开放式系统成本低、可扩展性强、开发周期短等优点。可以很好解决两层结构所面临的问题。中间件作为构造三层结构应用系统的基础平台,在三层结构中起着关键的作用,下一节我们将对中间件技术做一个概括性的介绍。 1. 2 中间件技术简介

什么是地图发布中间件及其功能应用介绍

什么是地图发布中间件及其功能应用介绍 一、海量影像地图数据发布首选——中间件 如果需要发布海量影像数据快速构建全国离线二维GIS地理信息系统或全球离线三维地球触摸GIS系统,则需要由硬件、软件、数据和GIS平台四部分组成。 1)硬件 硬件主要包括地图数据服务器和客户端PC机。 服务器:主要用于安装中间件、布署GIS应用平台和存储全国卫星影像数据。 客户端:用于加载GIS平台,并接收中间件发布的影像数据、地名路网数据和高程数据。 2)软件 软件主要包括《水经注地图发布服务中间件》(简称“中间件”)和《水经注万能地图下载器》(简称“下载器”)。 中间件:用于发布全国或全球海量卫星影像数据、地名路网和高程数据。 下载器:用于下载卫星影像数据、地名路网和高程数据。 3)数据 用户可以自行下载数据或直接购买下载好的数据。 自行下载:卫星影像数据、地名路网数据和高程数据可以用《水经注万能地图下载器》自行下载。 直接购买:购买之后,会通过邮寄硬盘(全国数据)或阵列柜(全球数据)

的方式为用户提供。 4)GIS平台 由于中间件只是一个基于URL请求返回瓦片数据的功能部件,因此只要可以支持瓦片式影像加载的GIS平台都可以进行调用。 这里推荐几个GIS开发平台供选择: 1)Google Map 离线API 2)Openlayers 二维开源平台 3)ArcGIS API for JavaScript 4)Cesium 开源三维地球平台 5)OsgEarth开源三维地球平台 二、什么是地图发布中间件 简单的讲,地图发布中间件就是为客户端提供影像瓦片的一个Windows系统服务。它只做一件事,也就是客户端通过URL请求的方式,可以快速返回影像瓦片、地名路网瓦片和高程瓦片数据。 获取影像URL示例 http://127.0.0.1:8080/getImage?z=6&y=62&x=35 获取地图路网URL示例 http://127.0.0.1:8080/getlabel?z=6&y=62&x=35 获取高程URL示例 http://127.0.0.1:8080/getDem?z=6&y=62&x=35

中间件安装步骤

中间件服务器安装步骤 1、安装WINDOWS 2003 32位操作系统,更改计算机名,设置许可证每服务器可连接600 个客户端。 2、在控制面板里的增加删除程序界面,如图安装 3、安装相应驱动程序。 4、设置IP,并加入域https://www.wendangku.net/doc/bd10159215.html,。 5、先安装R2再安装操作系统的补丁SP2。 6、安装.NET FRAMEWORK 2.0,安装语言包补丁,安装.NET FRAMEWORK 2.0 SDK。 7、安装https://www.wendangku.net/doc/bd10159215.html,。 8、安装ORACLE客户端,要求安装ODPNET版本第一项和第五项。 9、安装.NET FRAMEWORK 32位补丁1。 10、配置COM+组件服务,如图

COM 安全中四个按钮都要打开,并加入EVERYONE 用户都有全部权限。

11、 把ICARECONTROLLER 的域用户添加到本地管理员组中 12、 拷贝更新服务器上最新的DEBUG 文件夹 13、 运行DEBUG 目录下“生成三层中间件服务器目录.bat ”文件,注册COM+组件

14、修改ICARESYSTEM的属性,如图

15、 把ICAREDA TA.DLL 和IBM.Data.DB2.dll 放到GAC 里,操作方法如下: 建立批处理文件,内容如下:建立bat 文件,内容包括如下,然后把它放在

"D:\Debug\gacutil" /il RegGacFileList.txt pause 然后新建RegGacFileList.txt,里面包括ICAREDATA.DLL、IBM.Data.DB2.dll、EMR_ICAREDATA.DLL和EMR_BASE_VO.DLL文件名即可,格式如下: iCareData.dll IBM.Data.DB2.dll EMR_ICAREDATA.DLL EMR_BASE_VO.DLL 也可以通过控制面板下管理工具中的.NET FRAMEWORK2.0工具进行添加上述四个文件。 16、更新中间件时 a)必须先右键关闭ICARESYSTEM三次,并禁用 b)然后反注册 c)再拷贝新文件 d)再注册 e)启动服务,完成之后再检查ICARESYSTEM的属性,确保没有变成库模式。 17、如果某个中间件之前已经放在GAC里面,首先必须在GAC里进行反注册该中间 件即可,操作方法如下: 建立批处理文件,内容如下 "D:\Debug\gacutil" /ul UnRegGacFileList.txt pause 然后新建UnRegGacFileList.txt,里面包括需要反注册的文件名即可,格式请参考如下:HISReg_SVC(特别注明:不需要加文件扩展名) 18、配置自动更新服务: a)必须先右键关闭UpdateSystem_Svr三次,并禁用 b)再拷贝新文件 c)修改UpdateFiles.xml的版本号和需要更新文件的标志 d)启动服务UpdateSystem_Svr 19、客户端只需要运行icare.exe即可自动更新。

Current State and Future Prospects of Security Monitoring Technology

With the rapid development of security monitoring technology and increasingly diverse information-technology needs, information systems are becoming ever larger, more complex, and more changeable. Middleware technology emerged in response and has become a powerful tool for integrating large information systems. Security systems, an important branch of information systems, will make ever greater use of it.

The level of integration of information systems reflects the level of informatisation of an enterprise, a department, or even a whole country. Establishing and following the relevant technical standards is important work, but the methods of industrialisation cannot simply be copied. In a complex and ever-changing networked world, middleware has become the tool of choice for system integration.

Middleware is a major category of foundational software and belongs to the realm of reusable software. As the name implies, middleware sits between the operating system software and the user's application software: above the operating system, network, and database, and below the applications. Its overall role is to provide a runtime and development environment for the applications above it, helping users develop and integrate complex application software flexibly and efficiently.

Before middleware, applications were developed directly against operating systems, network protocols, and databases, the lowest layers of the computing stack, and the lower the layer, the more complex it is. Developers faced many thorny problems: the diversity of operating systems; intricate network programming and administration; complex and changing network environments; and the inconsistency, performance, efficiency, and security issues brought by distributed data processing. As Shuntai Weicheng observes, these problems have no direct relation to the user's business, yet they must be solved, consuming large amounts of limited time and effort. Hence the idea of extracting and abstracting the common problems faced by applications into a reusable layer above the operating system, to be reused by thousands of applications. This idea ultimately gave rise to middleware as a class of software.

By now, more and more users have embraced middleware; in some large procurement tenders it is explicitly written into the invitation to bid. Demand is still concentrated in vertical markets, but unlike a few years ago, when it was confined to finance, telecommunications, and government, demand from the security industry has begun to grow and is expected to take a sizeable market share, although the level at which middleware is applied there still differs considerably.

Common Server Performance Benchmarks: A Brief Introduction

Common server performance benchmarks in the industry today include:
TPC-C
TPC-E
TPC-H
SPECjbb2005
SPECjEnterprise2010
SPECint2006 and SPECint_rate_2006
SPECfp2006 and SPECfp_rate_2006
SAP SD 2-Tier
LINPACK
RPE2

I. TPC (Transaction Processing Performance Council) is a non-profit organisation founded in 1988 in which all the major hardware and software vendors participate. Its goal is to provide the industry with credible database and transaction-processing benchmark results. Its main current benchmarks are:
TPC-C: online transaction processing (OLTP) database performance
TPC-E: online transaction processing (OLTP) database performance
TPC-H: business intelligence / data warehouse / online analytical processing (OLAP) performance

1. TPC-C: a database transaction-processing test that models a wholesaler's order-management system. It measures the OLTP performance of the server and database software. An official TPC-C result must report a tpmC value (TPC-C transactions per minute) together with the price/performance ratio $/tpmC. Results written as tpm, TPM, TPMC, or TPCC are not official.

2. TPC-E: a database transaction-processing test that models a stock-trading system. Like TPC-C, it measures the OLTP performance of the server and database software. An official TPC-E result must report a tpsE value (TPC-E transactions per second) together with $/tpsE; results written in any other form are not official.

Comparison: relative to TPC-C, the TPC-E test adds an application-server tier to the test configuration and increases the complexity of the database schema, while lowering the cost of testing. To date only around 50 TPC-E results have been published, all on PC servers running Windows, with none on Power servers. Moreover, the TPC has never declared that TPC-E replaces TPC-C, so claims that it does are unfounded. A comparison of the TPC-C and TPC-E database schemas is attached.
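The price/performance metric can be made concrete with a small worked example. The cost and tpmC figures below are invented for illustration, not a published result:

```python
"""Price/performance for a TPC-C result (illustrative figures only)."""
total_system_cost = 600_000   # USD: hypothetical total system price
tpmC = 1_200_000              # hypothetical TPC-C transactions per minute

price_performance = total_system_cost / tpmC  # $/tpmC, lower is better
print(f"${price_performance:.2f}/tpmC")
```

So a $600,000 configuration delivering 1,200,000 tpmC would be reported at $0.50/tpmC; two results can only be compared fairly when both the throughput and this ratio are disclosed.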

Middleware Development Plan

I. Why middleware is needed
The call centre's own invocation methods support only standalone single-page calls, which cannot satisfy users of a B/S (browser/server) architecture who need to invoke call-centre functions from multiple pages. A middleware server is therefore used to connect to the call-centre system, simulate an independent communication channel, and forward the functions requested from multiple pages to the middleware, which acts as the sole caller of the call centre. This preserves the call centre's invocation mechanism while delivering its full functionality to the B/S application.

II. Development plan 1 (C/S client invocation)
The middleware consists of a server side and a client side.

1. Server-side functions:
a) Associate CRM users with call-centre agents, recorded in the middleware's agent-information table.
b) Receive and record client status and the agent status reported by clients.
c) Receive and record the call-centre commands sent by the B/S application.
d) Record the call-centre invocations made by the client and the B/S application, along with their results.

2. Client-side functions:
a) Connect to the call-centre server and implement agent login.
b) Obtain the agent status from the call-centre server and send it to the middleware server.
c) Invoke the call-centre telephony functions using the C/S development documentation and the supplied OCX/DLL components.
d) Send a record of each telephony invocation to the middleware server.
e) Receive from the middleware server the telephony commands issued by the B/S application, determine the function type and parameters from each message, translate them, and send them to the call-centre server.
f) When invoking transfer, barge-in, monitoring, and similar functions, read the middleware server's agent-status table to obtain lists of idle and busy online agents, assemble the list entries into call parameters, and invoke the telephony function by single- or double-clicking an agent in the list.
g) Monitor incoming-call events and pop up the CRM screen based on the caller number and the logged-in agent.

3. CRM telephony invocation:
a) Send a command message to the middleware server, then query the execution result 500 ms later. If the result is failure, display a failure message; if success, do nothing.
b) Read the middleware server's agent-status table to obtain lists of idle and busy online agents, assemble the entries into call parameters, single- or double-click an agent in the list to send the transfer command to the middleware server, and automatically create or link the sales lead and the calling customer's record.
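The send-then-poll flow of item 3(a) can be sketched with an in-memory stand-in for the middleware's command table. All names here are hypothetical; a real implementation would use the middleware server's database, and the client would wait the 500 ms from the text before querying.

```python
"""Sketch of the command/poll pattern from section 3(a)."""
import time
import uuid

# Stand-in for the middleware server's command table.
commands = {}

def send_command(action, **params):
    """Client side: record a command message and return its id."""
    cmd_id = str(uuid.uuid4())
    commands[cmd_id] = {"action": action, "params": params, "result": None}
    return cmd_id

def execute_pending():
    """Middleware side: execute recorded commands (always succeeds here)."""
    for cmd in commands.values():
        if cmd["result"] is None:
            cmd["result"] = "success"

def poll_result(cmd_id, delay=0.5):
    """Client side: wait (500 ms in the text), then query the result."""
    time.sleep(delay)
    return commands[cmd_id]["result"]

cmd_id = send_command("transfer", agent="1001")
execute_pending()                        # normally done by the middleware service
print(poll_result(cmd_id, delay=0.01))   # "success", or "failure" on error
```

The CRM page never talks to the call centre directly; it only writes a command and reads back a result, which is what keeps the middleware the sole caller of the call-centre system.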

A Survey of Middleware Technology

Abstract: This paper introduces the origin and development of middleware, elaborates its definition, classification, functions, and roles, points out its advantages and disadvantages, analyses the current state of middleware technology, and finally discusses its application prospects and development trends.
Keywords: unified software development platform; middleware technology

1 Introduction
With the development of Internet application technology, system design based on the client/server model has been widely applied to the design and development of all kinds of software systems, changing traditional approaches to application design and system implementation. To support it, a kind of software between the client and the server was proposed: middleware. Middleware is a class of software that sits between application software and system software; it is independent of hardware and database vendors (sitting in the middle of their products and interconnecting them); it is the connector between client and server, and an intermediate product that requires further development.

Thus arose "middleware-based software development", which combines software reuse, distributed object computing, and enterprise application development. This approach uses a software architecture as the assembly blueprint and reusable software components as the assembly modules, supports the reuse of assembled software, and greatly improves software productivity and quality.

2 Middleware technology
2.1 Classification of middleware
Because middleware covers a very wide range, there is as yet no precise definition of it, so its classification differs depending on the viewpoint or level. Based on differences in purpose and implementation mechanism, middleware is generally divided into the following categories:
remote procedure call middleware;
message-oriented middleware;
object request broker;
transaction processing monitor;
database middleware;
proprietary middleware.
The first three categories are known as pipes: they provide various forms of communication services upwards, including synchronous, queued, publish/subscribe, and broadcast communication.

Middleware Introduction

Document number: [KKIDT-LLE0828-LLETD298-POI08]

1. Ice:
ICE (Internet Communications Engine) is a high-performance middleware from ZeroC on which carrier-grade solutions can be built. When designing a website architecture, ICE can implement the basic object operations of the application, encapsulating basic object and database operations in this layer, while the business-logic and presentation layers (Java, PHP, Python, and so on) provide richer behaviour, yielding a clean architecture. An ICE-based data layer can be conveniently extended in the future. ICE supports distributed deployment management, message-oriented middleware, grid computing, and more.

It is a distributed object-oriented middleware from ZeroC for distributed heterogeneous computing; distributed interactive computation can be written in C++, Java, C#, and other languages. Its main design goals are:
- to be a platform suitable for heterogeneous environments;
- to offer a complete feature set supporting real development in a wide range of domains;
- to remove unnecessary complexity, making the platform easier to learn and use;
- to be an implementation that is efficient in network bandwidth, memory usage, and CPU overhead;
- to be an implementation with built-in security, suitable for insecure public networks.

2. JBoss:
JBoss is an open-source J2EE-based application server. Its code is licensed under the LGPL and may be used free of charge in any commercial application. JBoss is a container and server for managing EJBs, supporting the EJB 1.1, EJB 2.0, and EJB 3 specifications. However, the JBoss core services do not include a web container for servlets/JSP, so it is usually used together with Tomcat or Jetty.

In the J2EE field, JBoss has been the fastest-growing application server. Because it is distributed under the commercially friendly LGPL and developed by an open-source community, it is widely popular. The JBoss application server also has many excellent qualities:
(Figure: the JBoss administration console after startup)
first, it uses the revolutionary JMX microkernel as its backbone;
second, it is itself service-oriented (Service-Oriented Architecture, SOA);
third, it has a unified class loader, enabling hot deployment and hot undeployment of applications.
It is therefore highly modular and loosely coupled. The JBoss application server is robust, high-quality, and performs well.

1. JBoss is a free, open-source J2EE implementation distributed under the LGPL licence, though commercial offerings also exist through a separate channel.
2. JBoss needs relatively little memory and disk space.
3. Installation is simple: unzip and configure a few settings.
4. JBoss supports hot deployment: to deploy a bean, just copy its file into the deployment path and it is loaded automatically; changes are also picked up automatically.
5. JBoss runs in the same process as the web server, so servlet calls to EJBs do not cross the network, greatly improving efficiency and security.

TUXEDO Middleware: Introduction and Applications

I. Preface
First, what is middleware? Middleware is an independent system-software or service layer through which distributed applications share resources across different technologies; it sits above the operating systems of clients and servers and manages computing resources and network communication.

Middleware shields the complexity of the underlying operating systems, presenting developers with a simple and uniform development environment. It reduces the complexity of program design, lets developers concentrate on their own business, and removes the need to repeat porting work across different system software, greatly reducing the technical burden.

In a research report, the well-known consultancy Standish Group summarised ten advantages of middleware:
- shorter application development cycles
- lower application development costs
- lower initial system construction costs
- lower application-development failure rates
- protection of existing investments
- simplified application integration
- lower maintenance costs
- higher application development quality
- continuity of technical progress
- greater application longevity

Tuxedo was the first middleware product in the strict sense. It was developed in 1984 at Bell Labs, then part of AT&T, but for a long time remained a laboratory product. Only after BEA acquired Tuxedo in 1995 did it grow into the de facto standard in transactional middleware.

TUXEDO is a powerful tool for developing and managing three-tier client/server mission-critical applications in distributed computing environments such as the enterprise and the Internet. It provides distributed transaction processing and inter-application communication, along with a full set of services for building, running, and managing mission-critical applications. Developers can use it to build interoperable applications spanning multiple hardware platforms, databases, and operating systems.

II. The TUXEDO component software model
TUXEDO uses a three-tier component software model.

Figure 1: Overview of the BEA TUXEDO component software model

Introduction to Middleware Technology

Middleware is a major category of foundational software and belongs to the realm of reusable software. As the name implies, it sits between the operating system software and the user's application software: above the operating system, network, and database, and below the applications. Its overall role is to provide a runtime and development environment for the applications above it, helping users develop and integrate complex application software flexibly and efficiently.

Among the many definitions of middleware, the most widely accepted is IDC's: middleware is an independent system-software or service layer through which distributed applications share resources across different technologies; it sits above the operating systems of clients and servers and manages computing resources and network communication.

IDC's definition makes clear that middleware is a class of software, not a single piece of software; that middleware does not merely interconnect, but also makes applications interoperable; and that middleware is software for distributed processing, its most prominent feature being network communication.

Zhong Cuihao, a researcher at the Institute of Software, Chinese Academy of Sciences, vividly defines middleware as "platform + communication". This definition restricts the term to software used in distributed systems, and also distinguishes middleware from supporting software and utility software.

Middleware is now developing rapidly and ranks alongside operating systems and databases as one of the three pillars of foundational software. It falls mainly into the following categories:

1. Communication (message) middleware
This class of middleware communicates across different platforms, providing reliable, efficient, real-time cross-platform data transfer in distributed systems (e.g. TongLINK, BEA eLink, IBM's MQSeries). It is the one indispensable kind of middleware, and the category with the largest sales.

2. Transaction middleware
Distributed transaction-processing systems must handle enormous numbers of transactions, often tens of thousands at once. In Beijing, for example, fleets of delivery vehicles must carry out their daily transport while being monitored continuously, with measures to clear faults and dispatching to relieve congestion. In an online transaction processing (OLTP) system, each transaction is often completed by programs on several servers coordinating in sequence. If a failure occurs midway, the system must not only recover but also fail over automatically so that it never stops, achieving highly reliable operation. At the same time, large numbers of transactions must run concurrently in real time across multiple application servers with load-balanced scheduling, matching the capabilities of expensive fault-tolerant machines and mainframe systems. Achieving this requires system-wide monitoring and scheduling. BEA's Tuxedo became famous for exactly this, making BEA the fastest-growing vendor. According to the X/Open reference model, a transaction-processing platform consists of transaction-processing middleware, communication middleware, and data-access middleware; TongTech's TongLINK and TongEASY implement this reference model.

3. Data-access middleware
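As an illustration of the message-passing role described under item 1, here is a toy in-process publish/subscribe broker. This is a sketch only: real message middleware such as MQSeries adds durable queues, delivery guarantees, and cross-network transport, none of which is modelled here.

```python
"""Toy message-oriented middleware: topics, publishers, subscribers."""
from collections import defaultdict

class MessageBroker:
    """Delivers each published message to all subscribers of its topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # A real broker would queue the message durably before delivery.
        for handler in self._subscribers[topic]:
            handler(message)

broker = MessageBroker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", "order #42 created")
print(received)  # ['order #42 created']
```

The value of the middleware layer is that publisher and subscriber share only the topic name, not a platform, language, or location, which is exactly the decoupling the article attributes to this category.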

Overall Requirements for Middleware Application Deployment

1. Overall middleware application deployment requirements
The deployment requirements below apply mainly to WEB and J2EE applications deployed on web servers and Java middleware.
1.1 Content requirements
a) Describe the overall hardware architecture and provide a system architecture and network diagram; this part may be provided in the host-integration section.
b) Describe the application software architecture, provide an application architecture diagram, and describe the system data flow, control flow, and external interfaces.

2. User requirements for middleware application deployment
2.1 Content requirements
a) Plan the users and groups for installing the middleware software and the application system sensibly.
b) Applications must be installed and deployed under newly created users and groups; root must not be used for installation.
c) For ordinary applications, the middleware software and the application may be deployed under the same user.
d) Where the same application of one system runs on several hosts, the UID and GID of every newly created application user must be identical on all hosts.
2.2 Examples

3. Directory requirements for middleware application deployment
3.1 Content requirements

a) Plan the installation directories for the middleware software and the application system sensibly.
b) Applications must be deployed on an independent file system, created under rootvg.
c) Where the same application of one system runs on several hosts, the directory layout must be identical on all hosts.
d) The middleware installation directory, the domain directory, and the application-release directory must be deployed separately.
3.2 Examples
- WebLogic application directory deployment example
Front-end deployment directory for the web-hall application:

4. Middleware software and version requirements
4.1 Content requirements
a) Describe the middleware software and version used, including whether it is 32- or 64-bit.
b) Describe the JDK version used; per the middleware's installation requirements, choose the latest stable JDK that satisfies them.
4.2 Examples
5. Middleware host parameters and system package requirements
5.1 Content requirements
a) The operating-system patches required for each OS platform.
b) The kernel parameters that must be adjusted on each OS platform to ensure the middleware installs and runs correctly.
5.2 Examples
- WebLogic (AIX platform)
Operating-system patch requirements:
Operating-system parameter requirements:

The Current State and Development of Middleware Technology — Fu Chun

About the author: Fu Chun (b. 1980), female, from Huaihua, Hunan; lecturer in the Department of Electronic Information Engineering, Changsha Social Work College; research interest: middleware technology.

The Current State and Development of Middleware Technology
FU Chun
(Changsha Social Work College, Changsha, Hunan 410004, China)

Abstract: Middleware technology has attracted wide attention as a way of solving the problems of distributed environments. This paper describes the current state of middleware technology in terms of its concept, classification, characteristics, advantages, and applications, and proposes trends for its future development.
Keywords: middleware; distributed environment
CLC number: TP3-05  Document code: A  Article ID: 1672-7800(2009)09-0007-02

0 Introduction
Computer technology is developing rapidly, and many applications run on heterogeneous platforms in networked environments, all of which places new demands on the next generation of software development. In such distributed heterogeneous environments there are usually multiple hardware platforms (PCs, workstations, minicomputers, and so on), on which run a wide variety of system software (different operating systems, databases, language compilers, and so on), as well as user interfaces of many styles; these platforms may also be connected by different network protocols and architectures. Integrating these systems and developing new applications on top of them is a very real and difficult problem. To solve the problem of distributed heterogeneity, the concept of middleware was proposed.

1 The concept of middleware
Middleware is component-based software that passes information between two or more pieces of software (typically between applications and the operating system, network operating system, or database management system); it lets users select and connect existing services with a scripting language, serving as a development tool for producing simple programs. Middleware touches every area of software and provides common application programming interfaces. It sits above the operating system, network, and database, and below the application software; its overall role is to provide a runtime and development environment for the applications above it, helping users develop and integrate complex applications flexibly and efficiently. Middleware is an independent system-software or service layer through which distributed applications share resources across different technologies; it sits above the client/server operating systems and manages computing resources and network communication.

2 Classification of middleware
As computer software technology has developed, middleware technology has matured, and middleware products of different levels and types have appeared. Broadly, middleware falls into three categories: (1) data middleware, for storing, using, and adding value to data, used to build data-centric applications; (2) processing middleware, which connects applications or processes distributed across network nodes into a unified distributed application; (3) distributed-component middleware, which supports component-based applications, the direction of future applications and currently fiercely contested.

By purpose and implementation mechanism, however, middleware can be divided into five kinds: database middleware, remote procedure call (RPC) middleware, message-oriented middleware, object request broker (ORB) middleware, and transaction-processing middleware. They provide various forms of communication services upwards, including synchronous, queued, publish/subscribe, and broadcast. On top of these basic kinds, frameworks can be built that provide domain-specific services to applications, such as transaction-processing monitors, distributed data access, and object transaction managers.

3 Characteristics and advantages of middleware
Middleware is software that sits between concrete applications and the underlying system (operating system, network protocol stack, hardware, and so on). Its role in the software stack is to connect applications to the underlying software and hardware infrastructure, coordinate the connection and interoperation of the application's parts, and enable system developers to implement and simplify the integration of service components based on different technologies. Using middleware in application-system development has the following characteristics: (1) it meets the needs of a large number of applications; (2) it runs on many hardware and OS platforms; (3) it supports distributed computing, providing transparent interaction of applications or services across networks, hardware, and OS platforms; (4) it supports standard protocols; (5) it supports standard interfaces.

The past decade and more has been a period of rapid development for middleware technology, which is now applied throughout the IT industry and has greatly eased some of the inherent complexity of developing, running, and managing distributed applications. Middleware has become indispensable infrastructure for distributed software systems and, together with operating systems and database systems, forms one of the three pillars of foundational software.

The purpose of middleware is to support the effective development, deployment, operation, and management of networked applications. Middleware is an independent system-software or service layer through which distributed applications share resources across different technologies.

Software Guide (软件导刊), Vol. 8 No. 9, September 2009

