DISTRIBUTED SYSTEMS PRINCIPLES AND PARADIGMS

PROBLEM SOLUTIONS

ANDREW S. TANENBAUM

MAARTEN VAN STEEN

Vrije Universiteit

Amsterdam, The Netherlands

PRENTICE HALL

UPPER SADDLE RIVER, NJ 07458

SOLUTIONS TO CHAPTER 1 PROBLEMS

1. Q: What is the role of middleware in a distributed system?

A: To enhance the distribution transparency that is missing in network operating systems. In other words, middleware aims at improving the single-system view that a distributed system should have.

2. Q: Explain what is meant by (distribution) transparency, and give examples of different types of transparency.

A: Distribution transparency is the phenomenon by which distribution aspects in a system are hidden from users and applications. Examples include access transparency, location transparency, migration transparency, relocation transparency, replication transparency, concurrency transparency, failure transparency, and persistence transparency.

3. Q: Why is it sometimes so hard to hide the occurrence and recovery from failures in a distributed system?

A: It is generally impossible to detect whether a server is actually down or whether it is simply slow in responding. Consequently, a system may have to report that a service is not available, although, in fact, the server is just slow.

4. Q: Why is it not always a good idea to aim at implementing the highest degree of transparency possible?

A: Aiming at the highest degree of transparency may lead to a considerable loss of performance that users are not willing to accept.

5. Q: What is an open distributed system and what benefits does openness provide?

A: An open distributed system offers services according to clearly defined rules. An open system is capable of easily interoperating with other open systems but also allows applications to be easily ported between different implementations of the same system.

6. Q: Describe precisely what is meant by a scalable system.

A: A system is scalable with respect to either its number of components, geographical size, or number and size of administrative domains, if it can grow in one or more of these dimensions without an unacceptable loss of performance.

7. Q: Scalability can be achieved by applying different techniques. What are these techniques?

A: Scaling can be achieved through distribution, replication, and caching.


8. Q: What is the difference between a multiprocessor and a multicomputer?

A: In a multiprocessor, the CPUs have access to a shared main memory. There is no shared memory in multicomputer systems. In a multicomputer system, the CPUs can communicate only through message passing.

9. Q: A multicomputer with 256 CPUs is organized as a 16 × 16 grid. What is the worst-case delay (in hops) that a message might have to take?

A: Assuming that routing is optimal, the longest optimal route is from one corner of the grid to the opposite corner. The length of this route is 30 hops (15 hops along each dimension). If the end processors in a single row or column are connected to each other, the length becomes 15.

10. Q: Now consider a 256-CPU hypercube. What is the worst-case delay here, again in hops?

A: With a 256-CPU hypercube, each node has a binary address, from 00000000 to 11111111. A hop from one machine to another always involves changing a single bit in the address. Thus from 00000000 to 00000001 is one hop. From there to 00000011 is another hop. In all, eight hops are needed.
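As a small illustration added here (not part of the original solution), the hop count between two hypercube nodes is simply the number of bit positions in which their addresses differ; the following C fragment computes it for the worst case 00000000 to 11111111:

#include <stdio.h>

/* Number of hops between two hypercube nodes = number of differing address bits. */
static int hops(unsigned char from, unsigned char to) {
    unsigned char diff = from ^ to;   /* bits that must be flipped */
    int count = 0;
    while (diff != 0) {
        count += diff & 1;
        diff >>= 1;
    }
    return count;
}

int main(void) {
    printf("worst-case hops: %d\n", hops(0x00, 0xFF));   /* prints 8 */
    return 0;
}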

11. Q: What is the difference between a distributed operating system and a network operating system?

A: A distributed operating system manages multiprocessors and homogeneous multicomputers. A network operating system connects different independent computers that each have their own operating system so that users can easily use the services available on each computer.

12. Q: Explain how microkernels can be used to organize an operating system in a client-server fashion.

A: A microkernel can separate client applications from operating system services by enforcing each request to pass through the kernel. As a consequence, operating system services can be implemented by (perhaps different) user-level servers that run as ordinary processes. If the microkernel has networking capabilities, there is also no principal objection in placing those servers on remote machines (which run the same microkernel).

13. Q: Explain the principal operation of a page-based distributed shared memory system.

A: Page-based DSM makes use of the virtual memory capabilities of an operating system. Whenever an application addresses a memory location that is currently not mapped into the current physical memory, a page fault occurs, giving the operating system control. The operating system can then locate the referred page, transfer its content over the network, and map it to physical memory. At that point, the application can continue.

14. Q: What is the reason for developing distributed shared memory systems? What do you see as the main problem hindering efficient implementations?

A: The main reason is that writing parallel and distributed programs based on message-passing primitives is much harder than being able to use shared memory for communication. Efficiency of DSM systems is hindered by the fact that, no matter what you do, page transfers across the network need to take place. If pages are shared by different processors, it is quite easy to get into a state similar to thrashing in virtual memory systems. In the end, DSM systems can never be faster than message-passing solutions, and will generally be slower due to the overhead incurred by keeping track of where pages are.

15. Q: Explain what false sharing is in distributed shared memory systems. What possible solutions do you see?

A: False sharing happens when data belonging to two different and independent processes (possibly on different machines) are mapped onto the same logical page. The effect is that the page is swapped between the two processes, leading to an implicit and unnecessary dependency. Solutions include making pages smaller or prohibiting independent processes from sharing a page.

16. Q: An experimental file server is up 3/4 of the time and down 1/4 of the time, due to bugs. How many times does this file server have to be replicated to give an availability of at least 99%?

A: With k being the number of servers, we have that (1/4)^k < 0.01, expressing that the worst situation, when all servers are down, should happen at most 1/100 of the time. This gives us k = 4.
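As a quick check (an addition of mine, not part of the book's answer), a few lines of C find the smallest k with (1/4)^k < 0.01:

#include <stdio.h>

int main(void) {
    double p_down = 0.25;    /* probability that one server is down */
    double all_down = 1.0;   /* probability that all k servers are down */
    int k = 0;
    while (all_down >= 0.01) {
        all_down *= p_down;
        k++;
    }
    printf("k = %d\n", k);   /* prints 4, since (1/4)^4 = 1/256 < 1/100 */
    return 0;
}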

17. Q: What is a three-tiered client-server architecture?

A: A three-tiered client-server architecture consists of three logical layers, where each layer is, in principle, implemented at a separate machine. The highest layer consists of a client user interface, the middle layer contains the actual application, and the lowest layer implements the data that are being used.

18. Q: What is the difference between a vertical distribution and a horizontal distribution?

A: Vertical distribution refers to the distribution of the different layers in a multitiered architecture across multiple machines. In principle, each layer is implemented on a different machine. Horizontal distribution deals with the distribution of a single layer across multiple machines, such as distributing a single database.

19. Q: Consider a chain of processes P1, P2, ..., Pn implementing a multitiered client-server architecture. Process Pi is a client of process Pi+1, and Pi will return a reply to Pi-1 only after receiving a reply from Pi+1. What are the main problems with this organization when taking a look at the request-reply performance at process P1?

A: Performance can be expected to be bad for large n. The problem is that each communication between two successive layers is, in principle, between two different machines. Consequently, the performance between P1 and P2 may also be determined by n - 2 request-reply interactions between the other layers. Another problem is that if one machine in the chain performs badly or is even temporarily unreachable, then this will immediately degrade the performance at the highest level.

SOLUTIONS TO CHAPTER 2 PROBLEMS

1. Q: In many layered protocols, each layer has its own header. Surely it would be more efficient to have a single header at the front of each message with all the control in it than all these separate headers. Why is this not done?

A: Each layer must be independent of the other ones. The data passed from layer k+1 down to layer k contains both header and data, but layer k cannot tell which is which. Having a single big header that all the layers could read and write would destroy this transparency and make changes in the protocol of one layer visible to other layers. This is undesirable.

2. Q: Why are transport-level communication services often inappropriate for building distributed applications?

A: They hardly offer distribution transparency, meaning that application developers are required to pay significant attention to implementing communication, often leading to proprietary solutions. The effect is that distributed applications, for example, built directly on top of sockets, are difficult to port and to interoperate with other applications.

3. Q: A reliable multicast service allows a sender to reliably pass messages to a collection of receivers. Does such a service belong to a middleware layer, or should it be part of a lower-level layer?

A: In principle, a reliable multicast service could easily be part of the transport layer, or even the network layer. As an example, the unreliable IP multicasting service is implemented in the network layer. However, because such services are currently not readily available, they are generally implemented using transport-level services, which automatically places them in the middleware. However, when taking scalability into account, it turns out that reliability can be guaranteed only if application requirements are considered. This is a strong argument for implementing such services at higher, less general layers.

4. Q: Consider a procedure incr with two integer parameters. The procedure adds one to each parameter. Now suppose that it is called with the same variable twice, for example, as incr(i, i). If i is initially 0, what value will it have afterward if call-by-reference is used? How about if copy/restore is used?

A: If call by reference is used, a pointer to i is passed to incr. It will be incremented two times, so the final result will be two. However, with copy/restore, i will be passed by value twice, each value initially 0. Both will be incremented, so both will now be 1. Now both will be copied back, with the second copy overwriting the first one. The final value will be 1, not 2.
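The following C sketch (my own addition, not from the solutions) makes the two passing modes concrete; copy/restore is simulated by hand inside incr_copy_restore:

#include <stdio.h>

/* Call by reference: both parameters alias the same variable when called as incr(&i, &i). */
void incr_by_reference(int *a, int *b) {
    *a = *a + 1;
    *b = *b + 1;
}

/* Copy/restore simulated by hand: copy the values in, increment the copies, copy them back. */
void incr_copy_restore(int *a, int *b) {
    int ca = *a, cb = *b;   /* copy-in */
    ca = ca + 1;
    cb = cb + 1;
    *a = ca;                /* copy-back */
    *b = cb;                /* the second copy overwrites the first when a == b */
}

int main(void) {
    int i = 0;
    incr_by_reference(&i, &i);
    printf("call by reference: %d\n", i);   /* prints 2 */

    i = 0;
    incr_copy_restore(&i, &i);
    printf("copy/restore: %d\n", i);        /* prints 1 */
    return 0;
}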


5. Q: C has a construction called a union, in which a field of a record (called a struct in C) can hold any one of several alternatives. At run time, there is no sure-fire way to tell which one is in there. Does this feature of C have any implications for remote procedure call? Explain your answer.

A: If the runtime system cannot tell what type value is in the field, it cannot marshal it correctly. Thus unions cannot be tolerated in an RPC system unless there is a tag field that unambiguously tells what the variant field holds. The tag field must not be under user control.
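To illustrate the last remark, here is a hedged C sketch (mine, not the book's) of a discriminated union: the tag field tells the marshaling code which variant is valid, so a stub can transmit it unambiguously:

#include <stdio.h>

enum value_tag { TAG_INT, TAG_DOUBLE, TAG_STRING };   /* set by the system, not the user */

struct tagged_value {
    enum value_tag tag;
    union {
        int    i;
        double d;
        char   s[32];
    } u;
};

/* A stub can marshal the union correctly by switching on the tag. */
static void marshal(const struct tagged_value *v) {
    switch (v->tag) {
    case TAG_INT:    printf("int: %d\n", v->u.i);    break;
    case TAG_DOUBLE: printf("double: %g\n", v->u.d); break;
    case TAG_STRING: printf("string: %s\n", v->u.s); break;
    }
}

int main(void) {
    struct tagged_value v = { TAG_INT, { .i = 42 } };
    marshal(&v);
    return 0;
}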

6. Q: One way to handle parameter conversion in RPC systems is to have each machine send parameters in its native representation, with the other one doing the translation, if need be. The native system could be indicated by a code in the first byte. However, since locating the first byte in the first word is precisely the problem, can this actually work?

A: First of all, when one computer sends byte 0, it always arrives in byte 0. Thus the destination computer can simply access byte 0 (using a byte instruction) and the code will be in it. It does not matter whether this is the low-order byte or the high-order byte. An alternative scheme is to put the code in all the bytes of the first word. Then no matter which byte is examined, the code will be there.

7. Q: Assume a client calls an asynchronous RPC to a server, and subsequently waits until the server returns a result using another asynchronous RPC. Is this approach the same as letting the client execute a normal RPC? What if we replace the asynchronous RPCs with asynchronous RPCs?

A: No, this is not the same. An asynchronous RPC returns an acknowledgement to the caller, meaning that after the first call by the client, an additional message is sent across the network. Likewise, the server is acknowledged that its response has been delivered to the client. Two asynchronous RPCs may be the same, provided reliable communication is guaranteed. This is generally not the case.

8. Q: Instead of letting a server register itself with a daemon as is done in DCE, we could also choose to always assign it the same endpoint. That endpoint can then be used in references to objects in the server's address space. What is the main drawback of this scheme?

A: The main drawback is that it becomes much harder to dynamically allocate objects to servers. In addition, many endpoints need to be fixed, instead of just one (i.e., the one for the daemon). For machines possibly having a large number of servers, static assignment of endpoints is not a good idea.

9. Q: Give an example implementation of an object reference that allows a client to bind to a transient remote object.

A: Using Java, we can express such an implementation as the following class:

import java.net.InetAddress;      // imports added here for completeness
import java.net.URL;

public class Object_reference {
    InetAddress server_address;   // network address of object's server
    int server_endpoint;          // endpoint to which server is listening
    int object_identifier;        // identifier for this object
    URL client_code;              // (remote) file containing client-side stub
    byte[] init_data;             // possible additional initialization data
}

The object reference should at least contain the transport-level address of the server where the object resides. We also need an object identifier as the server may contain several objects. In our implementation, we use a URL to refer to a (remote) file containing all the necessary client-side code. A generic array of bytes is used to contain further initialization data for that code. An alternative implementation would have been to directly put the client code into the reference instead of a URL. This approach is followed, for example, in Java RMI where proxies are passed as reference.

10. Q: Java and other languages support exceptions, which are raised when an error occurs. How would you implement exceptions in RPCs and RMIs?

A: Because exceptions are initially raised at the server side, the server stub can do nothing else but catch the exception and marshal it as a special error response back to the client. The client stub, on the other hand, will have to unmarshal the message and raise the same exception if it wants to keep access to the server transparent. Consequently, exceptions now also need to be described in an interface definition language.

11. Q: Would it be useful to also make a distinction between static and dynamic RPCs?

A: Yes, for the same reason it is useful with remote object invocations: it simply introduces more flexibility. The drawback, however, is that much of the distribution transparency is lost for which RPCs were introduced in the first place.

12. Q: Some implementations of distributed-object middleware systems are entirely based on dynamic method invocations. Even static invocations are compiled to dynamic ones. What is the benefit of this approach?

A: Realizing that an implementation of dynamic invocations can handle all invocations, static ones become just a special case. The advantage is that only a single mechanism needs to be implemented. A possible disadvantage is that performance is not always as optimal as it could be had we analyzed the static invocation.


13. Q: Describe how connectionless communication between a client and a server proceeds when using sockets.

A: Both the client and the server create a socket, but only the server binds the socket to a local endpoint. The server can then subsequently do a blocking read call in which it waits for incoming data from any client. Likewise, after creating the socket, the client simply does a blocking call to write data to the server. There is no need to close a connection.
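A minimal server-side sketch using UDP sockets (my own illustration; error handling is omitted and the port number is arbitrary):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);           /* connectionless (UDP) socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                      /* example port */
    bind(s, (struct sockaddr *) &addr, sizeof(addr)); /* only the server binds */

    char buf[1024];
    struct sockaddr_in client;
    socklen_t len = sizeof(client);
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0,      /* blocking read from any client */
                         (struct sockaddr *) &client, &len);
    if (n > 0)
        sendto(s, buf, (size_t) n, 0,                 /* send a reply; nothing to close afterwards */
               (struct sockaddr *) &client, len);
    close(s);
    return 0;
}

The client side would simply create its own SOCK_DGRAM socket and call sendto() with the server's address, followed by recvfrom() for the reply.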

14. Q: Explain the difference between the primitives MPI_bsend and MPI_isend in MPI.

A: The primitive MPI_bsend uses buffered communication by which the caller passes an entire buffer containing the messages to be sent, to the local MPI runtime system. When the call completes, the messages have either been transferred, or copied to a local buffer. In contrast, with MPI_isend, the caller passes only a pointer to the message to the local MPI runtime system, after which it immediately continues. The caller is responsible for not overwriting the message that is pointed to until it has been copied or transferred.
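A hedged sketch using the standard MPI C bindings (MPI_Bsend and MPI_Isend are the official spellings; the buffer size and tags are arbitrary):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 42;
    if (rank == 0) {
        /* Buffered send: the message is copied into an attached buffer (or
           transferred) before the call returns, so msg may be reused at once. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);

        /* Nonblocking send: only a pointer is handed to MPI; msg must not be
           overwritten until MPI_Wait reports completion. */
        MPI_Request req;
        MPI_Isend(&msg, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&msg, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}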

15. Q: Suppose you could make use of only transient asynchronous communication primitives, including only an asynchronous receive primitive. How would you implement primitives for transient synchronous communication?

A: Consider a synchronous send primitive. A simple implementation is to send a message to the server using asynchronous communication, and subsequently let the caller continuously poll for an incoming acknowledgement or response from the server. If we assume that the local operating system stores incoming messages into a local buffer, then an alternative implementation is to block the caller until it receives a signal from the operating system that a message has arrived, after which the caller does an asynchronous receive.

16. Q: Now suppose you could make use of only transient synchronous communication primitives. How would you implement primitives for transient asynchronous communication?

A: This situation is actually simpler. An asynchronous send is implemented by having a caller append its message to a buffer that is shared with a process that handles the actual message transfer. Each time a client appends a message to the buffer, it wakes up the send process, which subsequently removes the message from the buffer and sends it to its destination using a blocking call to the original send primitive. The receiver is implemented similarly by offering a buffer that can be checked for incoming messages by an application.

17. Q: Does it make sense to implement persistent asynchronous communication by means of RPCs?

A: Yes, but only on a hop-to-hop basis in which a process managing a queue passes a message to a next queue manager by means of an RPC. Effectively, the service offered by a queue manager to another is the storage of a message. The calling queue manager is offered a proxy implementation of the interface to the remote queue, possibly receiving a status indicating the success or failure of each operation. In this way, even queue managers see only queues and no further communication.

18. Q: In the text we stated that in order to automatically start a process to fetch messages from an input queue, a daemon is often used that monitors the input queue. Give an alternative implementation that does not make use of a daemon.

A: A simple scheme is to let a process on the receiver side check for any incoming messages each time that process puts a message in its own queue.

19. Q: Routing tables in IBM MQSeries, and in many other message-queuing systems, are configured manually. Describe a simple way to do this automatically.

A: The simplest implementation is to have a centralized component in which the topology of the queuing network is maintained. That component simply calculates all best routes between pairs of queue managers using a known routing algorithm, and subsequently generates routing tables for each queue manager. These tables can be downloaded by each manager separately. This approach works in queuing networks where there are only relatively few, but possibly widely dispersed, queue managers.

A more sophisticated approach is to decentralize the routing algorithm, by having each queue manager discover the network topology, and calculate its own best routes to other managers. Such solutions are widely applied in computer networks. There is no principal objection to applying them to message-queuing networks.

20. Q: How would you incorporate persistent asynchronous communication into a model of communication based on RMIs to remote objects?

A: An RMI should be asynchronous, that is, no immediate results are expected at invocation time. Moreover, an RMI should be stored at a special server that will forward it to the object as soon as the latter is up and running in an object server.

21. Q: With persistent communication, a receiver generally has its own local buffer where messages can be stored when the receiver is not executing. To create such a buffer, we may need to specify its size. Give an argument why this is preferable, as well as one against specification of the size.

A: Having the user specify the size makes its implementation easier. The system creates a buffer of the specified size and is done. Buffer management becomes easy. However, if the buffer fills up, messages may be lost. The alternative is to have the communication system manage buffer size, starting with some default size, but then growing (or shrinking) buffers as need be. This method reduces the chance of having to discard messages for lack of room, but requires much more work of the system.

22. Q: Explain why transient synchronous communication has inherent scalability problems, and how these could be solved.

A: The problem is the limited geographical scalability. Because synchronous communication requires that the caller is blocked until its message is received, it may take a long time before a caller can continue when the receiver is far away. The only way to solve this problem is to design the calling application so that it has other useful work to do while communication takes place, effectively establishing a form of asynchronous communication.

23. Q: Give an example where multicasting is also useful for discrete data streams.

A: Passing a large file to many users as is the case, for example, when updating mirror sites for Web services or software distributions.

24. Q: How could you guarantee a maximum end-to-end delay when a collection of computers is organized in a (logical or physical) ring?

A: We let a token circulate the ring. Each computer is permitted to send data across the ring (in the same direction as the token) only when holding the token. Moreover, no computer is allowed to hold the token for more than T seconds. Effectively, if we assume that communication between two adjacent computers is bounded, then the token will have a maximum circulation time, which corresponds to a maximum end-to-end delay for each packet sent.

25. Q: How could you guarantee a minimum end-to-end delay when a collection of computers is organized in a (logical or physical) ring?

A: Strangely enough, this is much harder than guaranteeing a maximum delay. The problem is that the receiving computer should, in principle, not receive data before some elapsed time. The only solution is to buffer packets as long as necessary. Buffering can take place either at the sender, the receiver, or somewhere in between, for example, at intermediate stations. The best place to temporarily buffer data is at the receiver, because at that point there are no more unforeseen obstacles that may delay data delivery. The receiver need merely remove data from its buffer and pass it to the application using a simple timing mechanism. The drawback is that enough buffering capacity needs to be provided.

26. Q: Imagine we have a token bucket specification where the maximum data unit size is 1000 bytes, the token bucket rate is 10 million bytes/sec, the token bucket size is 1 million bytes, and the maximum transmission rate is 50 million bytes/sec. How long can a burst of maximum speed last?

A: Call the length of the maximum burst interval Δt. In an extreme case, the bucket is full at the start of the interval (1 million bytes) and another 10Δt million bytes comes in during that interval. The output during the transmission burst consists of 50Δt million bytes, which should be equal to (1 + 10Δt). Consequently, Δt is equal to 25 msec.
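As a quick numerical check (added here, not in the original answer), the burst length follows from equating the bytes sent, M·Δt, with the bytes available, C + r·Δt:

#include <stdio.h>

int main(void) {
    double C = 1e6;     /* token bucket size, bytes */
    double r = 10e6;    /* token arrival rate, bytes/sec */
    double M = 50e6;    /* maximum transmission rate, bytes/sec */
    /* M*dt = C + r*dt  =>  dt = C / (M - r) */
    printf("maximum burst = %.3f sec\n", C / (M - r));   /* prints 0.025 */
    return 0;
}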


SOLUTIONS TO CHAPTER 3 PROBLEMS

1. Q: In this problem you are to compare reading a file using a single-threaded file server and a multithreaded server. It takes 15 msec to get a request for work, dispatch it, and do the rest of the necessary processing, assuming that the data needed are in a cache in main memory. If a disk operation is needed, as is the case one-third of the time, an additional 75 msec is required, during which time the thread sleeps. How many requests/sec can the server handle if it is single threaded? If it is multithreaded?

A: In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3 × 15 + 1/3 × 90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.
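A few lines of C reproduce the arithmetic (my addition, not part of the solution):

#include <stdio.h>

int main(void) {
    double hit = 15.0;                                      /* msec, data in cache */
    double miss = 15.0 + 75.0;                              /* msec, disk needed   */
    double mean = (2.0 / 3.0) * hit + (1.0 / 3.0) * miss;   /* 40 msec per request */
    printf("single-threaded: %.1f requests/sec\n", 1000.0 / mean);  /* 25.0 */
    printf("multithreaded:   %.1f requests/sec\n", 1000.0 / hit);   /* 66.7 */
    return 0;
}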

2. Q: Would it make sense to limit the number of threads in a server process?

A: Yes, for two reasons. First, threads require memory for setting up their own private stack. Consequently, having many threads may consume too much memory for the server to work properly. Another, more serious reason is that, to an operating system, independent threads tend to operate in a chaotic manner. In a virtual memory system it may be difficult to build a relatively stable working set, resulting in many page faults and thus I/O. Having many threads may thus lead to a performance degradation resulting from page thrashing.

3. Q: In the text, we described a multithreaded file server, showing why it is better than a single-threaded server and a finite-state machine server. Are there any circumstances in which a single-threaded server might be better? Give an example.

A: Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It may just add unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes, and can easily be kept in the server's memory to provide fast lookup.

4. Q: Statically associating only a single thread with a lightweight process is not such a good idea. Why not?

A: Such an association effectively reduces to having only kernel-level threads, implying that much of the performance gain of having threads in the first place is lost.

5. Q: Having only a single lightweight process per process is also not such a good idea. Why not?

A: In this scheme, we effectively have only user-level threads, meaning that any blocking system call will block the entire process.

6. Q: Describe a simple scheme in which there are as many lightweight processes as there are runnable threads.

A: Start with only a single LWP and let it select a runnable thread. When a runnable thread has been found, the LWP creates another LWP to look for a next thread to execute. If no runnable thread is found, the LWP destroys itself.

7. Q: Proxies can support replication transparency by invoking each replica, as explained in the text. Can (the server side of) an object be subject to a replicated invocation?

A: Yes: consider a replicated object A invoking another (nonreplicated) object B. If A consists of k replicas, an invocation of B will be done by each replica. However, B should normally be invoked only once. Special measures are needed to handle such replicated invocations.

8. Q: Constructing a concurrent server by spawning a process has some advantages and disadvantages compared to multithreaded servers. Mention a few.

A: An important advantage is that separate processes are protected against each other, which may prove to be necessary as in the case of a superserver handling completely independent services. On the other hand, process spawning is a relatively costly operation that can be saved when using multithreaded servers. Also, if processes do need to communicate, then using threads is much cheaper as in many cases we can avoid having the kernel implement the communication.

9. Q: Sketch the design of a multithreaded server that supports multiple protocols using sockets as its transport-level interface to the underlying operating system.

A: A relatively simple design is to have a single thread T waiting for incoming transport messages (TPDUs). If we assume the header of each TPDU contains a number identifying the higher-level protocol, the thread can take the payload and pass it to the module for that protocol. Each such module has a separate thread waiting for this payload, which it treats as an incoming request. After handling the request, a response message is passed to T, which, in turn, wraps it in a transport-level message and sends it to the proper destination.

10. Q: How can we prevent an application from circumventing a window manager, and thus being able to completely mess up a screen?

A: Use a microkernel approach by which the windowing system including the window manager are run in such a way that all window operations are required to go through the kernel. In effect, this is the essence of transferring the client-server model to a single computer as we also explained in Chap. 1.

11. Q: Explain what an object adapter is.

A: An object adapter is a generic program capable of accepting incoming invocation requests and passing these on to the server-side stub of an object. An adapter is primarily responsible for implementing an invocation policy, which determines if, how, and how many multiple threads are used to invoke an object.

12. Q: Mention some design issues for an object adapter that is to support persistent objects.

A: The most important issue is perhaps generating an object reference that can be used independently of the current server and adapter. Such a reference should be able to be passed on to a new server and to perhaps indicate a specific activation policy for the referred object. Other issues include exactly when and how a persistent object is written to disk, and to what extent the object's state in main memory may differ from the state kept on disk.

13. Q: Change the procedure thread_per_object in the example of the object adapters, such that all objects under control of the adapter are handled by a single thread.

A: The code stays almost the same, except that we need to create only a single thread that runs a modified version of thread_per_object. This thread is referred to as adapter_thread (its creation is not shown). When the demultiplexer calls the adapter, it puts a message in the buffer of adapter_thread, after which the latter picks it up as before. The adapter thread invokes the appropriate stub, and handles the response message.

#include <stdlib.h>                  /* malloc, free (the original header names were lost in this copy) */
#include <string.h>                  /* memcpy */
/* THREAD, METHOD_CALL, message, TRUE, get_msg and put_msg are assumed to
   come from the book's own header file. */

#define MAX_OBJECTS 100
#ifndef NULL
#define NULL 0
#endif
#define ANY -1

METHOD_CALL invoke[MAX_OBJECTS];     /* array of pointers to stubs */

THREAD *root;                        /* demultiplexer thread */

THREAD *adapter_thread;              /* thread that runs single_thread */

void single_thread(long object_id) {
    message *req, *res;              /* request/response message */
    unsigned size;                   /* size of messages */
    char **results;                  /* array with all results */

    while (TRUE) {
        get_msg(&size, (char *) &req);            /* block for invocation request */

        /* Pass request to the appropriate stub. The stub is assumed to */
        /* allocate memory for storing the results.                     */
        (*invoke[req->object_id])(req->size, req->data, &size, results);

        res = malloc(sizeof(message) + size);     /* create response message */
        res->object_id = object_id;               /* identify object */
        res->method_id = req->method_id;          /* identify method */
        res->size = size;                         /* set size of invocation results */
        memcpy(res->data, results, size);         /* copy results into response */
        put_msg(root, sizeof(res), res);          /* append response to buffer */
        free(req);                                /* free memory of request */
        free(*results);                           /* free memory of results */
    }
}

void invoke_adapter(long oid, message *request) {
    put_msg(adapter_thread, sizeof(request), request);
}

14. Q: Is a server that maintains a TCP/IP connection to a client stateful or stateless?

A: Assuming the server maintains no other information on that client, one could justifiably argue that the server is stateless. The issue is that it is not the server, but the transport layer at the server, that maintains state on the client. What the local operating system keeps track of is, in principle, of no concern to the server.

15. Q: Imagine a Web server that maintains a table in which client IP addresses are mapped to the most recently accessed Web pages. When a client connects to the server, the server looks up the client in its table, and if found, returns the registered page. Is this server stateful or stateless?

A: It can be strongly argued that this is a stateless server. The important issue with stateless designs is not whether any information is maintained by the server on its clients, but rather how accurate that information has to be. In this example, if the table is lost for whatever reason, the client and server can still properly interact as if nothing happened. In a stateful design, such an interaction would be possible only after the server had recovered from a possible fault.

16. Q: To what extent does Java RMI rely on code migration?

A: Considering that object references are actually portable proxies, each time an object reference is passed, we are actually migrating code across the network. Fortunately, proxies have no execution state, so that support for simple weak mobility is all that is needed.

17. Q: Strong mobility in UNIX systems could be supported by allowing a process to fork a child on a remote machine. Explain how this would work.

A: Forking in UNIX means that a complete image of the parent is copied to the child, meaning that the child continues just after the call to fork. A similar approach could be used for remote cloning, provided the target platform is exactly the same as where the parent is executing. The first step is to have the target operating system reserve resources and create the appropriate process and memory map for the new child process. After this is done, the parent's image (in memory) can be copied, and the child can be activated. (It should be clear that we are ignoring several important details here.)

18. Q: In Fig. 3-13 it is suggested that strong mobility cannot be combined with executing migrated code in a target process. Give a counterexample.

A: If strong mobility takes place through thread migration, it should be possible to have a migrated thread be executed in the context of the target process.

19. Q: Consider a process P that requires access to file F which is locally available on the machine where P is currently running. When P moves to another machine, it still requires access to F. If the file-to-machine binding is fixed, how could the systemwide reference to F be implemented?

A: A simple solution is to create a separate process Q that handles remote requests for F. Process P is offered the same interface to F as before, for example in the form of a proxy. Effectively, process Q operates as a file server.

20. Q: Each agent in D'Agents is implemented by a separate process. Agents can communicate primarily through shared files and by means of message passing. Files cannot be transferred across machine boundaries. In terms of the mobility framework given in Sec. 3.4, which parts of an agent's state, as given in Fig. 3-19, comprise the resource segment?

A: The resource segment contains all references to local and global resources. As such, it consists of those variables that refer to other agents, local files, and so on. In D'Agents, these variables are primarily contained in the part consisting of global program variables. What makes matters simple is that virtually all resources in D'Agents are nontransferrable. Only agents can move between machines. Because agents are already named by global references, namely an (address, local-id) pair, transforming references to resources in the presence of migration is relatively simple in D'Agents.

21. Q: Compare the architecture of D'Agents with that of an agent platform in the FIPA model.

A: The main distinction between the two is that D'Agents does not really have a separate directory service. Instead, it offers only a low-level naming service by which agents can be globally referenced. The management component in the FIPA architecture corresponds to the server in D'Agents, whereas the ACC is implemented by the communication layer. The FIPA model provides no further details on the architecture of an agent, in contrast to D'Agents.

22. Q: Where do agent communication languages (ACLs) fit into the OSI model?

A: Such languages are part of the application layer.

23. Q: Where does an agent communication language fit into the OSI model, when it is implemented on top of a system for handling e-mail, such as in D'Agents? What is the benefit of such an approach?

A: It would still be part of the application layer. An important reason for implementing ACLs on top of e-mail is simplicity. A complete, worldwide communication infrastructure is available for handling asynchronous message passing between agents. Essentially, such an approach comes close to the message-queuing systems discussed in Chap. 2.

24. Q: Why is it often necessary to specify the ontology in an ACL message?

A: In this context, an ontology can best be interpreted as a reference to a standard interpretation of the actual data contained in an ACL message. Usually, data in message-passing systems is assumed to be correctly interpreted by the sender and receiver of a message. Agents are often considered to be highly independent from each other. Therefore, it cannot always be assumed that the receiving side will interpret the transferred data correctly. Of course, it is necessary that there is common agreement on the interpretation of the ontology field.


SOLUTIONS TO CHAPTER 4 PROBLEMS

1. Q: Give an example of where an address of an entity E needs to be further resolved into another address to actually access E.

A: IP addresses in the Internet are used to address hosts. However, to access a host, its IP address needs to be resolved to, for example, an Ethernet address.

2. Q: Would you consider a URL such as http://www.acme.org/index.html to be location independent? What about http://www.acme.nl/index.html?

A: Both names can be location independent, although the first one gives fewer hints on the location of the named entity. Location independent means that the name of the entity is independent of its address. By just considering a name, nothing can be said about the address of the associated entity.

3. Q: Give some examples of true identifiers.

A: Examples are ISBN numbers for books, identification numbers for software and hardware products, employee numbers within a single organization, and Ethernet addresses (although some addresses are used to identify a machine instead of just the Ethernet board).

4. Q: How is a mounting point looked up in most UNIX systems?

A: By means of a mount table that contains an entry pointing to the mount point. This means that when a mounting point is to be looked up, we need to go through the mount table to see which entry matches a given mount point.

5. Q: Jade is a distributed file system that uses per-user name spaces. In other words, each user has his own, private name space. Can names from such name spaces be used to share resources between two different users?

A: Yes, provided names in the per-user name spaces can be resolved to names in a shared, global name space. For example, two identical names in different name spaces are, in principle, completely independent and may refer to different entities. To share entities, it is necessary to refer to them by names from a shared name space. For example, Jade relies on DNS names and IP addresses that can be used to refer to shared entities such as FTP sites.

6. Q: Consider DNS. To refer to a node N in a subdomain implemented as a different zone than the current domain, a name server for that zone needs to be specified. Is it always necessary to include a resource record for that server's address, or is it sometimes sufficient to provide only its domain name?

A: When the name server is represented by a node NS in a domain other than the one in which N is contained, it is enough to give only its domain name. In that case, the name can be looked up by a separate DNS query. This is not possible when NS lies in the same subdomain as N, for in that case, you would need to contact the name server to find out its address.


7. Q: Is an identifier allowed to contain information on the entity it refers to?

A: Yes, but that information is not allowed to change, because that would imply changing the identifier. The old identifier should remain valid, so that changing it would imply that an entity has two identifiers, violating the second property of identifiers.

8. Q: Outline an efficient implementation of globally unique identifiers.

A: Such identifiers can be generated locally in the following way. Take the network address of the machine where the identifier is generated, append the local time to that address, along with a generated pseudo-random number. Although, in theory, it is possible that another machine in the world can generate the same number, chances that this happens are negligible.
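A hedged C sketch of such a generator (the format and names are my own; a real system would use more bits of randomness):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Combine the machine's name, the local time, the process id and a
   pseudo-random number; collisions are possible in theory but negligible
   in practice. */
int main(void) {
    char host[64];
    char id[160];

    gethostname(host, sizeof(host));
    srand((unsigned) time(NULL) ^ (unsigned) getpid());

    snprintf(id, sizeof(id), "%s-%ld-%d-%d",
             host, (long) time(NULL), (int) getpid(), rand());
    printf("unique id: %s\n", id);
    return 0;
}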

9. Q: Give an example of how the closure mechanism for a URL could work.

A: Assuming a process knows it is dealing with a URL, it first extracts the scheme identifier from the URL, such as the string ftp:. It can subsequently look up this string in a table to find an interface to a local implementation of the FTP protocol. The next part of the closure mechanism consists of extracting the host name from the URL, such as www.cs.vu.nl, and passing that to the local DNS name server. Knowing where and how to contact the DNS server is an important part of the closure mechanism. It is often hard-coded into the URL name resolver that the process is executing. Finally, the last part of the URL, which refers to a file that is to be looked up, is passed to the identified host. The latter uses its own local closure mechanism to start the name resolution of the file name.
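A rough C illustration (mine, not the book's) of the first closure steps, splitting a URL into the scheme, the host that is handed to DNS, and the path that the remote host resolves itself (the file path below is a made-up example):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *url = "ftp://www.cs.vu.nl/pub/example.txt";   /* example URL */
    char scheme[16], host[64], path[128];

    const char *sep = strstr(url, "://");
    if (sep == NULL)
        return 1;
    snprintf(scheme, sizeof(scheme), "%.*s", (int) (sep - url), url);

    const char *h = sep + 3;
    const char *slash = strchr(h, '/');
    if (slash == NULL)
        slash = h + strlen(h);                                /* no path part */
    snprintf(host, sizeof(host), "%.*s", (int) (slash - h), h);
    snprintf(path, sizeof(path), "%s", slash);

    /* The scheme selects the protocol implementation, the host goes to DNS,
       and the path is resolved by the remote host's own closure mechanism. */
    printf("scheme=%s host=%s path=%s\n", scheme, host, path);
    return 0;
}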

10. Q: Explain the difference between a hard link and a soft link in UNIX systems.

A: A hard link is a named entry in a directory file pointing to the same file descriptor as another named entry (in possibly a different directory). A symbolic link is a file containing the (character string) name of another file.

11. Q: High-level name servers in DNS, that is, name servers implementing nodes in the DNS name space that are close to the root, generally do not support recursive name resolution. Can we expect much performance improvement if they did?

A: Probably not: because the high-level name servers constitute the global layer of the DNS name space, it can be expected that changes to that part of the name space do not occur often. Consequently, caching will be highly effective, and much long-haul communication will be avoided anyway. Note that recursive name resolution for low-level name servers is important, because in that case, name resolution can be kept local at the lower-level domain in which the resolution is taking place.
