DELAY EMBEDDINGS FOR FORCED SYSTEMS: II. STOCHASTIC FORCING

J. Stark¹, D.S. Broomhead², M.E. Davies³ and J. Huke²

¹Centre for Nonlinear Dynamics and its Applications, University College London, Gower Street, London, WC1E 6BT.
²Department of Mathematics, University of Manchester Institute of Science and Technology, PO Box 88, Manchester, M60 1QD.
³Department of Electronic Engineering, Queen Mary, University of London, Mile End Road, London E1 4NS.

Monday, July 15, 2002

ABSTRACT

Takens’ Embedding Theorem forms the basis of virtually all approaches to the analysis of time series generated by nonlinear deterministic dynamical systems. It typically allows us to reconstruct an unknown dynamical system which gave rise to a given observed scalar time series simply by constructing a new state space out of successive values of the time series. This provides the theoretical foundation for many popular techniques, including those for the measurement of fractal dimensions and Liapunov exponents, for the prediction of future behaviour, for noise reduction and signal separation, and most recently for control and targeting. Current versions of Takens’ Theorem assume that the underlying system is autonomous (and noise free). Unfortunately this is not the case for many real systems. In a previous paper, one of us showed how to extend Takens’ Theorem to deterministically forced systems. Here, we use similar techniques to prove a number of delay embedding theorems for arbitrarily and stochastically forced systems. As a special case, we obtain embedding results for Iterated Function Systems, and we also briefly consider noisy observations.

1. INTRODUCTION

This paper continues the work begun in [Stark, 1999] where one of us developed techniques for proving extensions of Takens’ Embedding Theorem to forced dynamical systems. In that paper, these methods were applied to the case of forcing by a finite dimensional deterministic system. In many applications the assumption that the forcing is deterministic is not a reasonable one, and the aim of this paper is therefore to extend these results to far more general forcing processes.

Recall that Takens’ Embedding Theorem ([Takens, 1980]; see also [Aeyels, 1981] and [Sauer et al., 1991]) provides the theoretical foundation for the analysis of time series generated by nonlinear deterministic dynamical systems. Informally, it says that if we take a scalar observable φ of the state x of a deterministic dynamical system then typically we can reconstruct a copy of the original system by considering blocks (φ(x_t), φ(x_{t+τ}), φ(x_{t+2τ}), …, φ(x_{t+(d-1)τ})) of d successive observations of φ, for d sufficiently large. Here x_t is the state of the system at time t, and τ > 0 is some sampling interval. Since x_t will usually be unknown, whilst φ(x_t) is a quantity we can measure in practice, this result has stimulated a vast range of applications in fields ranging from fluid dynamics, through electrical engineering to biology, medicine and economics (for a good overview see e.g. [Ott et al., 1994]). One might even say that this one theorem has given rise to virtually a new branch of nonlinear dynamics, often informally called chaotic time series analysis.

However, for Takens’ Theorem to be valid, we need to assume both that the dynamics is deterministic (ie that there is some mapping f such that x_{t+τ} = f(x_t)), and that both the dynamics and the observations are autonomous (so that f and φ depend on x only). Note that by rescaling time, if necessary, we may assume that τ = 1, and hence restrict t to integer values. In [Stark, 1999] we extended Takens’ Theorem to the non-autonomous case where f is also a function of some other variable y_t, where y_t itself is generated by a deterministic system, so that y_{t+1} = g(y_t) for some g.

Here we turn to systems driven by far more general processes. It turns out that the same approach can encompass a number of different cases. In particular the theorems proved below apply to a wide class of stochastic dynamical systems (which we can think of as deterministic systems driven by some stochastic process), to input-output systems and to irregularly sampled systems. Input-output systems have already been considered in this context by [Casdagli, 1992], and our results here in essence prove his conjecture on reconstructing such systems.

Broadly speaking, our approach results in a reconstruction framework where both the dynamical systems and the delay reconstruction map are indexed by the realization of the forcing process. This leads to results similar to the Bundle Delay Embedding Theorem (Theorem 3.2) of [Stark, 1999], and indeed the proof of the main “Stochastic Takens’ Embedding Theorem” here (Theorem 2.3 below) closely parallels that of Theorem 3.2 of [Stark, 1999]. We also consider a number of variations of this main theorem, including the case of Iterated Function Systems (eg [Norman, 1968; Barnsley, 1988]) and of noisy observations. It is perhaps interesting to note that the latter, at least in the case where the forcing process takes on continuous values, is by far the easiest case to prove, and amounts to little more than the classical Whitney Embedding Theorem (eg [Hirsch, 1976]).

We assume familiarity with the standard Takens’ framework, and use the notation of [Stark, 1999]. Most of the transversality techniques employed here closely follow those developed in that paper, and we shall make use of a number of technical results and calculations derived there. We begin by describing a general framework for treating arbitrarily forced systems, and stating the principal theorems proved in this paper.

2. DELAY RECONSTRUCTION FOR ARBITRARY FORCING

A convenient formalism for incorporating arbitrary forcing into a dynamical system is that of random dynamical systems (eg [Kifer, 1988; Arnold, 1998]). This encompasses a wide class of noisy systems, as well as input-output systems in the terminology of [Casdagli, 1992], irregularly sampled systems and others. In the case of discrete time systems, which is the situation that we treat here, we assume that the forcing at time i ∈ Z is specified by a variable ω_i, drawn from some appropriate space N. In the case of noisy systems, ω_i is chosen at random, with respect to some probability measure μ on N. The state of the system at time i ∈ Z is denoted by x_i ∈ M and evolves according to

x_{i+1} = f(x_i, ω_i)    (2.1)

As is usual in the standard Takens’ framework, we assume that M is a smooth compact manifold. The case of M non-compact can be treated (eg [Takens, 1980; Huke, 1993]), but will not be considered further here. If we think of ω_i as a parameter, then we can interpret (2.1) as a standard dynamical system with noise or forcing on the parameters. It can also be helpful to write f_{ω_i}(x_i) instead of f(x_i, ω_i). This suggests the interpretation that instead of applying the same map f every time, we choose a different map f_{ω_i} at each time step. The case of a single deterministic system f subject to additive (dynamical) noise can be included in this formalism by setting f_{ω_i}(x_i) = f(x_i) + ω_i.

In the most general case (assuming M compact), N can be taken to be the space D^r(M) of all diffeomorphisms on M [Kifer, 1988]. At the other extreme, N may be a discrete space consisting of a finite number of points, so that f_{ω_i} is chosen from a finite set of maps. This leads to the well known example of so called iterated function systems (eg [Norman, 1968; Barnsley, 1988]). In this paper we shall choose N somewhere between these two extremes, and make the standing assumption that N is a compact manifold.

2.1. SHIFT SPACES

Standard dynamical systems theory cannot directly encompass non-autonomous systems of the form (2.1). The usual approach to non-autonomous systems is to enlarge the state space sufficiently to make the system autonomous. This is well known in the case of a periodically forced differential equation, which can be transformed into an autonomous system by the addition of a dummy variable to represent time. In the context of arbitrary forcing, this is most easily accomplished through the use of an appropriate shift space. Thus, let Σ = N^Z be the space of bi-infinite sequences ω = (…, ω_{-1}, ω_0, ω_1, …) of elements of N with the product topology. Since we assume that N is compact, the Tychonoff theorem implies that Σ is also compact. Let σ : Σ → Σ be the usual shift map

[σ(ω)]_i = ω_{i+1}

where ω_i ∈ N is the i-th component of ω ∈ Σ. Then the evolution of x_i given by (2.1) can be represented by the skew product T : M×Σ → M×Σ (see Figure 2.1) defined by

T(x,ω) = (f(x,ω_0), σ(ω))    (2.2)
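As an illustration of (2.1) and (2.2) (a sketch only; the circle map and the uniform forcing measure below are arbitrary choices, not taken from the paper), one realization ω of the forcing can be drawn in advance and the skew product then simply iterates x in its first factor while shifting the sequence in its second:

import numpy as np

rng = np.random.default_rng(0)

def f(x, w, eps=0.1):
    # A forced circle map on M = R/Z; the parameter w lies in N = [0, 1].
    # eps is kept small so that each f_w is a diffeomorphism of the circle.
    return (x + 0.35 + eps * np.sin(2.0 * np.pi * x) + 0.1 * w) % 1.0

def skew_product_orbit(x0, n_steps):
    # One realization omega = (omega_0, omega_1, ...) drawn i.i.d. from mu (uniform here),
    # then x_{i+1} = f(x_i, omega_i) as in (2.1); shifting omega gives the second factor of (2.2).
    omega = rng.uniform(0.0, 1.0, size=n_steps)
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for i in range(n_steps):
        xs[i + 1] = f(xs[i], omega[i])
    return xs, omega

xs, omega = skew_product_orbit(0.2, 2000)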

Since the space Σ contains all possible sequences of elements of N, this gives us a very general model of systems driven by arbitrary sequences. Furthermore, if one wants to restrict interest to a particular class of input sequences, one can replace Σ by any shift invariant closed subset. If in addition we have some measure μ on N then the corresponding product measure μ^∞ on Σ is shift invariant and hence (σ, μ^∞) gives rise to a (Bernoulli) stochastic process. We can also consider general σ-invariant measures μ_Σ to take account of correlations in the choice of successive ω_i (so that for instance ω is a Markov process). The restriction to σ-invariant measures corresponds to a stationarity condition on the corresponding random process. Additionally, f could be taken to be a general function f : M×Σ → M, rather than just depending on a single component ω_0 of ω. We shall not do so here, though as we shall soon see, the delay reconstruction map will depend on more than a single element of ω.

If we dispense with the measure μ, the same formalism can also be used to model a deterministic system driven by an arbitrary input sequence ω. This arises frequently in communications systems where the sequence ω would represent the information being transmitted (eg [Broomhead et al., 1999]). Another application is to irregularly sampled time series where ω_i denotes the time between sample i and i+1 (see [Martin, 1998], and Example 3.5 of [Stark, 1999]).

2.2. CONJUGACIES FOR RANDOM DYNAMICAL SYSTEMS

The crucial question we need to address in order to develop an analogue of Takens’ Theorem for random dynamical systems is: what do we mean by a delay reconstruction of T? It is well known (eg [Takens, 1980; Stark, 1999; Stark, 2001]) that the fundamental property required of a reconstruction is that the reconstructed system (which we shall call F, in line with [Stark, 1999; Stark, 2001]) should be equivalent to the original dynamical system f under a co-ordinate change. In the standard Takens embedding framework we therefore have F = Φ°f°Φ^{-1}, where Φ is the delay reconstruction map.

Figure 2.1 Graphical representation of a random dynamical system, from [Stark, 2001].

We thus need to determine what it means for two random dynamical systems T and T' to be equivalent in this way. The most general concept is simply to require T' = H°T°H^{-1} for some (invertible) co-ordinate change H : M×Σ → M×Σ. It may not be possible to arrange for T' = H°T°H^{-1} to hold for all ω ∈ Σ, and hence we may either require this for only μ_Σ-almost every ω in the probabilistic setting, or for only generic ω in a topological setting.

In the context of delay reconstruction, however, it turns out to be convenient to place some further restrictions on H. The space M×Σ will in general be infinite dimensional, and hence we have little hope of reconstructing it in a finite dimensional delay space R^d. Furthermore φ is a function of M only and hence attempting to reconstruct Σ using φ seems foolhardy (though note that in the case of deterministic forcing, this is in fact possible, [Stark, 1999]). We therefore want to restrict ourselves to conjugacies which correspond to reconstructing only M and leaving Σ untouched. A reasonable interpretation of this is to require that the Σ component of H is the identity (Figure 2.2). In other words, we only consider conjugacies of the form H = (h, Id) for some map h : M×Σ → M. If H is to be invertible for μ_Σ-almost every ω, or for generic ω, then h_ω = h(·,ω) : M → M has to be invertible for μ_Σ-almost every ω, or for generic ω. We shall refer to co-ordinate changes of this form as bundle conjugacies (using the term loosely for any of h_ω, h and H itself). Co-ordinate changes of this form are common in the theory of random dynamical systems (eg [Arnold, 1998]).

Given a skew product (2.2), it is convenient to define ~f : M×Σ → M by ~f(x,ω) = f(x,ω_0), so that T = (~f,σ). If similarly T' = (~f',σ) then T' = H°T°H^{-1} is equivalent to (~f',σ) = (h, Id)°(~f,σ)°(h, Id)^{-1}. If we denote ~f_ω = ~f(·,ω) : M → M we have (h, Id)°(~f,σ)°(h, Id)^{-1}(·,ω) = (h, Id)°(~f_ω°h_ω^{-1}, σ(ω)) = (h_{σ(ω)}°~f_ω°h_ω^{-1}, σ(ω)). Hence

~f'_ω = h_{σ(ω)} ° ~f_ω ° h_ω^{-1}    (2.3)

where as usual ~f'_ω = ~f'(·,ω) : M → M. Note that in [Stark et al., 1997] and [Stark, 1999] we abused notation to write f_ω instead of ~f_ω. Observe the similarity between (2.3) and the deterministic conjugacy f' = h°f°h^{-1}. Essentially all we do in the random case is index both the dynamics and the co-ordinate change with ω. The only slightly delicate point is the σ appearing in h_{σ(ω)}. The reason for this is that by the time we come to apply h in equation (2.3) we have carried out one time step of the dynamics, and hence ω has moved to σ(ω) (Figure 2.2).

In applications to reconstruction we need to consider one further issue, namely that the domain and range of H will typically not be the same space. Thus, recall that in the standard Takens framework the delay reconstruction map Φ is a map Φ : M → R^d. The main content of Takens’ Theorem is that this is an embedding (a diffeomorphism onto its image Φ(M)) and hence we can define the co-ordinate change F = Φ°f°Φ^{-1} on Φ(M). In the present context Φ will depend on ω, and so for each ω can be regarded as a map Φ_ω : M → R^d (see §2.3 below). Informally our concept of a “Stochastic Takens’ Theorem” is to require Φ to be a bundle embedding. By this we mean that Φ_ω should be an embedding (a diffeomorphism onto its image Φ_ω(M)) for typical ω, ie for μ_Σ-almost every ω, or for generic ω. Note that in general the image Φ_ω(M) will be different for each ω (but all these images will be diffeomorphic to M). The range of the map H = (Φ, Id) will thus not be M×Σ as above, but rather the reconstruction space R^d×Σ. When Φ is a bundle embedding, the co-ordinate change T' = H°T°H^{-1} is defined on H(M×Σ). This space cannot typically be written in the form M'×Σ for some appropriate M', but it is bundle diffeomorphic to M×Σ.

2.3. THE STOCHASTIC TAKENS’ THEOREM

We shall now define the reconstruction map more precisely, and give several versions of a “Stochastic Takens’ Theorem”. Suppose that we observe the skew product (2.2) using a measurement function φ : M → R, so that the observed time series is generated by φ_i = φ(x_i) (for the moment we assume that φ is independent of ω). The usual delay map can then be written as

Φ_{f,φ}(x,ω) = (φ(x), φ(f_{ω_0}(x)), φ(f_{ω_1ω_0}(x)), …, φ(f_{ω_{d-2}…ω_0}(x)))

where f_{ω_i…ω_0} = f_{ω_i} ° … ° f_{ω_0} (as in §3.5 of [Stark, 1999]). Observe that Φ_{f,φ} can be regarded as either a map M×N^{d-1} → R^d, or a map M×Σ → R^d. In the latter form, Φ_{f,φ} is a candidate for a bundle embedding, which is equivalent to the condition that Φ_{f,φ,ω} = Φ_{f,φ}(·,ω) should be an embedding for μ_Σ-almost every ω, or for generic ω. In [Stark, 1999] we were able to prove that this was indeed the case for finite dimensional deterministic forcing. Given that in the setting described in §2.1, σ defines a deterministic dynamical system (albeit an infinite dimensional one) and Φ_{f,φ,ω} depends only on a finite number of components of ω, it should not come as a great surprise that we can modify Theorem 3.2 of [Stark, 1999] to hold in the present context.
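Numerically, Φ_{f,φ,ω}(x) is computed by applying f_{ω_0}, f_{ω_1}, … in turn and recording the observable at each step; note that only the components ω_0, …, ω_{d-2} of ω are used. The sketch below is an illustration only (the forced circle map and the observable are the arbitrary choices used earlier, not the authors’ example).

import numpy as np

def delay_map(f, phi, x, omega, d):
    # Phi_{f,phi,omega}(x) = (phi(x), phi(f_{omega_0}(x)), ..., phi(f_{omega_{d-2}} ... f_{omega_0}(x)))
    values = [phi(x)]
    for i in range(d - 1):
        x = f(x, omega[i])             # apply f_{omega_i}
        values.append(phi(x))
    return np.array(values)

def f(x, w, eps=0.1):
    # Same arbitrary forced circle map as before (M = R/Z, N = [0, 1]).
    return (x + 0.35 + eps * np.sin(2.0 * np.pi * x) + 0.1 * w) % 1.0

phi = lambda x: np.cos(2.0 * np.pi * x)     # an arbitrary smooth observable on the circle

rng = np.random.default_rng(1)
omega = rng.uniform(0.0, 1.0, size=2)       # omega_0, omega_1 suffice for d = 3
point = delay_map(f, phi, 0.2, omega, d=3)  # d = 3 = 2m + 1 for m = dim M = 1
print(point)                                # a point of Phi_{f,phi,omega}(M) in R^3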

As in [Stark, 1999], denote by D^r(M×N, M) the space of maps f : M×N → M such that f_y : M → M is a C^r diffeomorphism of M for any y, where as usual f_y(x) = f(x,y). Then

Theorem 2.1 Let M and N be compact manifolds of dimension m ≥ 1 and n respectively. Suppose that d ≥ 2m+1. Then for r ≥ 1, there exists a residual set of (f,φ) ∈ D^r(M×N, M) × C^r(M,R) such that for any (f,φ) in this set there is an open dense set of ω in Σ such that Φ_{f,φ,ω} is an embedding.

The same approach also gives a measure theoretic version of this theorem. Since the proof ultimately relies on Sard’s Theorem, we need to impose some conditions on the shift invariant measure on Σ that describes the stochastic forcing. The most straightforward case is when we assume that this measure is a product measure μ^∞ arising from a measure μ on N, as discussed above in §2.1. This gives the following statement of the theorem, already announced in [Stark et al., 1997] and [Stark, 1999]:

Theorem 2.2 Let M and N be compact manifolds of dimension m ≥ 1 and n respectively and let μ be a measure on N which is absolutely continuous with respect to Lebesgue measure. Suppose that d ≥ 2m+1. Then for r ≥ 1, there exists a residual set of (f,φ) ∈ D^r(M×N, M) × C^r(M,R) such that for any (f,φ) in this set Φ_{f,φ,ω} is an embedding for μ^∞-almost every ω.

Our method of proof, however, allows for more general stochastic processes. The key property required of μ_Σ is that the marginal measure μ_{d-1} on cylinders of length d-1 is absolutely continuous with respect to Lebesgue measure. Recall that μ_{d-1} is the measure on N^{d-1} defined by μ_{d-1}(U) = μ_Σ((π_{d-1})^{-1}(U)) for all measurable sets U ⊆ N^{d-1}, where π_{d-1} : Σ → N^{d-1} is the projection π_{d-1}(ω) = (ω_0, …, ω_{d-2}). We then have

Theorem 2.3 (Takens’ Theorem for Stochastic Systems) Let M and N be compact manifolds of dimension m ≥ 1 and n respectively and let μ_Σ be an invariant measure on Σ = N^Z such that μ_{d-1} is absolutely continuous with respect to Lebesgue measure on N^{d-1}. Suppose that d ≥ 2m+1. Then for r ≥ 1, there exists a residual set of (f,φ) ∈ D^r(M×N, M) × C^r(M,R) such that for any (f,φ) in this set Φ_{f,φ,ω} is an embedding for μ_Σ-almost every ω.

Note that Φ_{f,φ,ω} only depends on the finite number of components (ω_0, …, ω_{d-2}) ∈ N^{d-1} and hence, as we shall see in §3.1 below, it is sufficient to prove that Φ_{f,φ,ω} is an embedding for μ_{d-1}-almost every (ω_0, …, ω_{d-2}). When N is zero dimensional, and hence consists of a finite number of points, the only set which has μ_{d-1} full measure is the whole of N^{d-1}. Thus when dim N = 0 the theorem implies that Φ_{f,φ,ω} is an embedding for all ω. It turns out that in this case we need to use a different proof (see §3.2 below for a more detailed explanation) to that for dim N > 0, which also gives an open dense set of (f,φ), rather than merely a residual set. We thus get

Theorem 2.4 (Takens’ Theorem for Iterated Function Systems) Let M and N be compact manifolds of dimension m ≥ 1 and n = 0 respectively (so that N is a finite set of points). Suppose that d ≥ 2m+1 and r ≥ 1. Then there exists an open dense set of (f,φ) ∈ D^r(M×N, M) × C^r(M,R) such that for any (f,φ) in this set Φ_{f,φ,ω} is an embedding for every ω ∈ Σ.

This thus encompasses both Theorems 2.1 and 2.3 for dim N = 0, and is proved separately in §6. It is also possible to incorporate “noisy observations” in either of these results. This in fact should lead to easier proofs, since it allows us greater freedom in making perturbations. On the other hand, such a generalization does significantly complicate the notation. We therefore postpone discussion of this until §2.5 below.
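To illustrate the iterated function system setting of Theorem 2.4 concretely (a sketch only; the two circle diffeomorphisms and the coin-flip forcing are arbitrary choices, not taken from the paper), the forcing space N is finite, so f_{ω_i} is simply drawn from a finite list of maps:

import numpy as np

rng = np.random.default_rng(2)

# N = {0, 1}: two arbitrary circle diffeomorphisms to switch between.
def f0(x):
    return (x + 0.23 + 0.05 * np.sin(2.0 * np.pi * x)) % 1.0

def f1(x):
    return (x + 0.61 + 0.05 * np.sin(2.0 * np.pi * x)) % 1.0

maps = [f0, f1]
phi = lambda x: np.cos(2.0 * np.pi * x)      # arbitrary observable

omega = rng.integers(0, 2, size=1000)        # symbol sequence omega in {0,1}^1000
x = 0.2
series = []
for w in omega:
    series.append(phi(x))
    x = maps[w](x)                           # x_{i+1} = f_{omega_i}(x_i)

d = 3                                        # d >= 2m + 1 with m = 1
Z = np.array([series[i:i + d] for i in range(len(series) - d + 1)])
# Theorem 2.4 asserts that, generically, x -> (phi(x), phi(f_{w0}(x)), phi(f_{w1}(f_{w0}(x))))
# is an embedding for every choice of (w0, w1); Z collects such delay vectors along the orbit.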

2.4. RECONSTRUCTING THE DYNAMICS

The most important consequence of the standard Takens’ Theorem is that when Φ is an embedding it is possible to reconstruct a copy of the original system from successive observations of φ. Essentially the same situation holds in our generalized framework (Figure 2.3).

Thus suppose that ω is such that Φ_{f,φ,ω} and Φ_{f,φ,σ(ω)} are both embeddings of M. Then the map F_ω = Φ_{f,φ,σ(ω)} ° f_{ω_0} ° (Φ_{f,φ,ω})^{-1} is well defined and is a diffeomorphism between Φ_{f,φ,ω}(M) ⊆ R^d and Φ_{f,φ,σ(ω)}(M) ⊆ R^d. Let (x_i, σ^i(ω)) be an orbit of (f,σ), so that x_{i+1} = f(x_i, ω_i), recall that φ_i = φ(x_i) and define z_i = (φ_i, φ_{i+1}, …, φ_{i+d-1}). Then z_i = Φ_{f,φ,σ^i(ω)}(x_i) and hence

z_{i+1} = Φ_{f,φ,σ^{i+1}(ω)}(x_{i+1}) = Φ_{f,φ,σ^{i+1}(ω)}(f_{ω_i}(x_i)) = Φ_{f,φ,σ(σ^i(ω))}(f_{ω_i}((Φ_{f,φ,σ^i(ω)})^{-1}(z_i))) = F_{σ^i(ω)}(z_i)

Therefore, in exact analogy to the standard Takens’ framework, F_{σ^i(ω)} is just the map which shifts a block of the time series forward by one time step, and hence (φ_i, φ_{i+1}, …, φ_{i+d-1}) ↦ (φ_{i+1}, φ_{i+2}, …, φ_{i+d}) is bundle conjugate to our original dynamics f_{ω_i}. Note however, that whereas f_{ω_i} only depends on ω_i = σ^i(ω)_0, the map F_{σ^i(ω)} depends on ω_i, ω_{i+1}, …, ω_{i+d-1}. Also in contrast to the standard framework, different F_ω have different domains (each a subset of R^d diffeomorphic to M). There is no reason in general why these should all be disjoint.

The first d-1 components of F_ω are trivial. If we denote the last component by G_ω : Φ_{f,φ,ω}(M) → R then

φ_{i+d} = G_{σ^i(ω)}(φ_i, φ_{i+1}, …, φ_{i+d-1})

If we write out the dependence on σ^i(ω) explicitly, we get

φ_{i+d} = G(φ_i, φ_{i+1}, …, φ_{i+d-1}, ω_i, ω_{i+1}, …, ω_{i+d-1})    (2.4)

The existence of such a function was conjectured by [Casdagli, 1992]. From another point of view, processes of this form are known in signal processing under the name of Nonlinear Auto-Regressive Moving Average (NARMA) models. Some authors (eg [Chen and Billings, 1989]) have already used models with the general structure of (2.4) (but with ω_i restricted to one dimension). More commonly, however, it is assumed that there is only a single additive random component:

φ_{i+d} = G(φ_i, φ_{i+1}, …, φ_{i+d-1}) + ω_{i+d-1}
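In practice a model of the form (2.4) is fitted by regressing φ_{i+d} on the d previous observations together with the d corresponding forcing values. The sketch below (an illustration under the arbitrary forced circle map used earlier, not the authors’ procedure) simply assembles these input-output pairs; any regression scheme could then be used to estimate G when the forcing realization is known.

import numpy as np

rng = np.random.default_rng(3)

def f(x, w, eps=0.1):
    return (x + 0.35 + eps * np.sin(2.0 * np.pi * x) + 0.1 * w) % 1.0

phi = lambda x: np.cos(2.0 * np.pi * x)

# Simulate a forced orbit and its observations.
n, d = 2000, 3
omega = rng.uniform(0.0, 1.0, size=n)
x, series = 0.2, []
for i in range(n):
    series.append(phi(x))
    x = f(x, omega[i])
series = np.array(series)

# Training pairs for (2.4): inputs (phi_i, ..., phi_{i+d-1}, omega_i, ..., omega_{i+d-1}), target phi_{i+d}.
X = np.array([np.concatenate([series[i:i + d], omega[i:i + d]]) for i in range(n - d)])
y = series[d:n]
print(X.shape, y.shape)    # (1997, 6) (1997,)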

Figure 2.3 Delay reconstruction of a random dynamical system.

It is clear that estimating both G and ω in a model of the form (2.4) will be a major challenge. Note that ω_i, …, ω_{i+d-2} have all been determined by time i+d-1, and so there is at least some hope of estimating them from previous values of the time series. By contrast ω_{i+d-1} corresponds to new uncertainty entering the system in the time step from i+d-1 to i+d.

2.5. NOISY OBSERVATIONS

So far we have assumed that the observations are completely noise free. This is clearly unrealistic in many applications. In this section we therefore generalize our approach to cover the possibility of noise in the observations. We can envisage two cases: the observation noise may be independent of the noise in the dynamics, or both may depend on the same underlying stochastic process. In the first case, which is more likely to be relevant in applications, we can construct another shift space Σ' = (N')^Z to represent the observation dynamics, where N' is some compact manifold. The observation function is now φ : M×N' → R, which gives rise to the delay map Φ_{f,φ,ω,η} defined by

Φ_{f,φ,ω,η}(x) = (φ_{η_0}(x), φ_{η_1}(f_{ω_0}(x)), φ_{η_2}(f_{ω_1ω_0}(x)), …, φ_{η_{d-1}}(f_{ω_{d-2}…ω_0}(x)))

where φ_{η_i}(x) = φ(x,η_i) for η ∈ Σ'. We can then require this to be an embedding for typical (ω,η) ∈ Σ×Σ':

Theorem 2.5 Let M, N and N' be compact manifolds, with m = dim M > 0. Suppose that d ≥ 2m+1. Then for r ≥ 1, there exists a residual set of (f,φ) ∈ D^r(M×N, M) × C^r(M×N', R) such that for any (f,φ) in this set there is an open dense set Σ_{f,φ} ⊆ Σ×Σ' such that Φ_{f,φ,ω,η} is an embedding for all (ω,η) ∈ Σ_{f,φ}. If μ_Σ and μ'_{Σ'} are invariant measures on Σ and Σ' respectively, such that μ_{d-1} and μ'_d are absolutely continuous with respect to Lebesgue measure on N^{d-1} and (N')^d respectively, then we can choose Σ_{f,φ} such that μ_Σ×μ'_{Σ'}(Σ_{f,φ}) = 1.

As with Theorem 2.3, we need to treat the cases dim N = 0 and dim N' = 0 separately. Details can be found in §7 below, which also contains the proofs of the various resulting versions of Theorem 2.5. The other possibility is that the observation noise depends on the same process as the dynamic noise. The relevant delay map is then

~Φ_{f,φ,ω}(x) = (φ_{ω_0}(x), φ_{ω_1}(f_{ω_0}(x)), φ_{ω_2}(f_{ω_1ω_0}(x)), …, φ_{ω_{d-1}}(f_{ω_{d-2}…ω_0}(x)))

where φ_{ω_i}(x) = φ(x,ω_i). We then get

Theorem 2.6 Let M and N be compact manifolds, with m = dim M > 0. Suppose that d ≥ 2m+1. Then for r ≥ 1, there exists a residual set of (f,φ) ∈ D^r(M×N, M) × C^r(M×N, R) such that for any (f,φ) in this set there is an open dense set Σ_{f,φ} ⊆ Σ such that ~Φ_{f,φ,ω} is an embedding for all ω ∈ Σ_{f,φ}. If μ_Σ is an invariant measure on Σ = N^Z such that μ_d is absolutely continuous with respect to Lebesgue measure on N^d, then we can choose Σ_{f,φ} such that μ_Σ(Σ_{f,φ}) = 1.

Again, the case dim N = 0 is treated separately. Details and proofs are in §8.
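For completeness, the noisy-observation delay map Φ_{f,φ,ω,η} can be computed in exactly the same way as before, except that the observable now also takes a noise symbol η_i at each step. The sketch below is again purely illustrative (the maps, the observable φ(x,η') = cos(2πx) + 0.05η', and the choice N' = [-1,1] are arbitrary, not the authors’).

import numpy as np

def noisy_delay_map(f, phi, x, omega, eta, d):
    # Phi_{f,phi,omega,eta}(x) = (phi_{eta_0}(x), phi_{eta_1}(f_{omega_0}(x)), ..., phi_{eta_{d-1}}(f_{omega_{d-2}}...f_{omega_0}(x)))
    values = [phi(x, eta[0])]
    for i in range(d - 1):
        x = f(x, omega[i])
        values.append(phi(x, eta[i + 1]))
    return np.array(values)

def f(x, w, eps=0.1):
    return (x + 0.35 + eps * np.sin(2.0 * np.pi * x) + 0.1 * w) % 1.0

phi = lambda x, e: np.cos(2.0 * np.pi * x) + 0.05 * e   # observation depends on a noise symbol

rng = np.random.default_rng(4)
d = 3
omega = rng.uniform(0.0, 1.0, size=d - 1)   # dynamical forcing omega_0, ..., omega_{d-2}
eta = rng.uniform(-1.0, 1.0, size=d)        # observation noise symbols from N' = [-1, 1]
print(noisy_delay_map(f, phi, 0.2, omega, eta, d))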

3. STRUCTURE OF THE PROOFS OF THEOREMS 2.1 AND 2.3

The proofs of Theorems 2.1 and 2.3 are largely motivated by the proof of Theorem 3.2 of [Stark, 1999]. The main new ingredient is that the driving system is now infinite dimensional. However, since Φ_{f,φ,ω} depends only on a finite number of components of ω, we can easily reduce the theorem to a finite dimensional one. This is done in §3.1 below. Additionally, in the measure theoretic case (Theorem 2.3) we require a measure theoretic version of the finite dimensional Parametric Transversality Theorem ([Abraham, 1963; Abraham and Robbin, 1967] or see Appendix A of [Stark, 1999]). As far as we are aware there is no published proof of this theorem, but it follows trivially by replacing the use of Smale’s Density Theorem in the standard Parametric Transversality Theorem by an application of Sard’s Theorem. We sketch the argument in Appendix A.

3.1. REDUCTION TO FINITE DIMENSIONS

Since Φf ,?,ω depends only on ω0, …, ωd -2 , it turns out to be sufficient to consider subsets of N d -1rather than of Σ itself. Recall that we defined the projection πd -1 : Σ → N

d -1 by πd -1(ω) = (ω0, …,ωd -2). Given any U ? N

d -1 w

e can define the cylinder ΣU by ΣU = (πd -1)-1(U ) so that ΣU

={ ω ∈Σ : (ω 0 , …

, ωd -2)∈U }Then Lemma 3.1 If U is open and dense in N d -1 then ΣU is open and dense in Σ

. If U is residual in N d -1 then ΣU is residual in Σ

.Proof Recall that by the definition of the product topology on Σ , πd -1 is a continuous open mapping. Thus if U is open then ΣU = (πd -1)-1(U ) is open. Now suppose that ΣU is not dense.Then Σ\ΣU contains some open set V . Since π d -1(V ) ∩ U = ?, and πd -1(V ) is open this means that U cannot be dense. Hence if U is dense, then so is ΣU . If U is residual then U contains the countable intersection of dense open sets U i

. By above, each (πd -1)-1(U i ) is open and dense and hence their intersection is residual.I If μΣ is a measure on Σ recall that we defined the marginal measure μd -1 on N

d -1 by μd -1(U ) =μΣ((πd -1)-1(U )) for all measurabl

e sets U ? N

d -1. This definition immediately implies Lemma 3.2 If U ? N d -1 has full measur

e with respect to μd -1

, then ΣU has full measure with respect to μΣ.We also have the elementary

Lemma 3.3 Suppose that U ? N d -1 is a measurable set of full Lebesgue measure (ie N d -1\U has Lebesgue measure 0). Then U is dense in N d -1 and μd -1(U ) = 1 for any measure μd -1 that is ab-solutely continuous with respect to Lebesgue measure on N d -1.

Proof If U is not dense then N d -1\U contains an open set and hence is not of Lebesgue measure 0. Hence U cannot be of full Lebesgue measure. This contradiction implies that U must be dense if it has full Lebesgue measure. For the second part, if N d -1\U has Lebesgue measure 0,then μd -1(N d -1\U ) = 0 by the definition of absolute continuity. Thus μd -1(U ) = 1, as required.I

Finally, if ω ∈Σ, define y = πd -1(ω) = (ω 0 , … , ωd -2)∈N

d -1. Then Φf ,?,ω(x ) = Φf ,?,y (x ) wher

e Φ

f ,?,y (x )

=(?( f (0)(x ,y )), ?( f (1)(x ,y )), …, ?(f (d -1)(x ,y )))?(3.1)with f (i ) : M ×N d -1 → M defined in an analogous fashion to §3.2 of [Stark, 1999] by f (i +1)(x ,y ) =

f ( f (i )(x ,y ),y i ) = f y i ( f (i )(x ,y )) and f (0)(x ,y ) = x . Here y i is the i

th component of y , so that y i = ωi ; we adopt this notation to emphasize that y depends on only a finite number of components of ω.Using Lemmas 3.1, 3.2 and 3.3, we can then immediately deduce Theorems 2.1 and 2.3 from Theorem 3.4 Let M and N be compact manifolds, with m = dim M > 0 and suppose that d ≥2m +1. Then for r ≥ 1, there exists a residual set of (f ,?)∈D r (M ×N , M )× C r (M , R ) such that for any ( f ,?) in this set there is an open set U f ,? ? N d -1 of full Lebesgue measure such that Φf ,?,y is an embedding for all y ∈U f ,?

.3.2. T HE D IMENSION OF N

Unfortunately, it turns out that to proceed further we have to treat the case of n = dim N = 0 (ie the iterated function system case) separately from the general case of n ≥ 1. The reason for this is that, as in Theorem 3.2 of [Stark, 1999], one of our first steps is to eliminate driving sequences ω with any repeating entries. Thus after reduction to finite dimensions, we remove those ω for which ω_i = ω_j for any i ≠ j with 0 ≤ i, j ≤ d-2. When n ≥ 1, the set of such ω has zero Lebesgue measure in N^{d-1} and hence can be ignored. However, when n = 0 and N consists of a finite number of discrete points, this set will in general have positive measure. In effect, for n = 0 we have to prove the theorem for all ω, not just Lebesgue almost all.

Additional insight into the n = 0 case comes from the observation that if N is a single point, then Theorems 2.1 and 2.3 reduce to the standard Takens’ Theorem. Any proof of these theorems for n = 0 must therefore include a proof of the standard result. Such a proof (eg see §4 of [Stark, 1999]) has to treat the short periodic orbits of f with some care. The image under the delay map of any point on such an orbit has components that are necessarily identical. This is because for such points f^i(x) = f^j(x) for some 0 ≤ i < j ≤ 2d-1. Each such point therefore has to be embedded individually. This can easily be done, since for generic f on a compact M there is only a finite number of short periodic orbits (this is the easy part of the Kupka-Smale Theorem; a self-contained proof is given in §4.2.1 of [Stark, 1999]). In Theorems 2.1 and 2.3, the analogues of points on short periodic orbits are points whose images under Φ_{f,φ,y} have identical components, that is points such that x_i = x_j for some 0 ≤ i < j ≤ 2d-1. Thus define

~P_{ω_0…ω_{q-1}} = { x ∈ M : f_{ω_{i-1}…ω_0}(x) = f_{ω_{j-1}…ω_0}(x) for some i ≠ j with 0 ≤ i < j ≤ q }

A technically intricate, but conceptually straightforward, extension of the argument in §5.1.1 of [Stark, 1999] can be used to show that for an open dense set of f, the set ~P_{ω_0…ω_{q-1}} consists of a finite number of points for any given ω_0, ω_1, …, ω_{q-1}. Thus, if dim N = 0, so that there is only a finite number of choices of ω_0, …, ω_{q-1}, the set

~P^{(q)} = ⋃_{(ω_0,…,ω_{q-1}) ∈ N^q} ~P_{ω_0…ω_{q-1}}

is finite, and each point in it can be dealt with individually. On the other hand if dim N ≥ 1, then this set is no longer finite, and a different approach is necessary, eg the one adopted in Theorem 3.2 of [Stark, 1999].

To summarize, therefore, we have to give somewhat different proofs in the cases dim N ≥ 1 and dim N = 0 respectively. We shall outline the main ideas of the proof in the next two sections, and the detailed proofs are then given in §5 and §6 respectively.

3.3. MAIN IDEAS: DIM N ≥ 1

When dim N ≥ 1, Theorem 3.4 is very similar to Theorem 3.2 of [Stark, 1999], and it is not surprising that the proofs proceed along almost identical lines. The strategy is to first show that for a residual set of f and φ, ~TΦ_{f,φ} is transversal to the zero section in TR^d and Φ_{f,φ} × Φ_{f,φ} is transversal to the diagonal in R^d×R^d (with the domain of Φ_{f,φ,y} × Φ_{f,φ,y} being restricted to M×M\Δ, where Δ is the diagonal in M×M). Here Φ_{f,φ} : M×N^{d-1} → R^d is defined by Φ_{f,φ}(x,y) = Φ_{f,φ,y}(x), with Φ_{f,φ,y} given by (3.1), and ~TΦ_{f,φ} is the tangent map of Φ_{f,φ} restricted to the unit tangent bundle ~TM = { v ∈ TM : ||v|| = 1 } of M. We then treat y as a parameter and apply the Parametric Transversality Theorem to the maps y ↦ ~TΦ_{f,φ,y} and y ↦ Φ_{f,φ,y} × Φ_{f,φ,y}. This shows that for a residual set of full measure of y the maps ~TΦ_{f,φ,y} and Φ_{f,φ,y} × Φ_{f,φ,y} are transversal to the zero section in TR^d and the diagonal in R^d×R^d respectively. We then apply the same dimension counting argument as was used repeatedly in [Stark, 1999] to show that if d ≥ 2m+1, then transversality implies non-intersection. Thus for a residual set of full measure of y the image of ~TΦ_{f,φ,y} cannot intersect the zero section in TR^d and the image of Φ_{f,φ,y} × Φ_{f,φ,y} (restricted to M×M\Δ) cannot intersect the diagonal in R^d×R^d. But these are precisely the statements of the immersivity and injectivity of Φ_{f,φ,y}, respectively.

Exactly as in [Stark, 1999], the big problem is points such that x_i = x_j with i ≠ j. We proceed just as we did there, by ensuring that such points occur on a family of submanifolds ~W_I of M×N^{d-1} and then dealing with each of these separately. Again, each ~W_I is characterized by the set of pairs (i,j) for which x_i = x_j, and the more such pairs (i,j) there are, the smaller the dimension of the corresponding ~W_I. In particular we shall see that ~W_{I,y} = ~W_I ∩ (M×{y}) has codimension (d-γ)m where γ is the number of distinct points in the set {x_0, …, x_{d-1}}. Thus it seems plausible that generically Φ_{f,φ,y} should embed ~W_{I,y} if γ ≥ 2(m - (d-γ)m) + 1. But this is always satisfied if d ≥ 2m+1, since then 2m + 1 - 2(d-γ)m - γ ≤ (d-γ)(1-2m) ≤ 0. Since the union of the ~W_{I,y} over all I is M×{y}, this means that Φ_{f,φ,y} should embed the whole of M×{y}. Of course, the problem with this argument as stated is that the residual set of f and φ for which Φ_{f,φ,y} is an embedding will depend on y, and hence a priori the intersection of these sets over an open and dense set of y will not necessarily be residual. We deal with this as in §6 of [Stark, 1999] by combining the construction of ~W_I, which is itself a transversality argument, with the proof of the transversality of ~TΦ_{f,φ,y} and Φ_{f,φ,y} × Φ_{f,φ,y} on ~W_I. This allows a single invocation of the Parametric Transversality Theorem to yield a residual set of f and φ.

3.4. MAIN IDEAS: DIM N = 0

The overall structure closely follows the transversality proof of the standard Takens’ Theorem given in [Stark, 1999], which in turn is based on Takens’ original proof ([Takens, 1980; Huke, 1993]). We first ensure that periodic orbits of period less than 2d are isolated and have distinct eigenvalues, and then embed these individually. We then use the Parametric Transversality Theorem exactly as outlined above for dim N > 0 to show that for a dense set of f and φ, the map ~TΦ_{f,φ} is transversal to the zero section in TR^d and Φ_{f,φ} × Φ_{f,φ} is transversal to the diagonal in R^d×R^d. Counting dimensions shows that if d ≥ 2m, transversality implies non-intersection, so that Φ_{f,φ} is respectively immersive and injective.

The main complication is dealing with the short periodic orbits. In particular, a priori it is possible for two such orbits for two different sequences (ω_0, ω_1, …, ω_{q-1}) and (ω'_0, ω'_1, …, ω'_{q'-1}) to have points in common. Not only does this prevent us perturbing φ independently on each orbit when we embed the orbit, but in fact it turns out to be an obstruction if we try and mimic the proof of Lemma 4.8 of [Stark, 1999] to show that generically periodic orbits are isolated. A different way of stating this problem is that in the standard case, if x is a periodic orbit of f of minimal period q then f^i(x) = x if and only if i = 0 (mod q). This need no longer be the case in the present situation (and in fact the definition of minimal period requires some care). It turns out that to avoid this problem, we need to carry out an inductive step which simultaneously shows that if periodic orbits of period less than q are isolated and those for different sequences are distinct, then the same holds for period q orbits. This then allows us to carry out the remainder of the proof in a more or less straightforward fashion.

4. PRELIMINARY CALCULATIONS

4.1. NOTATION

As in §6 of [Stark, 1999] we first prove the theorem for a sufficiently large r, and then show in §5.5 that this implies the theorem for all r ≥ 1. It turns out that initially we shall want f and φ to be C^{2r} where r = n(d-1) (recall that we assume d ≥ 2m+1 and m ≥ 1). Also note that since Φ_{f,φ,y} depends continuously on y (Lemma 4.3 below) and embeddings are open in C^k(M, R^d) (eg [Hirsch, 1976]), the set of y such that Φ_{f,φ,y} is an embedding for a fixed (f,φ) is open. Hence all that we need to prove is that this set has full Lebesgue measure.

Recall that in §3.1, given (f,φ) ∈ D^{2r}(M×N, M) × C^{2r}(M,R), we defined f^{(i)} ∈ D^{2r}(M×N^{d-1}, M) by f^{(i+1)}(x,y) = f(f^{(i)}(x,y), y_i) = f_{y_i}(f^{(i)}(x,y)) and f^{(0)}(x,y) = x. The delay reconstruction map Φ_{f,φ} : M×N^{d-1} → R^d is then given by

Φ_{f,φ}(x,y) = (φ(f^{(0)}(x,y)), φ(f^{(1)}(x,y)), …, φ(f^{(d-1)}(x,y)))

and for a given y ∈ N^{d-1}, we also define Φ_{f,φ,y} : M → R^d as in (3.1) by

Φ_{f,φ,y}(x) = Φ_{f,φ}(x,y)    (4.1)

Finally, let ρ : D^{2r}(M×N, M) × C^{2r}(M,R) → D^r(M×N^{d-1}, R^d) be the map that takes (f,φ) to Φ_{f,φ}

ρ(f,φ) = Φ_{f,φ}    (4.2)

Recall (eg Appendix A of [Stark, 1999]) that the evaluation map ev_ρ : D^{2r}(M×N, M) × C^{2r}(M,R) × M×N^{d-1} → R^d of ρ is defined by ev_ρ(f,φ,x,y) = ρ(f,φ)(x,y) = Φ_{f,φ}(x,y). In proving immersivity, we shall also want the corresponding map for the tangent map of Φ_{f,φ}. Since we are only interested in the immersivity of each Φ_{f,φ,y} individually, we only want to differentiate in the x direction. Thus τ : D^{2r}(M×N, M) × C^{2r}(M,R) → VB^{r-1}(TM×N^{d-1}, TR^d) is given by

τ(f,φ) = T_1Φ_{f,φ}    (4.3)

where T_1 is the partial tangent operator in the x direction, so that T_1Φ(v) = TΦ(v,0). Here, extending the notation of Appendix B.3 of [Stark, 1999], VB^{r-1}(TM×N^{d-1}, TR^d) is the space of maps from TM×N^{d-1} to TR^d that are linear on the fibres of TM, and whose dependence on (x,y) ∈ M×N^{d-1} is C^r, but on v ∈ T_xM is only C^{r-1}. Thus VB^{r-1}(TM×N^{d-1}, TR^d) is a subspace of C^{r-1}(TM×N^{d-1}, TR^d); however, since TM is not compact, the latter lacks a natural manifold structure. More usefully, note that for a given y ∈ N^{d-1}, a map in VB^{r-1}(TM×N^{d-1}, TR^d) is defined by its action on unit vectors in TM. We can thus also treat τ as mapping into C^{r-1}(~TM×N^{d-1}, TR^d). The evaluation map ev_τ : D^{2r}(M×N, M) × C^{2r}(M,R) × TM×N^{d-1} → TR^d is given by

ev_τ(f,φ,v,y) = τ(f,φ)(v,y) = T_xΦ_{f,φ,y}(v)

Immersivity is defined by the condition that T_xΦ_{f,φ,y}(v) ≠ 0 for all v ∈ TM such that v ≠ 0. By linearity it is sufficient to consider just those v such that ||v|| = 1, and hence to restrict ourselves to the unit tangent bundle ~TM = { v ∈ TM : ||v|| = 1 }. To emphasize this, as in §3.3 above we shall denote the restriction of TΦ_{f,φ,y} to ~TM by ~TΦ_{f,φ,y}.

4.2. SMOOTHNESS OF THE EVALUATION MAPS

In order to apply the Parametric Transversality Theorem to ev_{ρ×ρ} and ev_τ we need to show that these evaluation maps are smooth. This is done in a very similar fashion to the analogous results for deterministic forcing given in Appendix C.1 and C.2 of [Stark, 1999]. We also take this opportunity to compute various derivatives of the evaluation maps that we shall use later. We begin with ρ̂_i : D^{2r}(M×N, M) → D^r(M×N^{d-1}, M) given by

ρ̂_i(f) = f^{(i)}    (4.4)

Lemma 4.1 The map ρ̂_i is C^r. Given an η ∈ T_f D^{2r}(M×N, M), denote η̄_i = T_f ρ̂_i(η). Then η̄_0 = 0 and for i = 1, …, d-1 we have

η̄_i(x,y) = η(x_{i-1}, y_{i-1}) + T_{(x_{i-1},y_{i-1})}f(η̄_{i-1}(x,y), 0)    (4.5)

Proof By induction on i. Since ρ̂_0(f) = Id for all f, ρ̂_0 is trivially C^r. Suppose that ρ̂_{i-1} is C^r for some i ≥ 1. Now, f^{(i)} = f ° (f^{(i-1)}, π_{i-1}), where π_i : M×N^{d-1} → N is the projection π_i(x,y) = y_i. Therefore ρ̂_i(f) = χ((ρ̂_{i-1}(f), π_{i-1}), f) where χ : D^r(M×N^{d-1}, M×N) × D^{2r}(M×N, M) → D^r(M×N^{d-1}, M) is the composition χ(F,f) = f ° F. Hence ρ̂_i = χ ° ((ρ̂_{i-1}, π̂_{i-1}), Id) where Id : D^{2r}(M×N, M) → D^{2r}(M×N, M) is the identity Id(f) = f and π̂_i : D^{2r}(M×N, M) → C^r(M×N^{d-1}, N) is the constant map π̂_i(f) = π_i. Clearly Id and π̂_i are C^∞ maps, and it is well known that the composition operator χ is C^r ([Eells, 1966; Foster, 1975; Franks, 1979], or see Theorem B.2 of [Stark, 1999]). Hence by the chain rule ρ̂_i is C^r.

Since ρ̂_0(f) = Id for all f, we have η̄_0 = 0. We differentiate ρ̂_i(f) = χ((ρ̂_{i-1}(f), π_{i-1}), f) using the chain rule and the fact that T_{(F,f)}χ(ζ,η) = η ° F + Tf ° ζ (see [Eells, 1966; Foster, 1975; Franks, 1979] or Theorem B.2 of [Stark, 1999]). This yields T_f ρ̂_i(η) = η ° (ρ̂_{i-1}(f), π_{i-1}) + Tf ° (T_f ρ̂_{i-1}(η), 0). Substituting T_f ρ̂_i(η) = η̄_i and ρ̂_{i-1}(f) = f^{(i-1)} we obtain η̄_i = η ° (f^{(i-1)}, π_{i-1}) + Tf ° (η̄_{i-1}, 0). Evaluating this at (x,y) and using the fact that f^{(i-1)}(x,y) = x_{i-1} and π_{i-1}(x,y) = y_{i-1} gives (4.5). ∎

The corresponding evaluation map ev_{ρ̂_i} : D^{2r}(M×N, M) × M×N^{d-1} → M is given by ev_{ρ̂_i}(f,x,y) = f^{(i)}(x,y) = x_i. Note that, with one or two exceptions, we will only ever need to evaluate Tev_{ρ̂_i} on vectors of the form (η, 0_x, 0_y). In other words, we only need to consider perturbations to f but not to x or y.

Corollary 4.2 The evaluation map ev_{ρ̂_i} is C^r and T_{(f,x,y)}ev_{ρ̂_i}(η, 0_x, 0_y) = η̄_i(x,y).

Proof By Corollary B.3 of [Stark, 1999] (which is a simple corollary of the smoothness of composition), the evaluation operator is smooth. More precisely ev : D^r(M×N^{d-1}, M) × M×N^{d-1} → M given by ev(F,x,y) = F(x,y) is C^r and T_{(F,x,y)}ev(ζ,v,w) = ζ(x,y) + T_{(x,y)}F(v,w). But ev_{ρ̂_i} = ev ° (ρ̂_i × Id_x × Id_y) and hence ev_{ρ̂_i} is C^r by the chain rule. Differentiating ev_{ρ̂_i} = ev ° (ρ̂_i × Id_x × Id_y) using the chain rule we obtain

T_{(f,x,y)}ev_{ρ̂_i}(η,v,w) = η̄_i(x,y) + T_{(x,y)}f^{(i)}(v,w)    (4.6)

Evaluating this for v = 0_x, w = 0_y yields T_{(f,x,y)}ev_{ρ̂_i}(η, 0_x, 0_y) = η̄_i(x,y), as claimed. ∎

Now define ρ_i : D^{2r}(M×N, M) × C^{2r}(M,R) → C^r(M×N^{d-1}, R) by

ρ_i(f,φ) = φ ° f^{(i)}    (4.7)

Lemma 4.3 The map ρ_i is C^r, with

T_{(f,φ)}ρ_i(η,ξ) = ξ ° f^{(i)} + Tφ ° η̄_i    (4.8)

The evaluation map ev_{ρ_i} : D^{2r}(M×N, M) × C^{2r}(M,R) × M×N^{d-1} → R is also C^r and if we denote Ξ = (f,φ,x,y) then

T_Ξ ev_{ρ_i}(η,ξ,v,w) = ξ(x_i) + T_{x_i}φ[η̄_i(x,y) + T_{(x,y)}f^{(i)}(v,w)]    (4.9)

Proof We have ρ_i = χ' ° (ρ̂_i × Id_φ) where χ' is the composition χ' : C^r(M×N^{d-1}, M) × C^{2r}(M,R) → C^r(M×N^{d-1}, R) and Id_φ is the identity on C^{2r}(M,R). As above, χ' and Id_φ are C^r and so is ρ̂_i by Lemma 4.1. Hence ρ_i is C^r by the chain rule. Differentiating ρ_i = χ' ° (ρ̂_i × Id_φ), eg using Theorem B.2 of [Stark, 1999], we obtain T_{(f,φ)}ρ_i(η,ξ) = ξ ° ρ̂_i(f) + Tφ ° T_f ρ̂_i(η) = ξ ° f^{(i)} + Tφ ° η̄_i, as required.

Turning to the evaluation function ev_{ρ_i}, we proceed in a similar fashion to Corollary 4.2. We have ev_{ρ_i} = ev' ° (ρ_i × Id_x × Id_y) where ev' : C^r(M×N^{d-1}, R) × M×N^{d-1} → R is the evaluation ev'(Φ,x,y) = Φ(x,y). By Corollary B.3 of [Stark, 1999] this is C^r and T_{(Φ,x,y)}ev'(ζ,v,w) = ζ(x,y) + T_{(x,y)}Φ(v,w). Hence ev_{ρ_i} is C^r by the chain rule and T_Ξ ev_{ρ_i}(η,ξ,v,w) = T_{(f,φ)}ρ_i(η,ξ)(x,y) + T_{(x,y)}(ρ_i(f,φ))(v,w) = ξ ° f^{(i)}(x,y) + T_{x_i}φ(η̄_i(x,y)) + T_{x_i}φ(T_{(x,y)}f^{(i)}(v,w)) = ξ(x_i) + T_{x_i}φ[η̄_i(x,y) + T_{(x,y)}f^{(i)}(v,w)]. ∎

We shall not need to evaluate T_Ξ ev_{ρ_i} except for vectors of the form (0_f, ξ, 0_x, 0_y), in other words, we only consider perturbations to φ, but not to f, x or y. We now turn to analogous results for the derivative of φ ° f^{(i)} and so define τ_i : D^{2r}(M×N, M) × C^{2r}(M,R) → VB^{r-1}(TM×N^{d-1}, TR) by

τ_i(f,φ) = T_1(φ ° f^{(i)})    (4.10)

where as usual T_1 is the partial tangent in the x direction.

Lemma 4.4 The operator τ_i is C^r. Its evaluation map ev_{τ_i} : D^{2r}(M×N, M) × C^{2r}(M,R) × TM×N^{d-1} → TR is C^{r-1} and if we denote Ξ' = (f,φ,v,y) then

T_{Ξ'} ev_{τ_i}(0_f, ξ, 0_v, 0_y) = κ(T_{x_i}ξ(v_i))

where x_i = f^{(i)}(x,y) and v_i = T_{(x,y)}f^{(i)}(v, 0_y) = T_{1,(x,y)}f^{(i)}(v), and κ is the canonical involution on T(TR). Note that in [Stark, 1999] the canonical involution is denoted by ω, but here we want to avoid confusion with the usage of ω as the forcing sequence in §2 and §3.1.

Proof We have τ_i = σ'_1 ° ρ_i where σ'_1 : C^r(M×N^{d-1}, R) → VB^{r-1}(TM×N^{d-1}, TR) is the operator that takes the partial tangent in the x direction. Then σ'_1(Φ)(w,y) = T_1Φ(w,y) = TΦ(w, 0_y) = σ'(Φ)(w, 0_y) where σ' : C^r(M×N^{d-1}, R) → VB^{r-1}(TM×TN^{d-1}, TR) is the full tangent operator. By Lemma B.11 of [Stark, 1999], σ' is C^∞ and T_Φσ'(ζ) = κ ° Tζ. Let ι_y : VB^{r-1}(TM×TN^{d-1}, TR) → VB^{r-1}(TM×N^{d-1}, TR) be the inclusion defined by ι_y(Ψ)(v,y) = Ψ(v, 0_y). Thus ι_y(Ψ) = Ψ ° (Id × L_{N^{d-1}}), where L_{N^{d-1}} : N^{d-1} → TN^{d-1} is the zero section in TN^{d-1}, ie L_{N^{d-1}}(y) = 0_y. Hence ι_y is just composition on the right with a fixed function. This is just a linear map in the local chart described in Appendix B.1 of [Stark, 1999], and hence ι_y is C^∞, with T_Ψι_y(ζ)(v,y) = ζ(v, 0_y) (this result appears as Proposition 4.2 in [Franks, 1979]). But σ'_1(Φ)(v,y) = σ'(Φ)(v, 0_y) = ι_y(σ'(Φ))(v,y) so that σ'_1 = ι_y ° σ' and thus σ'_1 is C^∞. Hence by the chain rule τ_i = σ'_1 ° ρ_i = ι_y ° σ' ° ρ_i is C^r.

Proceeding as in Corollary 4.2 and Lemma 4.3, we see that the evaluation function ev_{τ_i} is given by ev_{τ_i} = ev'' ° (τ_i × Id_v × Id_y) where ev'' : VB^{r-1}(TM×N^{d-1}, TR) × TM×N^{d-1} → TR is the evaluation ev''(Ψ,v,y) = Ψ(v,y). By Corollary B.3 of [Stark, 1999] this is C^{r-1} and T_{(Ψ,v,y)}ev''(ζ,w,w') = ζ(v,y) + T_{(v,y)}Ψ(w,w'). Thus ev_{τ_i} is C^{r-1} by the chain rule and T_{Ξ'} ev_{τ_i}(0_f, ξ, 0_v, 0_y) = T_{(f,φ)}τ_i(0_f, ξ)(v,y). Differentiating τ_i = ι_y ° σ' ° ρ_i using the chain rule gives T_{(f,φ)}τ_i(0_f, ξ) = Tι_y ° κ ° T(T_{(f,φ)}ρ_i(0_f, ξ)). By (4.8) we have T_{(f,φ)}ρ_i(0_f, ξ) = ξ ° f^{(i)} and hence T_{Ξ'} ev_{τ_i}(0_f, ξ, 0_v, 0_y) = κ(Tξ ° Tf^{(i)}(v, 0_y)) = κ(T_{x_i}ξ ° T_{(x,y)}f^{(i)}(v, 0_y)) = κ(T_{x_i}ξ(v_i)) as required. ∎

5. PROOF OF THEOREM 3.4 FOR DIM N ≥ 1

It turns out that initially we shall want f and φ to be C^{2r} where r = n(d-1) (recall that we assume d ≥ 2m+1 and m ≥ 1). Also note that since Φ_{f,φ,y} depends continuously on y (Lemma 4.3 above) and embeddings are open in C^k(M, R^d) (eg [Hirsch, 1976]), the set of y such that Φ_{f,φ,y} is an embedding for a fixed (f,φ) is open. Hence all that we need to prove is that this set has full Lebesgue measure.

As long as n ≥ 1, the set of y ∈ N^{d-1} such that y_i = y_j for some i ≠ j is closed, nowhere dense and has zero Lebesgue measure. We can thus ignore it and restrict ourselves to ~N^{d-1} where

~N^{d-1} = { y ∈ N^{d-1} : y_i ≠ y_j for all i ≠ j }    (5.1)

5.1. PARTITIONS OF THE COMPONENTS OF Φ

As sketched in §3.3 above, our overall strategy will be to make ev_{ρ×ρ} and ev_τ transversal to specific submanifolds of R^d×R^d and TR^d. The difficulty in doing this occurs at points where x_i = x_j with i ≠ j. At such points, the i-th and j-th components of Φ_{f,φ} cannot be perturbed independently. We therefore want to define independent subsets of such components. We employ the same notation as in [Stark, 1999]. Thus, let I = { I_1, I_2, …, I_α } be a partition of { 0, …, d-1 }, and define the associated equivalence relation ~_I on {0, …, d-1} by i ~_I j if and only if i and j are in the same element of the partition. Given any such partition I, let J_I be a set containing precisely one element from each I_k for k = 1, …, α. There will typically be many ways to choose such a J_I, but we arbitrarily select just one. Clearly J_I has α elements. Write these as J_I = { j_1, j_2, …, j_α } with j_1 < j_2 < … < j_α. For a given partition I, define the map Φ_{f,φ,I} : M×N^{d-1} → R^α by

Φ_{f,φ,I}(x,y) = (φ(f^{(j_1)}(x,y)), φ(f^{(j_2)}(x,y)), …, φ(f^{(j_α)}(x,y)))    (5.2)

We can also write this as

Φ_{f,φ,I}(x,y) = (φ(x_{j_1}), φ(x_{j_2}), …, φ(x_{j_α}))

where x_i = f^{(i)}(x,y). This has the advantage that it emphasizes that we observe the x co-ordinate only, but the disadvantage that it obscures the dependence of x_i on y, which can be a potential source of errors when performing calculations. Let ρ_I : D^{2r}(M×N, M) × C^{2r}(M,R) → C^r(M×N^{d-1}, R^α) be defined in the obvious way by

ρ_I(f,φ) = Φ_{f,φ,I}    (5.3)

and for any y ∈ N^{d-1}, define Φ_{f,φ,I,y} : M → R^α by

Φ_{f,φ,I,y}(x) = Φ_{f,φ,I}(x,y)

Note that ρ_I = (ρ_{j_1}, ρ_{j_2}, …, ρ_{j_α}) where ρ_i(f,φ) = φ ° f^{(i)} as defined in (4.7). As an immediate consequence of this we get

Corollary 5.1 The map ρ_I and its evaluation map ev_{ρ_I} are C^r for any partition I of { 0, …, d-1 }. If Ξ = (f,φ,x,y) then

T_Ξ ev_{ρ_I}(0_f, ξ, 0_x, 0_y) = (ξ(x_{j_1}), ξ(x_{j_2}), …, ξ(x_{j_α}))

This corollary is the crucial result underlying our whole approach: it shows that if the points x_{j_1}, x_{j_2}, …, x_{j_α} are distinct then T_Ξ ev_{ρ_I} is surjective, and hence transversal to any submanifold of R^α. Finally let τ_I : D^{2r}(M×N, M) × C^{2r}(M,R) → C^{r-1}(~TM×N^{d-1}, TR^α) be defined by

τ_I(f,φ) = T_1Φ_{f,φ,I}    (5.4)

where, as above, T_1 is the partial tangent operator in the x direction. Thus

ev_{τ_I}(f,φ,v,y) = τ_I(f,φ)(v,y) = T_xΦ_{f,φ,I,y}(v)

Similarly to ρ_I we have τ_I = (τ_{j_1}, τ_{j_2}, …, τ_{j_α}), where τ_i(f,φ) = T_1(φ ° f^{(i)}) as in (4.10), giving:

Corollary 5.2 The map τ_I and its evaluation map ev_{τ_I} are C^{r-1} for any partition I of { 0, …, d-1 }. If Ξ' = (f,φ,v,y) then

T_{Ξ'} ev_{τ_I}(0_f, ξ, 0_x, 0_y) = (κ(T_{x_{j_1}}ξ(v_{j_1})), κ(T_{x_{j_2}}ξ(v_{j_2})), …, κ(T_{x_{j_α}}ξ(v_{j_α})))

5.2. SURJECTIVITY OF ev_ρ̂

Recall from Lemma 4.1 and Corollary 4.2 that ρ̂_i : D^{2r}(M×N, M) → D^r(M×N^{d-1}, M) given by ρ̂_i(f) = f^{(i)} is C^r and T_{(f,x,y)}ev_{ρ̂_i}(η, 0_x, 0_y) = η̄_i(x,y) where η̄_i satisfies (4.5). Let ρ̂ = (ρ̂_0, ρ̂_1, …, ρ̂_{d-1}) : D^{2r}(M×N, M) → D^r(M×N^{d-1}, M^d) and denote the corresponding evaluation map ev_ρ̂ : D^{2r}(M×N, M) × M×N^{d-1} → M^d. Then ev_ρ̂ is C^r and

Lemma 5.3 T_{(f,x,y)}ev_ρ̂ is surjective at all (f,x,y) ∈ D^{2r}(M×N, M) × M×~N^{d-1}, where ~N^{d-1} is defined in (5.1). More precisely, given any (u_0, u_1, …, u_{d-1}) ∈ T_{x_0}M × … × T_{x_{d-1}}M, we can find an η ∈ T_f D^{2r}(M×N, M) such that T_{(f,x,y)}ev_ρ̂(η, u_0, 0_y) = (u_0, u_1, …, u_{d-1}).

Proof If y ∈ ~N^{d-1} then the points { y_0, y_1, …, y_{d-2} } are all distinct, and hence { (x_0, y_0), (x_1, y_1), …, (x_{d-2}, y_{d-2}) } are distinct. By Corollary C.12 of [Stark, 1999], given any set of u_i ∈ T_{x_i}M for i = 1, …, d-1 we can find an η ∈ T_f D^{2r}(M×N, M) such that η(x_{i-1}, y_{i-1}) = u_i for i = 1, …, d-1. So fix some i ∈ { 1, …, d-1 } and some u_i ∈ T_{x_i}M. Choose η ∈ T_f D^{2r}(M×N, M) such that η(x_{i-1}, y_{i-1}) = u_i, η(x_i, y_i) = -T_{(x_i,y_i)}f(u_i, 0) if i ≠ d-1 and η(x_j, y_j) = 0 for j ≠ i-1, i. Then by (4.6) T_{(f,x,y)}ev_{ρ̂_i}(η, 0_x, 0_y) = η̄_i(x,y) = η(x_{i-1}, y_{i-1}) = u_i and if i ≠ d-1 then T_{(f,x,y)}ev_{ρ̂_{i+1}}(η, 0_x, 0_y) = η(x_i, y_i) + T_{(x_i,y_i)}f(η̄_i(x,y), 0) = -T_{(x_i,y_i)}f(u_i, 0) + T_{(x_i,y_i)}f(u_i, 0) = 0. Furthermore η̄_j(x,y) = 0 for j ≠ i-1, i and hence T_{(f,x,y)}ev_ρ̂(η, 0_x, 0_y) = (0, …, 0, u_i, 0, …, 0) ∈ T_{x_0}M × … × T_{x_i}M × … × T_{x_{d-1}}M. Hence by linearity, given any (u_1, …, u_{d-1}) ∈ T_{x_1}M × … × T_{x_{d-1}}M, we can find an η ∈ T_f D^{2r}(M×N, M) such that T_{(f,x,y)}ev_ρ̂(η, 0_x, 0_y) = (0, u_1, …, u_{d-1}).

It remains to treat the first component. Note that the corresponding proof in Lemma 5.12 of [Stark, 1999] contains an erroneous calculation, but the argument used here is equally valid in that case. Recall that f^{(0)}(x,y) = x and hence given any u_0 ∈ T_{x_0}M, we have by (4.6) T_{(f,x,y)}ev_ρ̂(0_f, u_0, 0_y) = (u_0, u_1, …, u_{d-1}), for some u_1, …, u_{d-1} whose precise values do not concern us (in fact u_i = T_{(x,y)}f^{(i)}(u_0, 0) since if η = 0 we have η̄_i = 0 by (4.5)). By the first part of the proof, choose η ∈ T_f D^{2r}(M×N, M) such that T_{(f,x,y)}ev_ρ̂(η, 0_x, 0_y) = (0, u_1, …, u_{d-1}). Then T_{(f,x,y)}ev_ρ̂(-η, u_0, 0_y) = (u_0, 0, …, 0), and hence by linearity T_{(f,x,y)}ev_ρ̂ is surjective. ∎

5.3. IMMERSIVITY OF Φ

As in [Stark, 1999], the basic idea is to make ev_τ transversal to the zero section in TR^d and then count dimensions. Observe that if for some v ∈ ~T_xM we have T_xΦ_{f,φ,I,y}(v) ≠ 0 for some I, then T_xΦ_{f,φ,y}(v) ≠ 0. Therefore to prove that Φ_{f,φ} is immersive at x it is sufficient to show that for all v ∈ ~T_xM we have T_xΦ_{f,φ,I,y}(v) ≠ 0 for some I. If we define the zero section L_I in TR^α by

L_I = { v ∈ TR^α : v = 0 }

then our aim is to show that the image of T_xΦ_{f,φ,I,y} does not intersect L_I. As in [Stark, 1999], we proceed by defining a codimension (d-α)m submanifold of M^d by

Δ_I = { (z_0, z_1, …, z_{d-1}) ∈ M^d : z_i = z_j if and only if i ~_I j }

Recall that ρ̂ = (ρ̂_0, ρ̂_1, …, ρ̂_{d-1}) : C^{2r}(M×N, M) → C^r(M×N^{d-1}, M^d) with ρ̂_i as in (4.4), and that ev_ρ̂ is C^r by Corollary 4.2. Let ι_τ be the inclusion ι_τ : C^r(M×N^{d-1}, M^d) → C^{r-1}(~TM×N^{d-1}, M^d) given by ι_τ(F) = F ° (τ_M × Id) where τ_M : ~TM → M is the tangent bundle projection, thus ι_τ(F)(v,y) = F(x,y) where x ∈ M is the point such that v ∈ T_xM, ie x = τ_M(v). Since ι_τ is just composition with fixed maps, it is C^r. Now define τ'_I : D^{2r}(M×N, M) × C^{2r}(M,R) → C^{r-1}(~TM×~N^{d-1}, TR^α×M^d) by

τ'_I(f,φ) = (τ_I(f,φ), ι_τ ° ρ̂(f))

Note that we have restricted the domain of τ'_I(f,φ) to ~TM×~N^{d-1}, where ~N^{d-1} is defined in (5.1). In other words we only consider those y ∈ N^{d-1} in which no two components are the same. Since ~N^{d-1} is not compact there is no natural manifold structure on C^{r-1}(~TM×~N^{d-1}, TR^α×M^d) and we cannot speak of τ'_I being smooth. However, it is only really the evaluation map ev_{τ'_I} : D^{2r}(M×N, M) × C^{2r}(M,R) × ~TM×~N^{d-1} → TR^α×M^d that we need. This is given by

ev_{τ'_I}(f,φ,v,y) = (ev_{τ_I}(f,φ,v,y), ev_ρ̂(f, τ_M(v), y)) = (τ_I(f,φ)(v,y), ρ̂(f)(x,y)) = (T_xΦ_{f,φ,I,y}(v), (f^{(0)}(x,y), f^{(1)}(x,y), …, f^{(d-1)}(x,y)))

where x = τ_M(v). Since ev_{τ_I} (by Lemma 4.4), ev_ρ̂ (by Corollary 4.2) and τ_M are all C^{r-1}, so is ev_{τ'_I}. We now claim that

Proposition 5.4 Given any partition I of { 0, …, d-1 }, T_{(f,φ,v,y)}ev_{τ'_I} is surjective at all (f,φ,v,y) ∈ D^{2r}(M×N, M) × C^{2r}(M,R) × ~TM×~N^{d-1} such that ρ̂(f)(x,y) ∈ Δ_I. Hence, in particular, ev_{τ'_I} is transversal to L_I×Δ_I.

Proof Suppose that ev_{τ'_I}(f,φ,v,y) ∈ L_I×Δ_I. We aim to evaluate T_Ξ ev_{τ'_I}, where Ξ = (f,φ,v,y). We first consider the second component. Let ev_{ρ̂τ} be the evaluation map of ι_τ ° ρ̂, so that ev_{ρ̂τ}(f,v,y) = ev_ρ̂(f, τ_M(v), y). Thus by the chain rule T_{(f,v,y)}ev_{ρ̂τ}(η,w,w') = T_{(f,x,y)}ev_ρ̂(η, T_vτ_M(w), w') where x = τ_M(v). Let ρ̂(f)(x,y) = z ∈ Δ_I. Then by Lemma 5.3, given any u = (u_0, u_1, …, u_{d-1}) ∈ T_zΔ_I, there exists an η ∈ T_f D^{2r}(M×N, M) such that T_{(f,x,y)}ev_ρ̂(η, u_0, 0_y) = u. Furthermore τ_M is a submersion, ie T_vτ_M is surjective for all v ∈ TM. This can be shown using local co-ordinates, see for instance the paragraph following Lemma B.4 of [Stark, 1999]. Hence there exists a w ∈ T_v(TM) such that T_vτ_M(w) = u_0 and hence T_{(f,v,y)}ev_{ρ̂τ}(η, w, 0_y) = u (so that in particular T_{(f,v,y)}ev_{ρ̂τ} is surjective).

Turning to the first component, for any ξ ∈ T_φ C^{2r}(M,R) we have by linearity

T_Ξ ev_{τ_I}(η,ξ,w,0_y) = T_Ξ ev_{τ_I}(η, 0_φ, w, 0_y) + T_Ξ ev_{τ_I}(0_f, ξ, 0_v, 0_y) = ũ + T_Ξ ev_{τ_I}(0_f, ξ, 0_v, 0_y)

for some ũ ∈ T(TR^α), independent of ξ. By Corollary 5.2, T_Ξ ev_{τ_I}(0_f, ξ, 0_x, 0_y) = (κ(T_{x_{j_1}}ξ(v_{j_1})), κ(T_{x_{j_2}}ξ(v_{j_2})), …, κ(T_{x_{j_α}}ξ(v_{j_α}))), where x_i = f^{(i)}(x,y) and v_i = T_{(x,y)}f^{(i)}(v, 0_y). Since ev_{ρ̂τ}(f,v,y) = ev_ρ̂(f, τ_M(v), y) = ρ̂(f)(x,y) ∈ Δ_I, the points x_{j_1}, x_{j_2}, …, x_{j_α} are all distinct. For a fixed y, f^{(j_i)} is a diffeomorphism and ||v|| = 1 and hence v_{j_i} ≠ 0 for i = 1, …, α. Thus, by Corollary C.16 of [Stark, 1999], for any ū ∈ T_0(TR^α) there exists a ξ ∈ T_φ C^{2r}(M,R) such that T_Ξ ev_{τ_I}(0_f, ξ, 0_x, 0_y) = ū - ũ. Hence T_Ξ ev_{τ_I}(η,ξ,w,0_y) = ũ + (ū - ũ) = ū and so T_Ξ ev_{τ'_I}(η,ξ,w,0_y) = (ū, u). Thus T_Ξ ev_{τ'_I} is surjective, and in particular transversal to L_I×Δ_I, as required. ∎
