EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation

Michael J. Black and Allan D. Jepson

Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304


Department of Computer Science, University of Toronto, Ontario M5S 3H5

and Canadian Institute for Advanced Research


Abstract. This paper describes a new approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems and we reformulate the reconstruction problem as one of robust estimation. Second, we define a "subspace constancy assumption" that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define an EigenPyramid representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this "EigenTracking" technique to track and recognize the gestures of a moving hand.

1 Introduction

View-based object representations have found a number of expressions in the computer vision literature, in particular in the work on eigenspace representations [10, 13]. Eigenspace representations provide a compact approximate encoding of a large set of training images in terms of a small number of orthogonal basis images. These basis images span a subspace of the training set called the eigenspace and a linear combination of these images can be used to approximately reconstruct any of the training images. Previous work on eigenspace representations has focused on the problem of object recognition and has only peripherally addressed the problem of tracking objects over time. Additionally, these eigenspace reconstruction methods are not invariant to image transformations such as translation, scaling, and rotation. Previous approaches have typically assumed that the object of interest can be located in the scene, segmented, and transformed into a canonical form for matching with the eigenspace. In this paper we will present a robust statistical framework for reconstruction using the eigenspace that will generalize and extend the previous work in the area to ameliorate some of these problems. The work combines lines of research from object recognition using eigenspaces, parameterized optical flow models, and robust estimation techniques into a novel method for tracking objects using a view-based representation.

There are two primary observations underlying this work. First, standard eigenspace techniques rely on a least-squares fit between an image and the eigenspace [10] and this can lead to poor results when there is structured noise in the input image. We reformulate the eigenspace matching problem as one of robust estimation and show how it overcomes the problems of the least-squares approach. Second, we observe that rather than try to represent all possible views of an object from all possible viewing positions, it is more practical to represent a smaller set of canonical views and allow a parameterized transformation (e.g. affine) between an input image and the eigenspace. This allows a multiple-views plus transformation [12] model of object recognition. What this implies is that matching using an eigenspace representation involves both estimating the view as well as the transformation that takes this view into the image. We formulate this problem in a robust estimation framework and simultaneously solve for the view and the transformation. For a particular view of an object we define a subspace constancy assumption between the eigenspace and the image. This is analogous to the "brightness constancy assumption" used in optical flow estimation and it allows us to exploit parameterized optical flow techniques to recover the transformation between the eigenspace and the image. Recovering the view and transformation requires solving a non-linear optimization problem which we minimize using gradient descent with a continuation method. To account for large transformations between model and image we define an EigenPyramid representation and a coarse-to-fine matching scheme. This method enables the tracking of previously viewed objects undergoing general motion with respect to the camera. This approach, which we call EigenTracking, can be applied to both rigid and articulated objects and can be used for object and gesture recognition in video sequences.

2 Related Work

While eigenspaces are one promising candidate for a view-based object representation, there are still a number of technical problems that need to be solved before these techniques can be widely applied. First, the object must be located in the image. It is either assumed that the object can be detected by a simple process [9, 10] or through global search [9, 13]. Second, the object must be segmented from the background so that the reconstruction and recognition are based on the object and not the appearance of the background. Third, the input image must be transformed (for example by translation, rotation, and scaling) into some canonical form for matching. The robust formulation and continuous optimization framework presented here provide a local search method that is robust to background variation and simultaneously matches the eigenspace and image while solving for translation, rotation, and scale.

To recognize objects in novel views, traditional eigenspace methods build an eigenspace from a dense sampling of views [6, 7, 10]. The eigenspace coefficients of these views are used to define a surface in the space of coefficients which interpolates between views. The coefficients of novel views will hopefully lie on this surface. In our approach we represent views from only a few orientations and recognize objects in other orientations by recovering a parameterized transformation (or warp) between the image and the eigenspace. This is consistent with a model of human object recognition that suggests that objects are represented by a set of views corresponding to familiar orientations and that new views are transformed to one of these stored views for recognition [12].

To track objects over time, current approaches assume that simple motion detection and tracking approaches can be used to locate objects and then the eigenspace matching verifies the object identity [10, 13]. What these previous approaches have failed to exploit is that the eigenspace itself provides a representation (i.e. an image) of the object that can be used for tracking. We exploit our robust parameterized matching scheme to perform tracking of objects undergoing affine image distortions and changes of view.

This differs from traditional image-based motion and tracking techniques which typically fail in situations in which the viewpoint of the object changes over time. It also differs from tracking schemes using 3D models which work well for tracking simple rigid objects. The EigenTracking approach encodes the appearance of the object from multiple views rather than its structure.

Image-based tracking schemes that emphasize learning of views or motion have focused on region contours [1, 5]. In particular, Baumberg and Hogg [1] track articulated objects by fitting a spline to the silhouette of an object. They learn a view-based representation of people walking by computing an eigenspace representation of the knot points of the spline over many training images. Our work differs in that we use the brightness values within an image region rather than the region outline and we allow parameterized transformations of the input data in place of the standard preprocessing normalization.

3 Eigenspace Approaches

Given a set of images, eigenspace approaches construct a small set of basis images that characterize the majority of the variation in the training set and can be used to approximate any of the training images. For each image in a training set of p images we construct a 1D column vector by scanning the image in the standard lexicographic order. Each of these 1D vectors becomes a column in an n x p matrix A. We assume that the number of training images, p, is less than the number of pixels, n, and we use

Singular Value Decomposition (SVD) to decompose the matrix A as

A = U Σ V^T.    (1)

U is an orthogonal matrix of the same size as A representing the principal component directions in the training set. Σ is a diagonal matrix with singular values σ_1, σ_2, ..., σ_p sorted in decreasing order along the diagonal. The orthogonal matrix V^T encodes the coefficients to be used in expanding each column of A in terms of the principal component directions.

If the singular values σ_i, for i > t for some t, are small then, since the columns of U are orthonormal, we can approximate some new column e as

e* = Σ_{i=1}^{t} c_i U_i,    (2)

where the c_i are scalar values that can be computed by taking the dot product of e and the column U_i. This amounts to a projection of the input image, e, onto the subspace defined by the basis vectors.

Sample images from the training set:
First few principal components:
Fig. 1. Example that will be used to illustrate ideas throughout the paper.

For illustration we constructed an eigenspace representation for soda cans. Figure 1 (top row) shows some example soda can images in the training set which contained 200 images of Coke and 7UP cans viewed from the side. The eigenspace was constructed as described above and the first few principal components are shown in the bottom row of Figure 1. For the experiments in the remainder of the paper, 50 principal components were used for reconstruction. While fewer components could be used for recognition, EigenTracking will require a more accurate reconstruction.
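To make this construction concrete, the following sketch (illustrative NumPy code, not the authors' implementation; the function names and the choice to skip mean-centering are assumptions of the sketch) builds the eigenspace by SVD as in Equation (1) and reconstructs a new image by projecting it onto the first t basis vectors as in Equation (2).

```python
import numpy as np

def build_eigenspace(training_images, t):
    """Stack each training image as a column of A and keep the first t basis vectors."""
    A = np.stack([im.ravel() for im in training_images], axis=1).astype(float)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)   # Eq. (1): A = U S V^T
    return U[:, :t]                                    # principal component directions

def project_and_reconstruct(U_t, image):
    """Least-squares coefficients and reconstruction e* = sum_i c_i U_i, Eq. (2)."""
    e = image.ravel().astype(float)
    c = U_t.T @ e                     # dot products with the orthonormal basis vectors
    e_star = U_t @ c                  # approximation of e within the subspace
    return c, e_star.reshape(image.shape)
```

For the soda-can example above, t would be 50; fewer components suffice for recognition, but tracking benefits from the more accurate reconstruction.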

4 Robust Matching

The approximation of an image region by a linear combination of basis vectors can be thought of as "matching" between the eigenspace and the image. This section describes how this matching process can be made robust.

Let e be an input image region, written as a vector, that we wish to match to the eigenspace. For the standard approximation e* of e in Equation (2), the coefficients c_i are computed by taking the dot product of e with the U_i. This approximation corresponds to the least-squares estimate of the c_i [10]. In other words, the c_i are those that give a reconstructed image e* that minimizes the squared error E(c) between e and e* summed over the entire image:

E(c) = Σ_j (e_j - e*_j)^2.    (3)

This least-squares approximation works well when the input images have clearly segmented objects that look roughly like those used to build the eigenspace. But it is commonly known that least-squares is sensitive to gross errors, or "outliers" [8], and it is easy to construct situations in which the standard eigenspace reconstruction is a poor approximation to the input data. In particular, if the input image contains structured noise (e.g. from the background) that can be represented by the eigenspace then there may be multiple possible matches between the image and the eigenspace and the least-squares solution will return some combination of these views.

Fig. 2. A simple example. (a, b): Training images. (c, d): Eigenspace basis images.

Fig. 3. Reconstruction. (a): New test image. (b): Least-squares reconstruction. (c): Robust reconstruction. (d): Outliers (shown as black pixels).

For example consider the very simple training set in Figure 2 (a and b). The basis vectors in the eigenspace are shown in Figure 2 (c, d). Now, consider the test image in Figure 3a which does not look the same as either of the training images. The least-squares reconstruction shown in Figure 3b attempts to account for all the data but this cannot be done using a linear combination of the basis images. The robust formulation described below recovers the dominant feature which is the vertical bar (Figure 3c) and, to do so, treats the data to the right as outliers (black region in Figure 3d).


Fig. 4. Robust error norm ρ(x, σ) and its derivative ψ(x, σ).

To robustly estimate the coefficients c we replace the quadratic error norm in Equation (3) with a robust error norm, ρ, and minimize

E(c) = Σ_j ρ(e_j - e*_j, σ),    (4)

where σ is a scale parameter. For the experiments in this paper we take ρ to be the Geman-McClure norm

ρ(x, σ) = x^2 / (σ^2 + x^2),

for which residuals whose magnitude exceeds σ/√3, where the influence of the residual begins to decrease, can be viewed as outliers.

The computation of the coefficients c involves the minimization of the non-linear function in Equation (4). We perform this minimization using a simple gradient descent scheme with a continuation method that begins with a high value for σ and lowers it during the minimization (see [2, 3, 4] for details). The effect of this procedure is that initially no data are rejected as outliers and then gradually the influence of outliers is reduced. In our experiments we have observed that the robust estimates can tolerate a substantial fraction of the data being outliers.
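As a sketch of how these coefficients might be computed, the code below uses the Geman-McClure norm given above together with plain gradient descent and a continuation schedule on σ; the step size, the schedule, and the least-squares starting point are illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np

def rho(x, sigma):
    """Geman-McClure robust error norm."""
    return x**2 / (sigma**2 + x**2)

def psi(x, sigma):
    """Derivative (influence function) of rho; it decays for large residuals."""
    return 2 * x * sigma**2 / (sigma**2 + x**2)**2

def robust_coefficients(U_t, e, sigma_start=50.0, sigma_end=10.0,
                        n_stages=10, n_iters=15, step=1e-3):
    """Minimize E(c) = sum_j rho(e_j - (U_t c)_j, sigma) by gradient descent,
    lowering sigma at each stage (continuation method)."""
    c = U_t.T @ e                           # least-squares estimate as starting point
    for sigma in np.geomspace(sigma_start, sigma_end, n_stages):
        for _ in range(n_iters):
            r = e - U_t @ c                 # residuals of the current reconstruction
            grad = -U_t.T @ psi(r, sigma)   # dE/dc
            c = c - step * grad
    return c
```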

4.1 Outliers and Multiple Matches

As we saw in Figure 3 it is possible for an input image to contain a brightness pattern that is not well represented by any single "view". Given a robust match that recovers the "dominant" structure in the input image, we can detect those points that were treated as outliers. We define an outlier vector, or "mask", m as

m_i = 1 if |e_i - e*_i| ≤ σ/√3, and m_i = 0 otherwise.

Image   Least Squares   Robust 1   Outliers 1   Robust 2   Outliers 2
Fig. 5. Robust matching with structured noise.
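The mask is just a per-pixel threshold test on the residual of the robust reconstruction; a minimal sketch follows, where the σ/√3 threshold mirrors the outlier definition used above (an assumption if a different threshold is preferred).

```python
import numpy as np

def outlier_mask(U_t, c, e, sigma):
    """1 where the pixel is consistent with the reconstruction, 0 where it is an outlier."""
    residual = np.abs(e - U_t @ c)
    return (residual <= sigma / np.sqrt(3)).astype(np.uint8)
```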

5 Parameterized Matching

It is impractical to represent the views of an object at all possible scales and all orientations. One must be able to recognize a familiar object in a previously unseen pose and hence we would like to represent a small set of views and recover a transformation that maps an image into the eigenspace. In the previous section we formulated the matching problem as an explicit non-linear parameter estimation problem. In this section we will simply extend this problem formulation with the addition of a few more parameters representing the transformation between the image and the eigenspace.

To extend eigenspace methods to allow matching under some parametric transformation we need to formalize a notion of "brightness constancy" between the eigenspace and the image. This is a generalization of the notion of brightness constancy used in optical flow which states that the brightness of a pixel remains constant between frames but that its location may change. For eigenspaces we wish to say that there is a view of the object, as represented by some linear combination of the basis vectors U_i, such that pixels in the reconstruction are the same brightness as pixels in the image given the appropriate transformation. We call this the subspace constancy assumption.

Let U = [U_1, ..., U_t], c = [c_1, ..., c_t]^T, and

[Uc] = Σ_{i=1}^{t} c_i U_i,    (7)

where [Uc] is the approximated image for a particular set of coefficients, c. While [Uc] is a vector we can index into it as though it were an image. We define [Uc](x) to be the value of [Uc] at the position associated with pixel location x = (x, y).

Then the robust matching problem from the previous section can be written as

E(c) = Σ_x ρ(I(x) - [Uc](x), σ),    (8)

where I is a sub-image of some larger image. Pentland et al. [11] call the residual error E(c) the distance-from-feature-space (DFFS) and note that this error could be used for localization and detection by performing a global search in an image for the best matching sub-image. Moghaddam and Pentland extend this to search over scale by constructing multiple input images at various scales and searching over all of them simultaneously [9]. We take a different approach in the spirit of parameterized optical flow estimation. First we define the subspace constancy assumption by parameterizing the input image as follows

I(x + u(x, a)) = [Uc](x),    (9)

where u(x, a) = (u(x, a), v(x, a)) represents an image transformation (or motion), u and v represent the horizontal and vertical displacements at a pixel, and the parameters a are to be estimated. For example we may take u to be the affine transformation

u(x, a) = a_0 + a_1 x + a_2 y,
v(x, a) = a_3 + a_4 x + a_5 y,

where x and y are defined with respect to the image center. Equation (9) states that there should be some transformation, u(x, a), that, when applied to the image region I, makes I look like some image reconstructed using the eigenspace. This transformation can be thought of as warping the input image into the coordinate frame of the training data.
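As an illustration of this warp, the sketch below applies u(x, a) to an image region with bilinear interpolation; the use of SciPy's map_coordinates and the border handling are assumptions of the sketch rather than details given in the paper. Coordinates are taken relative to the region center, as in the parameterization above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def affine_warp(I, a):
    """Sample I at x + u(x, a) with u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y,
    where (x, y) are measured from the image center."""
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xc, yc = xs - (w - 1) / 2.0, ys - (h - 1) / 2.0      # center-relative coordinates
    u = a[0] + a[1] * xc + a[2] * yc
    v = a[3] + a[4] * xc + a[5] * yc
    # map_coordinates expects (row, col) = (y, x) sample positions
    return map_coordinates(I, [ys + v, xs + u], order=1, mode='nearest')
```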

Our goal is then to simultaneously find the c and a that minimize

E(c, a) = Σ_x ρ(I(x + u(x, a)) - [Uc](x), σ).    (10)

As opposed to the exhaustive search techniques used by previous approaches [9, 13], we derive and solve a continuous optimization problem.

First we rewrite the left hand side of Equation (9) using a first order Taylor series expansion

I(x) + I_x(x) u(x, a) + I_y(x) v(x, a) = [Uc](x),

where I_x and I_y are partial derivatives of the image in the x and y directions respectively. Reorganizing terms gives

I_x(x) u(x, a) + I_y(x) v(x, a) + (I(x) - [Uc](x)) = 0.    (11)

This is very similar to the standard optical flow constraint equation where [Uc] has replaced the next image and (I - [Uc]) takes the place of the "temporal derivative".

To recover the coefficients of the reconstruction as well as the transformation we combine the constraints over the entire image region and minimize

E(c, a) = Σ_x ρ(I_x(x) u(x, a) + I_y(x) v(x, a) + I(x) - [Uc](x), σ)    (12)

with respect to c and a. As in the previous section, this minimization is performed using a simple gradient descent scheme with a continuation method that gradually lowers σ. As better estimates of a are available, the input image is warped by the transformation u(x, a) and this warped image is used in the optimization. As this warping registers the image and the eigenspace, the approximation [Uc] gets better and better. This minimization and warping continues until convergence. The entire non-linear optimization scheme is described in greater detail in [2].
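A schematic version of this warp-and-refine loop is sketched below. It reuses the affine_warp and psi helpers sketched earlier, re-estimates c by simple projection (a robust re-estimate as in the earlier sketch could be substituted), and takes gradient steps on a; the step size and the way the continuation schedule is passed in are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def match_region(I_region, U_t, a_init, sigmas, n_iters=15, step_a=1e-7):
    """Refine view coefficients c and affine parameters a by minimizing Eq. (12):
    warp the image toward the eigenspace frame, re-estimate c, step a downhill."""
    h, w = I_region.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xc, yc = xs - (w - 1) / 2.0, ys - (h - 1) / 2.0     # center-relative coordinates
    a = np.asarray(a_init, dtype=float)
    c = U_t.T @ I_region.ravel()                        # initial view coefficients
    for sigma in sigmas:                                # continuation: sigma is lowered
        for _ in range(n_iters):
            W = affine_warp(I_region, a)                # I(x + u(x, a)) as an image
            c = U_t.T @ W.ravel()                       # coefficients of the warped view
            recon = (U_t @ c).reshape(h, w)             # [Uc](x) as an image
            r = W - recon                               # residual ("temporal" term)
            Iy, Ix = np.gradient(W)                     # spatial image derivatives
            infl = psi(r, sigma)                        # influence of each residual
            # Gradient of Eq. (12) with respect to the six affine parameters.
            grad = np.array([np.sum(infl * Ix),      np.sum(infl * Ix * xc),
                             np.sum(infl * Ix * yc), np.sum(infl * Iy),
                             np.sum(infl * Iy * xc), np.sum(infl * Iy * yc)])
            a -= step_a * grad
    return c, a
```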

Note that this optimization scheme will not perform a global search to "find" the image region that matches the stored representation. Rather, given an initial guess, it will refine the pose and reconstruction. While the initial guess can be fairly coarse as described below, the approach described here does not obviate the need for global search techniques but rather complements them. In particular, the method will be useful for tracking an object where a reasonable initial guess is typically available.

EigenPyramids. As in the case of optical flow, the constraint equation (11) is only valid for small transformations. The recovery of transformations that result in large pixel differences necessitates a coarse-to-fine strategy. For every image in the training set we construct a pyramid of images by spatial filtering and sub-sampling (Figure 6). The images at each level in the pyramid form distinct training sets and at each level SVD is used to construct an eigenspace description of that level.

The input image is similarly smoothed and subsampled. The coarse-level input image is then matched against the coarse-level eigenspace and the values of c and a are estimated at this level. The new values of a are then projected to the next level (in the case of the affine transformation the translation parameters a_0 and a_3 are multiplied by 2). This a is then used to warp the input image towards the eigenspace and the value of c is estimated and the a_i are refined. The process continues to the finest level.

Fig. 6. Example of EigenPyramids. a: Sample images from the training set. b: First few principal components in the EigenPyramid.
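The EigenPyramid scheme can be sketched as follows; the Gaussian filter, the factor-of-two subsampling, and the assumption that each pyramid level of the input region matches the image size used to build that level's eigenspace are illustrative choices, and match_region is the sketch defined earlier.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, n_levels):
    """Level 0 is the finest; each coarser level is blurred and subsampled by 2."""
    levels = [image.astype(float)]
    for _ in range(n_levels - 1):
        levels.append(gaussian_filter(levels[-1], sigma=1.0)[::2, ::2])
    return levels

def coarse_to_fine_match(I_region, eigenspaces, a_init, sigmas):
    """eigenspaces[k] is the basis built by SVD from level k of every training
    image's pyramid; matching starts at the coarsest level and works down."""
    n_levels = len(eigenspaces)
    pyramid = gaussian_pyramid(I_region, n_levels)
    a = np.asarray(a_init, dtype=float)
    c = None
    for level in range(n_levels - 1, -1, -1):            # coarse to fine
        c, a = match_region(pyramid[level], eigenspaces[level], a, sigmas)
        if level > 0:
            a[0] *= 2.0                                   # translation terms double
            a[3] *= 2.0                                   # when moving to a finer level
    return c, a
```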

6 EigenTracking

The robust parameterized matching scheme described in the previous section can be used to track objects undergoing changes in viewpoint or changes in structure. As an object moves and the view of the object changes, we recover both the current view of the object and the transformation between the current view and the image. It is important to note that no "image motion" is being used to "track" the objects in this section. The tracking is achieved entirely by the parameterized matching between the eigenspace and the image. We call this EigenTracking to emphasize that a view-based representation is being used to track an object over time.

For the experiments here a three-level pyramid was used and the value of σ started at a high value and was lowered by a fixed factor at each of 15 stages. The values of c and a were updated using 15 iterations of the descent scheme at each stage, and each pyramid level. The minimization was terminated if a convergence criterion was met. The algorithm was given a rough initial guess of the transformation between the first image and the eigenspace. From then on the algorithm automatically tracked the object by estimating c and a for each frame. No prediction scheme was used and the motion ranged from 0 to about 4 pixels per frame. For these experiments we restricted the transformation to translation, rotation, and scale.
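Tracking then amounts to re-running the match on each new frame, seeded with the previous frame's estimate. The sketch below assumes each frame has already been cropped to a fixed-size region of interest around the object (a simplification of the sketches above) and, consistent with the experiments described here, uses no prediction scheme.

```python
import numpy as np

def eigentrack(frames, eigenspaces, a_init, sigmas):
    """Per-frame EigenTracking: each frame is matched starting from the previous
    frame's transformation; returns the coefficients and parameters per frame."""
    a = a_init
    results = []
    for frame in frames:
        c, a = coarse_to_fine_match(frame, eigenspaces, a, sigmas)
        results.append((c, np.array(a)))
    return results
```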

6.1 Pickup Sequence

First we consider a simple example in which a hand picks up a soda can. The can undergoes translation and rotation in the image plane (Figure 7). The region corresponding to the eigenspace is displayed as a white box in the image. This box is generated by projecting the region corresponding to the eigenspace onto the image using the inverse of the estimated transformation between the image and the eigenspace. This projection serves to illustrate the accuracy of the recovered transformation. Beside each image is shown the robust reconstruction of the image region within the box.

064  084  104  124
Fig. 7. Pickup Sequence. EigenTracking with translation and rotation in the image plane. Every 20 frames in the 75 frame sequence are shown.

6.2 Tracking a Rotating Object

Figure 8 shows the tracking of a soda can that translates left and right while moving in depth over 200 frames. While the can is changing position relative to the camera it is also undergoing rotations about its major axis. What this means is that the traditional brightness constancy assumption of optical flow will not track the "can" but rather the "texture" on the can. The subspace constancy assumption, on the other hand, means that we will recover the transformation between our eigenspace representation of the can and the image. Hence, it is the "can" that is tracked rather than the "texture".

More details are provided to the right of the images. On the left of each box is the "stabilized" image which shows how the original image is "warped" into the coordinate frame of the eigenspace. Notice that the background differs over time as does the view of the can, but that the can itself is in the same position and at the same scale. The middle image in each box is the robust reconstruction of the image region being tracked. On the right of each box (in black) are the "outliers" where the observed image and the reconstruction differed by more than the outlier threshold.

028  084  140  196
Fig. 8. EigenTracking with translation and divergence over 200 frames. The soda can rotates about its major axis while moving relative to the camera.

6.3 Tracking and Recognizing Hand Gestures

A 100 image training set was collected by fixing the wrist position and recording a hand alternating between the four gestures shown in Figure 9. The eigenspace was constructed and 25 basis vectors were used for reconstruction. In our preliminary experiments we have found brightness images to provide sufficient information for both recognition and tracking of hand gestures (cf. [9]).

Fig. 9. Examples of the four hand gestures used to construct the eigenspace.

Figure 10 shows the tracking algorithm applied to a 100 image test sequence in which a moving hand executed the four gestures. The motion in this sequence was large (as much as 15 pixels per frame) and the hand moved while changing gestures. The figure shows the backprojected box corresponding to the eigenspace model and, to the right, on top, the reconstructed image. Below the reconstructed image is the "closest" image in the original training set (taken to be the smallest Euclidean distance in the space of coefficients). While more work must be done, this example illustrates how eigenspace approaches might provide a view-based representation of articulated objects. By allowing parameterized transformations we can use this representation to track and recognize human gestures.
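Recognition in this framework reduces to a nearest-neighbour lookup in the space of coefficients; a minimal sketch follows, where train_coeffs and train_labels are hypothetical arrays holding the training images' coefficient vectors and gesture labels.

```python
import numpy as np

def recognize(c, train_coeffs, train_labels):
    """Return the label of the training image whose coefficient vector is
    closest (in Euclidean distance) to the estimated coefficients c."""
    dists = np.linalg.norm(train_coeffs - c, axis=1)
    return train_labels[int(np.argmin(dists))]
```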

7 Conclusions

This paper has described robust eigenspace matching, the recovery of parameterized transformations between an image region and an eigenspace representation, and the application of these ideas to EigenTracking and gesture recognition. These ideas extend the useful applications of eigenspace approaches and provide a new form of tracking for previously viewed objects. In particular, the robust formulation of the subspace matching problem extends eigenspace methods to situations involving occlusion, background clutter, noise, etc. Currently these problems pose serious limitations to the usefulness of the eigenspace approach. Furthermore, the recovery of parameterized transformations in a continuous optimization framework provides an implementation of a views+transformation model for object recognition. In this model a small number of views are represented and the transformation between the image and the nearest view is recovered. Finally, the experiments in the paper have demonstrated how a view-based representation can be used to track objects, such as human hands, undergoing both changes in viewpoint and changes in pose.

References

1. A. Baumberg and D. Hogg. Learning flexible models from image sequences. In J. Eklundh, editor, ECCV-94, vol. 800 of LNCS-Series, pp. 299–308, Stockholm, 1994.

2. M. J. Black and A. D. Jepson. EigenTracking: Robust matching and tracking of articulated objects using a view-based representation. Tech. Report T95-00515, Xerox PARC, Dec. 1995.

3. M. Black and P. Anandan. The robust estimation of multiple motions: Affine and piecewise-smooth flow fields. Computer Vision and Image Understanding, in press. Also Tech. Report P93-00104, Xerox PARC, Dec. 1993.

4. M. J. Black and P. Anandan. A framework for the robust estimation of optical flow. In ICCV-93, pp. 231–236, Berlin, May 1993.

5. A. Blake, M. Isard, and D. Reynard. Learning to track curves in motion. In Proceedings of the IEEE Conf. Decision Theory and Control, pp. 3788–3793, 1994.

6. A. F. Bobick and A. D. Wilson. A state-based technique for the summarization and recognition of gesture. In ICCV-95, pp. 382–388, Boston, June 1995.


010  024  036  046  057  074  080  099
Fig. 10. Tracking and recognizing hand gestures in video.

7. C. Bregler and S. M. Omohundro. Surface learning with applications to lip reading. Advances in Neural Information Processing Systems 6, pp. 43–50, San Francisco, 1994.

8. F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. Robust Statistics: The Approach Based on Influence Functions. John Wiley and Sons, New York, NY, 1986.

9. B. Moghaddam and A. Pentland. Probabilistic visual learning for object detection. In ICCV-95, pp. 786–793, Boston, June 1995.

10. H. Murase and S. Nayar. Visual learning and recognition of 3-D objects from appearance. International Journal of Computer Vision, 14:5–24, 1995.

11. A. Pentland, B. Moghaddam, and T. Starner. View-based and modular eigenspaces for face recognition. In CVPR-94, pp. 84–91, Seattle, June 1994.

12. M. J. Tarr and S. Pinker. Mental rotation and orientation-dependence in shape recognition. Cognitive Psychology, 21:233–282, 1989.

13. M. Turk and A. Pentland. Face recognition using eigenfaces. In CVPR-91, pp. 586–591, Maui, June 1991.

This article was processed using the LaTeX macro package with ECCV'96 style.
