Efficient iris segmentation method in unconstrained environments

Shaaban A. Sahmoud, Ibrahim S. Abuhaiba

Islamic University of Gaza, Jameaa Street, Islamic University, Gaza 972, Palestine

Article info

Article history:
Received 16 July 2012
Received in revised form 31 March 2013
Accepted 2 June 2013
Available online 15 June 2013

Keywords:

Biometrics

Iris recognition

Iris segmentation

Non-cooperative iris recognition

Eyelid localization

Abstract

Recently, iris recognition systems have gained increased attention, especially in non-cooperative environments. One of the crucial steps in an iris recognition system is iris segmentation, because it significantly affects the accuracy of the feature extraction and iris matching steps. Traditional iris segmentation methods provide excellent results when iris images are captured using near infrared cameras under ideal imaging conditions, but the accuracy of these algorithms significantly decreases when the iris images are taken in visible wavelength under non-ideal imaging conditions. In this paper, a new algorithm is proposed to segment iris images captured in visible wavelength under unconstrained environments. The proposed algorithm reduces the error percentage even in the presence of noise types such as iris obstructions and specular reflection. The proposed algorithm starts by determining the expected region of the iris using the K-means clustering algorithm. The Circular Hough Transform (CHT) is then employed to estimate the iris radius and center. A new efficient algorithm is developed to detect and isolate the upper eyelids. Finally, the non-iris regions are removed. Results of applying the proposed algorithm on UBIRIS iris image databases demonstrate that it improves the segmentation accuracy and time.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

With increasing attention to security, the need for an automatic personal identification system based on biometrics has increased, because traditional identification systems based on cards or passwords can be broken by losing cards, stealing them or forgetting passwords. Iris recognition is becoming one of the most important biometrics used in recognition. This importance is due to its high reliability for personal identification [1–3]. Human iris patterns are very stable throughout a person's life [4,5]. Furthermore, each iris is unique, and even the irises of identical twins are different. This is because the human iris is a complex pattern and contains many distinctive features such as arching ligaments, furrows, ridges, crypts, rings, freckles and a zigzag collarette; thus iris patterns possess a high degree of randomness.

Since the concept of automatic iris recognition was proposed in 1987 [4], many researchers have proposed powerful algorithms in this field. Most of these algorithms need user cooperation to get a high-quality image and to provide the users with feedback to ensure that they are properly positioned for image capture. The most relevant and most widely used algorithms in current real applications are those developed by Daugman, which require a near infrared (NIR) camera to capture the iris images.

When current iris recognition algorithms deal with noisy iris images taken in visible wavelength under non-ideal imaging conditions, their accuracy significantly decreases, because the segmentation stage is strongly affected by noise and non-ideal lighting conditions. Fig. 1(a) shows an image taken under ideal conditions with an NIR camera, whereas the image in Fig. 1(b) was taken in visible wavelength under non-ideal conditions and is thus extremely challenging for the segmentation process.

The main motivation of this paper is to propose a robust iris segmentation algorithm able to deal with highly noisy iris images captured under unconstrained conditions and non-ideal environments, which cannot be handled by current iris segmentation algorithms such as Daugman's. The CHT is the best circle-localizing operator for noisy images, but it is computationally very expensive. Therefore, the proposed algorithm adds a new pre-processing step that uses the K-means algorithm to divide the iris image into three regions, namely the iris region, the skin region and the sclera region. This K-means pre-processing step excludes the non-iris regions, which cause many errors, and decreases the searching time of the CHT. Furthermore, a number of new methods are proposed to enhance the segmentation performance in noisy images, such as a method to localize the upper eyelid by detecting it in the sclera region, which enables the algorithm to deal more effectively with noisy iris images. The proposed algorithm segments the noisy iris images and reduces the execution time, enabling it to be used in real-time applications.

The rest of the paper is organized as follows. Section 2 gives a brief survey of related work in iris segmentation. Section 3


explains the proposed iris segmentation algorithm. Experimental results are presented in Section 4, and Section 5 concludes the paper. References are listed in Section 6.

2. Related work

Many researchers have contributed much to iris segmentation [6]. They have used different techniques to increase the performance of their algorithms. Previous algorithms can be classified according to two criteria. The first classification is according to the region where segmentation starts, whereas the second is according to the operators or techniques used to describe the shapes inside the eye. In this section, we present the most prominent works in these two classifications.

2.1. The region of starting the segmentation

There are three categories of researchers depending on where they start the segmentation. The first category of researchers starts from the pupil [7,8] because it is the darkest region in the image. Based on this fact, the pupil is localized first, and then the iris is determined using different techniques. Finally, noise is detected and isolated from the iris region. In the second category [9], the segmentation starts from the sclera region because it is found to be less saturated (white) than other parts of the eye, and then the iris is detected using any type of operator. Finally, the pupil and noise are detected and isolated from the iris region. The third category [10,11] of researchers searches for the iris region directly by using edge operators or by applying clustering algorithms to extract iris texture features.

2.2. The techniques used to describe the shapes inside the eye

According to the techniques and operators that are used in iris segmentation, there are two common approaches to localizing the iris region. The first approach [12,13] applies a type of edge detection followed by the CHT or one of its derivatives to detect the shape of the iris and the pupil. A final stage can be applied to correct the shape of the iris or the pupil. The main problem with this approach is that the CHT is, in practice, very expensive in time. The second approach [7,14–16] uses different types of operators to detect the edges of the iris, like the Daugman Integro-Differential operator [17] or the Camus and Wildes [18] operator, and then the pupil and noise are detected and isolated. However, these operators are affected by noise and by the separability between the iris and the sclera. As a result, they cannot be used with noisy iris images.

3. Overview of the proposed approach

The proposed segmentation algorithm avoids starting from the pupil because it is not always the darkest region in noisy iris images taken in visible wavelength. Further, the algorithm avoids starting from the sclera because it can be covered by dark colors, which causes errors in determining the iris region and thus in the segmentation process.

The algorithm starts by determining the expected region of the iris using the K-means clustering algorithm; then, vertical Canny edge detection is applied on the output image to produce edge maps. The K-means algorithm and the vertical edge map are used to reduce the searching time of the CHT, which is applied on the edge image to estimate the iris center and radius. Therefore, the input image to the CHT is the binary edge image that comes from applying the edge detection on the masked region obtained from K-means. After determining the iris circle, new techniques are applied to isolate the noisy factors like eyelids, eyelashes, luminance and reflections. Finally, the pupil region is removed from the iris region. Fig. 2 shows the steps of the proposed segmentation algorithm. These steps are explained in detail in the following paragraphs.

3.1. Determining iris region

One of the most important sources of error in segmentation is the high local contrast occurring on non-iris regions. These sources may include eyelashes, the eyebrow, a glasses frame or white areas due to luminance on the skin behind the eye region. Therefore, excluding the non-iris regions before the iris segmentation step avoids such segmentation errors. In other words, if the image is divided into three regions, namely the iris region, the skin region and the sclera region, then both the segmentation errors and the searching time of the overall segmentation process are reduced.

The image K-means clustering algorithm is used to divide the eye image into three different regions. The first region, which has small intensity values, consists of the iris including the pupil and eyelashes. The second region, which has high intensity values, consists of the sclera and some highlights or luminance reflections. A third region, lying between the previous two, is the skin region. The image K-means algorithm is an iterative technique used to divide an image into K clusters by assigning each point to the cluster whose center has the smallest distance. The center is the arithmetic mean of all the points in the cluster. The distance is the absolute difference between a pixel and a cluster center, and it is based on pixel intensity in our algorithm. The image K-means clustering algorithm is effective because our main concern is the darkest region only. We experimentally found the optimum number of clusters to be three. The basic steps of the image K-means algorithm are:

(1) Compute the intensity distribution.
(2) Initialize the centroids with k random intensities.
(3) Repeat steps (4) and (5) until there is no change in the cluster labels.
(4) Cluster the points based on the distance of their intensities from the centroids:

c^{(i)} = \arg\min_j \| x^{(i)} - \mu_j \|^2    (1)

(5) Compute the new centroid of each cluster:

\mu_j = \frac{\sum_{i=1}^{m} 1\{ c^{(i)} = j \}\, x^{(i)}}{\sum_{i=1}^{m} 1\{ c^{(i)} = j \}}    (2)

where i iterates over all the intensities, j iterates over all the centroids, and \mu_j are the centroid intensities.
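For concreteness, the clustering step can be prototyped as one-dimensional K-means over pixel intensities. The following is only a minimal sketch of steps (1)–(5), not the authors' MATLAB implementation; the function name, the fixed iteration cap and the application to a single-channel image with k = 3 are assumptions made here for illustration.

import numpy as np

def kmeans_intensity(channel, k=3, max_iters=50, seed=0):
    """1-D K-means over pixel intensities (illustrative sketch of steps (1)-(5))."""
    rng = np.random.default_rng(seed)
    x = channel.reshape(-1).astype(np.float64)
    centroids = rng.choice(x, size=k, replace=False)        # step (2): random initial centroids
    labels = np.full(x.shape, -1, dtype=np.int64)
    for _ in range(max_iters):                              # step (3): iterate until labels stop changing
        dist = np.abs(x[:, None] - centroids[None, :])
        new_labels = dist.argmin(axis=1)                    # step (4): assign to nearest centroid, Eq. (1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):                                  # step (5): recompute centroids, Eq. (2)
            members = x[labels == j]
            if members.size:
                centroids[j] = members.mean()
    return labels.reshape(channel.shape), centroids

# The cluster with the smallest centroid (darkest pixels) is the candidate iris region:
# labels, centroids = kmeans_intensity(red_channel)
# iris_mask = labels == centroids.argmin()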

In the following paragraphs we discuss in detail the procedure for determining the iris region. First, the red channel is separated from the RGB color space of the image, because it contains most of the iris details. Then, the image K-means clustering algorithm is applied on the red channel of the image. After clustering, the resulting image is morphologically processed to delete small blocks and noise; of the remaining blocks, the block nearest to the image center is kept, which discards the eyebrow region. Fig. 3 shows the result of applying the clustering algorithm on some images. White regions in Fig. 3(b) represent the darkest region, black regions represent the sclera and some highlights or luminance reflections, and gray regions represent the skin. It is seen in Fig. 3 that the white area covers the iris, the eyelashes and sometimes the eyebrow, while excluding luminance and specular reflections. This exclusion is very helpful in reducing the processed area: the clustering algorithm was found to reduce the number of handled pixels by more than 70%. Consequently, the searching time of the next steps is reduced.

Fig. 1. Comparison of iris images from UBIRIS.v2 and CASIA (version 4). (a) Image from UBIRIS.v2. (b) Image from CASIA (version 4).

3.2. Detecting edges

To find the edge points in the iris image, Canny edge detection is used [19]. The implemented Canny edge detection has six arguments. The upper threshold and the lower threshold inputs are experimentally adjusted to make the algorithm suitable for noisy iris images. Because the vertical edges are more significant than the horizontal edges, a high value for the vertical edge weight and a low value for the horizontal edge weight are assigned to extract the iris–sclera border. These values are adjusted only once for the whole database and do not need to be computed for each iris image. This process decreases the errors resulting from the horizontal edges due to eyelashes and eyelids.
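OpenCV's built-in Canny does not expose separate vertical and horizontal weights, so the sketch below only approximates the idea with weighted Sobel gradients followed by a single threshold, applied to the median-filtered Y channel and restricted to the K-means mask described in the next paragraph. It is a rough stand-in under assumed weights, kernel sizes and threshold, not the six-argument Canny described above.

import cv2
import numpy as np

def vertical_edge_map(bgr, iris_mask, w_vert=1.0, w_horiz=0.1, thresh=60):
    """Rough stand-in for the weighted (vertical-biased) Canny step."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y = cv2.medianBlur(ycrcb[:, :, 0].copy(), 5)   # Y (luma) channel, median-smoothed
    dx = cv2.Sobel(y, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient -> vertical edges
    dy = cv2.Sobel(y, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient   -> horizontal edges
    mag = w_vert * np.abs(dx) + w_horiz * np.abs(dy)
    edges = (mag > thresh).astype(np.uint8) * 255
    edges[iris_mask == 0] = 0                      # keep edges only inside the clustered region
    return edges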

Instead of using the grayscale image, it was found experimentally that the Y component of the YCbCr color space is the best image to use for detecting the edges. This is done by converting the RGB iris image to the YCbCr color space and then separating the Y component. In order to smooth the image and handle small noise, a median filter is applied on the Y component image. Note that the edge detection is applied only on the reduced area resulting from the clustering step. Fig. 4 shows the result of applying Canny edge detection on sample images. Fig. 4(a) shows the real images and Fig. 4(b) shows the Y component after converting the images to the YCbCr color space. It is noted that the Y component reduces the effect of luminance, specular reflection and the red regions in the sclera. As a result, fewer points are processed by the CHT. Fig. 4(c) shows the images after applying Canny edge detection. It is seen that the two biggest connected components are the iris boundaries, in spite of the existence of some small noise components, which are reduced using morphological operations as shown in Fig. 4(d). It was found that the edge points that will be processed or searched by the CHT were reduced by more than 90%. Furthermore, the edge points are reduced by scaling the image by a factor of 0.5, because it was found that reducing the scale factor to a value less than 0.5 causes many edge points to disappear and thus causes errors in localizing the iris boundary.

Fig. 2. The stages of the proposed iris segmentation method (captured image, K-means clustering to get the iris region, deletion of small blocks and noise, removal of noise (upper and lower eyelids, luminance), pupil localization and removal, segmented image).

Fig. 3. Illustration of the results of K-means clustering. (a) Real images. (b) Clustering result images; white regions represent the estimated iris region.

Fig. 4. Illustration of the results of applying Canny edge detection on three images. (a) Real images. (b) Y component of the image after converting it to the YCbCr color space. (c) Binary image resulting from Canny edge detection using a scale factor of 0.5. (d) Binary image after removing small blocks and noise.

3.3. Applying the circular Hough transform

The CHT [20] operates in an R^3 parameter space, so its complexity is O(n^3). Therefore, three methods explained in the previous steps are used to reduce the execution time:

(1) Scaling factor: to reduce the image size and thereby the number of edge points.
(2) K-means clustering: to reduce the searching area and the edge points when Canny edge detection is applied.
(3) Morphological operations: to remove small blocks and noise from the binary image.

After applying the CHT on the binary edge image, the maximum group of parameters (a, b, r) is selected from the accumulator, and then the Cartesian parameters (x, y, r) are found to localize the iris. Fig. 5 shows samples of segmented irises from the UBIRIS database. As shown in this figure, the iris boundary is detected and the white circles precisely fit the irises, despite the presence of some noisy factors like specular reflections and iris occlusion by eyelids and eyelashes. Moreover, it is noted that the irises are correctly localized regardless of their size, which is another benefit of the proposed algorithm.

Fig. 5. Samples of segmented irises from the UBIRIS database.
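The selection of the maximum accumulator cell can be illustrated with a deliberately naive circular Hough transform over the binary edge map. This is a reference sketch only (O(points × radii × angles), with no optimizations and an assumed function name), not the authors' implementation; in practice the reduced edge map and radius range described above, or a library routine such as OpenCV's HoughCircles, would be used.

import numpy as np

def circular_hough(edges, r_min, r_max, n_theta=180):
    """Tiny accumulator-based circular Hough transform (illustrative only).

    Votes every edge point for all centers at radii r_min..r_max and returns
    the (x_center, y_center, radius) with the maximum number of votes.
    """
    h, w = edges.shape
    ys, xs = np.nonzero(edges)
    radii = np.arange(r_min, r_max + 1)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    for ri, r in enumerate(radii):
        # candidate centers for this radius: one vote per (edge point, angle)
        a = (xs[:, None] - r * cos_t[None, :]).round().astype(int).ravel()
        b = (ys[:, None] - r * sin_t[None, :]).round().astype(int).ravel()
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        np.add.at(acc[ri], (b[ok], a[ok]), 1)
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)   # maximum of the accumulator
    return cx, cy, radii[ri]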

3.4. Isolating noise

In non-cooperative iris recognition, the user has little or even no active participation in the image capturing process [21]. As a result, the iris images are often captured with more noisy factors, such as reflections, occlusions by eyelids or eyelashes, shadows, etc. It has been reported that most localization errors occur on non-iris regions due to their high local contrast. Therefore, to avoid such localization errors, the non-iris regions should be excluded and all sources of errors must be handled. In this section, we explain how the proposed segmentation method handles each of the error sources.

3.4.1. Upper eyelid localization

When iris images are captured in ideal environments, researchers have used many methods to localize the eyelids, e.g. edge detection, the Integro-differential operator and the Line Hough Transform (LHT). However, these methods are not effective on noisy iris images, because the intensity contrast between the iris and the eyelid can be very low, especially for heavily pigmented dark irises, as in Fig. 1(a). A new method is proposed to localize the eyelids by detecting them in the sclera region, because the intensity contrast between the sclera and the upper eyelid is higher than that between the iris and the upper eyelid. The following steps explain the upper eyelid localization algorithm:

(1) Isolate two small rectangles from the outer two sides of the iris. Each of these rectangles has a length equal to the iris radius and a width equal to one-third of the iris radius, as shown in Fig. 6. The coordinates of the two lower corners of the rectangle on the outer right side of the iris are (x + r, y), (x + r + r/3, y), and the coordinates of the lower corners of the rectangle on the outer left side of the iris are (x − r, y), (x − r − r/3, y), where the point (x, y) is the center of the iris circle and r is the radius of the iris.

(2) Apply horizontal Canny edge detection on the two rectangles and isolate the noise using morphological operations.

(3) Determine the coordinates of the upper eyelid on both rectangles, assuming that it is the center of the biggest horizontal edge line in each rectangle.

(4) Draw an arc that passes through the two upper eyelid coordinates with a radius equal to double the iris radius. The center of the arc is computed as follows. Let the coordinates of the upper eyelid on the first rectangle be (x1, y1) and the coordinates of the upper eyelid on the second rectangle be (x2, y2). The line passing through these two points is given by the equation

ax + by + c_{hori} = 0    (3)

where a = y2 − y1, b = x1 − x2 and c_{hori} = x2 y1 − x1 y2. Let (p, q) be the midpoint of the line joining (x1, y1) and (x2, y2). The equation of the perpendicular to this line at the midpoint is

bx − ay + c_{vert} = 0    (4)

where c_{vert} = aq − bp. Then, the distance between the arc center and the midpoint of the line joining (x1, y1) and (x2, y2) is twice the iris radius, as shown in Fig. 6.

Fig. 6. Upper eyelid localization model.
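A short geometric helper makes step (4) concrete. It is an illustrative interpretation only: the paper states that the arc center lies at a distance of twice the iris radius from the chord midpoint, while the sketch below places the center on the perpendicular bisector of Eqs. (3)–(4) at the offset that makes an arc of radius 2r pass exactly through both detected eyelid points. The function name and the choice of taking the center below the chord (image y axis pointing down) are assumptions.

import numpy as np

def eyelid_arc_center(p1, p2, iris_radius):
    """Center of the upper-eyelid arc of radius R = 2*r through points p1, p2 (sketch)."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2                       # coefficients of the chord line (Eq. (3))
    p, q = (x1 + x2) / 2.0, (y1 + y2) / 2.0       # chord midpoint
    R = 2.0 * iris_radius
    half_chord = 0.5 * np.hypot(x2 - x1, y2 - y1)
    offset = np.sqrt(max(R ** 2 - half_chord ** 2, 0.0))
    n = np.array([a, b], dtype=float)             # direction of the perpendicular bisector (Eq. (4))
    norm = np.linalg.norm(n)
    if norm == 0:
        raise ValueError("p1 and p2 must be distinct points")
    n /= norm
    if n[1] < 0:                                  # take the center below the chord (image y grows downward)
        n = -n
    center = np.array([p, q]) + offset * n
    return center, R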

Fig. 7 shows sample images of UBIRIS v2 and v1 after using the proposed method to localize the upper eyelid of the iris. Note that, due to the use of the arc in the proposed algorithm, the iris region does not lose non-noise regions as it does when the LHT algorithm is used. It is seen in Fig. 7 that the proposed algorithm isolates the upper eyelid accurately, even with very low intensity contrast between the iris and the eyelid, whereas standard algorithms like the LHT and the Integro-Differential operator of Daugman cannot isolate it. The effectiveness of the proposed algorithm is due to the usage of the intensity contrast between the sclera and the upper eyelid rather than between the iris and the upper eyelid; therefore the algorithm overcomes the segmentation errors resulting from the low intensity contrast between the iris and the upper eyelid. Furthermore, in contrast to other algorithms, the proposed algorithm still works when a huge area of the iris is occluded by the upper eyelid.

Fig. 7. Upper eyelid localization algorithm. (a) Segmented images from UBIRIS v2. (b) Segmented images from UBIRIS v2 after using the proposed upper eyelid localization. (c) Segmented images from UBIRIS v1. (d) Segmented images from UBIRIS v1 after using the proposed upper eyelid localization.

3.4.2. Lower eyelid localization

To localize the lower eyelid of the iris, the LHT is used, because most of the lower eyelid occlusions are approximately linear. First, Canny edge detection is applied to the lower half of the iris, and then the best line is found using the LHT. If the vote of the line is less than a certain value, then we assume that lower eyelid occlusion does not occur.
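A compact way to prototype this step is with OpenCV's probabilistic Hough lines on the lower half of the iris, where the vote threshold plays the role of the minimum-vote test mentioned above. This is only a sketch under assumed parameter values (thresholds, minimum line length), and it uses HoughLinesP rather than the standard LHT, so it is not the authors' exact procedure.

import cv2
import numpy as np

def lower_eyelid_line(gray, cx, cy, r, vote_threshold=40):
    """Find the (roughly linear) lower-eyelid border with a line Hough transform."""
    h, w = gray.shape
    y0, y1 = int(cy), min(int(cy + r), h)
    x0, x1 = max(int(cx - r), 0), min(int(cx + r), w)
    roi = gray[y0:y1, x0:x1].copy()               # lower half of the iris only
    edges = cv2.Canny(roi, 40, 100)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, vote_threshold,
                            minLineLength=int(r / 2), maxLineGap=5)
    if lines is None:
        return None                               # not enough votes: assume no occlusion
    # take the longest detected segment and map it back to image coordinates
    best = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return (best[0] + x0, best[1] + y0, best[2] + x0, best[3] + y0)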

Fig. 8 shows some examples after localizing the lower eyelid. The blue line represents the largest edge line that separates the iris and the lower eyelid. It is worth noting that the lower eyelid isolation process is easier than that of the upper eyelid, because there is no eyelash occlusion and the area of the iris occluded by the lower eyelid is usually smaller than that occluded by the upper eyelid.

Fig. 8. Lower eyelid localization samples using the UBIRIS v1 database.

3.4.3. Specular reflection isolation

Specular reflections can be a serious problem when noisy images are processed by the iris recognition system. A new simple reflection removal method is proposed in two steps:

(1) Compute the average intensity of the iris region in the three RGB color channels (after upper and lower eyelid removal).
(2) Test the intensity of each pixel in the iris; if the intensity of the pixel in a certain color channel is greater than the average intensity computed in the first step plus a constant value, then consider this pixel as reflection noise. The constant value is adjusted only once for the whole iris database.

Fig. 9 shows some images after localizing the specular reflections. The pixels which are marked with red are masked so that they are isolated when the iris template code is extracted. The specular reflection and highlight regions are determined precisely, even in the presence of light reflection or small occluded regions, as shown in Fig. 9. Note that this process can also discard some redundant white spaces that might result from off-angle captured images.

Fig. 9. Isolating reflections from irises in the proposed algorithm. (a) Image with reflections. (b) Detected reflection regions (marked with red color).
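These two steps map directly onto a few lines of array arithmetic. The sketch below is only an illustration with an assumed function name and an assumed value for the per-database constant; the actual constant used by the authors is not given here.

import numpy as np

def reflection_mask(bgr, iris_mask, offset=60):
    """Flag specular-reflection pixels inside the iris (two-step rule from the text)."""
    img = bgr.astype(np.float64)
    means = img[iris_mask].mean(axis=0)            # step 1: average iris intensity per channel
    over = img > (means + offset)                  # step 2: compare each pixel to mean + constant
    return np.logical_and(over.any(axis=2), iris_mask)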

3.4.4. Pupil region removal

Pupil removal is performed in the last step, because one of the major differences between the eye images in the noisy databases, captured under visible wavelengths, and those taken under near infrared illumination is that the intensity contrast between the iris and the pupil is very low, especially for heavily pigmented irises, as shown in Fig. 6. If we try to localize the pupil in noisy images directly, the segmentation fails because of the darkness of the iris. Therefore, the best method is to enhance the iris image to make the pupil more visible using the contrast enhancement method [22]. The following steps explain the pupil removal process:

(1) Adjust the iris image by mapping its intensity values to new values so as to focus on dark intensities. This step makes the difference between the iris and the pupil clearer.
(2) Apply a median filter to reduce the noisy factors while preserving the edges.
(3) Use Canny edge detection to get the edge map.
(4) Apply the CHT to localize the pupil, assuming that it is circular. The pupil radius is set to be in the range of 1/10–7/10 of the iris radius.
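Using OpenCV, the four steps can be sketched on the central square of the iris as below. The contrast stretch (a clip followed by min–max normalization standing in for the adjustment of step (1)), the Hough parameters and the function name are all illustrative assumptions rather than the authors' settings; the Canny step is folded into HoughCircles, which runs its own edge detection internally.

import cv2
import numpy as np

def localize_pupil(gray, cx, cy, r):
    """Locate the pupil inside the central square of the iris (steps 1-4 above)."""
    x0, y0 = int(cx - r / 2), int(cy - r / 2)
    square = gray[max(y0, 0):y0 + int(r), max(x0, 0):x0 + int(r)]
    # step 1: stretch the dark intensity range; step 2: median filter
    stretched = cv2.normalize(np.clip(square, 0, 100), None, 0, 255, cv2.NORM_MINMAX)
    smoothed = cv2.medianBlur(stretched.astype(np.uint8), 5)
    # steps 3-4: edge detection + circular Hough transform with a restricted radius range
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1, minDist=r,
                               param1=100, param2=15,
                               minRadius=int(r / 10), maxRadius=int(7 * r / 10))
    if circles is None:
        return None
    px, py, pr = circles[0, 0]
    return px + max(x0, 0), py + max(y0, 0), pr    # pupil circle in image coordinates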

Fig. 10 shows the steps of this algorithm using an image from UBIRIS v2. Note that all the previous steps are applied only on a square inside the iris, in order to reduce the execution time and to avoid errors that may result from edge points lying outside the pupil region. The center of this square is the same as that of the iris, and the coordinates of its left corner are (x − r/2, y − r/2), where the point (x, y) is the iris center and r is the iris radius. As shown in Fig. 10, the pupil localization succeeds even with very low intensity contrast between the iris and the pupil.

Fig. 10. Steps of the pupil removal algorithm. (a) The inner square of the iris. (b) Result of adjusting the image in (a). (c) Result of Canny edge detection. (d) Result of the CHT.

Fig. 11 shows the behavior of the proposed segmentation algorithm on image sequences taken in real conditions from the UBIRIS iris image database. As illustrated in the previous sections, the K-means algorithm determines the iris region, which is marked with the blue rectangle in Fig. 11(c), and the Canny edge detection is applied on this region only. The result is a small number of edge points, as in Fig. 11(d), which makes the CHT work faster. In Fig. 11(f), the eyelids, the pupil and the luminance are removed using the proposed algorithms. It is clear that the proposed algorithm succeeds in segmenting iris images even in the presence of noise types such as iris obstructions, specular reflections and glasses.

4. Experimental results

To evaluate the proposed segmentation algorithm, it was implemented in MATLAB 7.0. The experiments were conducted on a Compaq PC with an Intel Core 2 Duo processor (2.00 GHz), 1 GB RAM and the Windows 7 operating system. We start the implementation with K-means clustering followed by edge detection and the CHT, as shown in Fig. 2. It is assumed that both the iris and the pupil have circular forms; therefore every circle is completely described by the values of its center (x, y) and radius r.

Fig. 11. The behavior of the proposed segmentation algorithm on image sequences taken in real conditions from the UBIRIS iris image database. (a) The real image. (b) The binary image after applying K-means. (c) The clustered binary image after applying some morphological operations. (d) The edge map image after applying the Canny algorithm on the estimated iris region. (e) The image after localizing the iris using the CHT. (f) The image after removing the noise regions.

Fig. 12. Examples of correctly segmented irises.

The UBIRIS v1 (Session 1) [26] iris database is used to evaluate the proposed segmentation algorithm. This database is composed of 1877 images collected from 241 eyes and simulates unconstrained imaging conditions. Fig. 12 shows the segmented images after applying the proposed iris segmentation algorithm on the UBIRIS v1 database. The accuracy and the average segmentation time are computed for the proposed algorithm and some previous algorithms [23]. The segmentation is considered accurate when the following two conditions are satisfied:

(1) The two circles of the iris and the pupil fall exactly on the iris and pupil borders of the resulting images.
(2) The upper and the lower eyelids are correctly localized.

If either of the two previous conditions is not satisfied, then the segmentation is not accurate, as shown in Fig. 13. The average segmentation time is computed as the total segmentation time of all the correctly segmented iris images in the dataset divided by the number of correctly segmented iris images. Table 1 shows a comparison between the accuracy of the proposed algorithm and some previous algorithms.

As shown in the table, the accuracy of the proposed algorithm is better than that of the Daugman and Wildes algorithms. Meanwhile, the execution time of our segmentation algorithm is the lowest, because of the proposed steps that reduce the searching areas of the CHT. Wildes used a multi-resolution coarse-to-fine search approach that searches over the whole iris image pixels without any preprocessing techniques, which takes more time than the proposed algorithm. The Fourier spectral algorithm obtained good results in limbic localization, but its error is high in pupil localization, which increases the total segmentation error. The proposed algorithm fails in segmenting noisy irises when eyelids and eyelashes obstruct big portions of the iris (more than 60%) or when the upper or lower eyelids cover the pupil of the iris (note that most segmentation methods fail in these cases). Fig. 13 gives examples of iris images which the proposed algorithm failed to segment. In the second column of Fig. 13, the proposed segmentation algorithm failed because the iris is almost covered by the eyelids. In the fourth column, the proposed segmentation algorithm succeeded in localizing the iris boundary, but it failed in localizing the pupil boundary, which is covered by the eyelids. Further, it failed in isolating the eyelid regions when the iris image is blurred due to fast movements while capturing the image.

To evaluate the performance of the proposed segmentation method in the whole recognition system, the other three stages (normalization, encoding, comparison) were implemented. Some functions from the Masek iris recognition algorithm are used [24]. The implemented system is used to generate the iris template code for every iris. To plot the match and non-match distributions for this database, each iris image is compared with all the other irises in the database. For the UBIRIS v1 (Session 1) iris database, the total number of comparisons equals 1,448,410, where the total number of intra-class comparisons equals 2410 and that of inter-class comparisons equals 1,446,000.

During the comparison stage, the Hamming Distance (HD) is used as the metric of dissimilarity between two considered iris codes, codeA and codeB:

HD = \frac{\| (codeA \oplus codeB) \cap maskA \cap maskB \|}{\| maskA \cap maskB \|}    (5)

where maskA and maskB are the masks of codeA and codeB, respectively. A mask, as proposed by Daugman [17], signifies whether any iris region is occluded by eyelids, eyelashes, luminance, etc. HD is therefore a fractional measure of dissimilarity after noise regions are removed.
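Eq. (5) translates directly into a few lines over boolean template arrays. The sketch below assumes boolean numpy arrays for the codes and masks; the function name and the convention of returning 1.0 when no bits are comparable are choices made here, not part of the paper.

import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Masked Hamming distance between two iris codes, as in Eq. (5)."""
    valid = mask_a & mask_b                               # bits that are valid in both templates
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        return 1.0                                        # nothing comparable: maximal dissimilarity
    disagreements = np.count_nonzero((code_a ^ code_b) & valid)
    return disagreements / n_valid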

Fig. 14 shows the distribution of the HD when the proposed segmentation algorithm is used, and Fig. 15 shows the distribution of the HD when the Daugman segmentation algorithm is used. The results show that the match distribution obtained with the proposed algorithm is shifted a significant distance to the left, and the mean of the match distribution decreases by 0.12. In Fig. 15, when the Daugman algorithm is applied, the distance between the two means equals 0.075, whereas the distance between the two means in Fig. 14, when the proposed algorithm is applied, equals 0.19. It is seen that the distance between the match and non-match distributions increases when the proposed segmentation algorithm is used. As a result, the error rates decrease and a large improvement over the performance of the Daugman algorithm is achieved. Moreover, it is noticed that when the proposed segmentation algorithm is used, the interference between the match and non-match distributions is less than when the Daugman segmentation algorithm is used. This is because the Daugman segmentation algorithm is very sensitive to noise and cannot handle the noisy factors which occur in non-ideal conditions, such as specular reflections, pupil isolation and eyelid occlusion.

Fig. 13. Examples of failed segmentation of noisy irises when eyelids and eyelashes obstruct big portions of the iris.

Table 1
Comparison between the accuracy of the proposed algorithm and some previous algorithms.

Method                  Accuracy                           Time (s)
Daugman                 95.22%                             2.73
Wildes                  98.68%                             1.95
Camus and Wildes        96.78%                             3.12
Martin-Roche            77.18%                             –
Fourier spectral [28]   98.49% (limbic), 94.47% (pupil)    –
Proposed                98.76%                             1.49

Fig. 16 shows the EER of the iris recognition system when our proposed algorithm is used. The EER is the point at which the FMR and the FNMR are almost equal. The FMR occurs when the system accepts an identity claim, but the claim is not true. The FNMR occurs when the system rejects an identity claim, but the claim is true. The EER enables evaluation of the FMR and FNMR at a single operating point; the lower the EER value is, the higher the accuracy of the biometric system. When the proposed segmentation algorithm is used, the EER is 0.126, which is a low value compared with the EER values of the latest iris recognition systems, which range from 3.6% to 0.07% as described in [27]. This result shows that the proposed segmentation algorithm accurately isolates error regions in the iris template; thus, the FMR and FNMR decrease. Other segmentation algorithms such as Daugman's cannot handle these sources of error, because they are designed to work under ideal conditions and use the Integro-Differential operator, which frequently fails when the images do not have sufficient intensity separability between the iris and the sclera.

Fig. 16. The Equal Error Rate value where FMR and FNMR are equal.
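For reference, an EER of this kind can be estimated from the two HD score sets by sweeping a decision threshold until the FMR and FNMR curves cross, as in the minimal sketch below; the threshold grid and the averaging at the crossing point are choices made here, not part of the paper.

import numpy as np

def equal_error_rate(genuine_hd, impostor_hd):
    """Estimate the EER by sweeping the decision threshold over the HD scores."""
    genuine_hd = np.asarray(genuine_hd)                    # intra-class (match) comparisons
    impostor_hd = np.asarray(impostor_hd)                  # inter-class (non-match) comparisons
    thresholds = np.linspace(0.0, 1.0, 1001)
    fnmr = np.array([(genuine_hd > t).mean() for t in thresholds])   # false non-match rate
    fmr = np.array([(impostor_hd <= t).mean() for t in thresholds])  # false match rate
    i = np.argmin(np.abs(fmr - fnmr))                      # closest point to FMR == FNMR
    return (fmr[i] + fnmr[i]) / 2.0, thresholds[i]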

To evaluate the performance of the proposed segmentation algorithm against different techniques, Daugman's method is implemented using some functions from Masek's iris recognition algorithm [24], and the results of applying the SVM Match Score Fusion algorithm [25] are used. We use the most important operating points, namely FAR (%) at 0.0001% FRR and FRR (%) at 0.0001% FAR, to compare our algorithm with the others, where the FRR is the probability that the system fails to detect a match between the input pattern and a matching template in the database, and the FAR is the probability that the system incorrectly matches the input pattern to a non-matching template in the database.

Fig. 14. The match and non-match distributions for UBIRIS v1 when the proposed segmentation algorithm is used (match mean = 0.28, non-match mean = 0.47).

Fig. 15. The match and non-match distributions for UBIRIS v1 when the Daugman algorithm is used (match mean = 0.4, non-match mean = 0.475).

The results of the three algorithms are summarized in Table 2. It is shown that the proposed segmentation algorithm significantly reduces the two metric values. At 0.0001% FRR, the proposed algorithm reduces the FAR by 5.7% relative to the Daugman algorithm and by 4.4% relative to the SVM algorithm, whereas at 0.0001% FAR, the proposed algorithm reduces the FRR by 7.14% relative to the Daugman algorithm and by 2.89% relative to the SVM algorithm. This means that the proposed algorithm can reduce the error in the two common security cases: the first case, where the cost of an FRR error may exceed the cost of an FAR error, such as in a customer context, and the second case, where the cost of an FAR error may exceed the cost of an FRR error, such as in a military context. These improvements in the performance of the iris recognition system when the proposed segmentation algorithm is used are due to its ability to handle many types of errors which might occur in non-ideal environments.

Table 2
Comparison of the proposed algorithm with two previous algorithms.

Method                   FAR (%) at 0.0001% FRR   FRR (%) at 0.0001% FAR
Daugman                  7.2                      12.96
SVM Match Score Fusion   5.9                      8.71
Proposed                 1.5                      5.82

5. Conclusion

This research proposed a new effective and fast algorithm to segment non-ideal iris images captured under unconstrained imaging conditions, which introduce several types of noise, such as iris obstructions and blurred iris images. The proposed algorithm adds a new pre-processing step to the segmentation pipeline, namely clustering the iris image using the K-means algorithm. This pre-processing step excludes from the image the non-iris regions which cause many errors, and decreases the searching time of the next steps. The CHT is applied to the binary edge map of the estimated iris region to localize the outer iris border. A new method to localize the upper eyelid is proposed, based on detecting it in the sclera region. To localize the lower eyelid of the iris, the LHT is used, because most occlusions of the lower eyelid are approximately linear. Finally, in order to remove the pupil region, the iris image is adjusted by mapping its intensity values to new values to focus on dark intensities, and then the CHT is applied. Experimental results on the UBIRIS iris database indicate the high accuracy and the lower execution time of the proposed segmentation algorithm compared with previous algorithms.

Conflict of interest statement

None declared.

References

[1] J. Daugman, Statistical richness of visual phase information: update on recognizing persons by their iris patterns, International Journal of Computer Vision 45 (1) (2001) 25–38.
[2] J. Daugman, Demodulation by complex-valued wavelets for stochastic pattern recognition, International Journal of Wavelets, Multiresolution and Information Processing 1 (1) (2003) 1–17.
[3] R. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE 85 (1997) 1348–1363.
[4] L. Flom, A. Safir, Iris Recognition System, US Patent 4641394, 1987.
[5] J. Daugman, Biometric Personal Identification System Based on Iris Analysis, United States Patent, Patent Number: 5,291,560, 1994.
[6] J.G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (11) (1993) 1148–1161.
[7] A. Ross, S. Shah, Segmenting non-ideal irises using geodesic active contours, in: Proceedings of the IEEE 2006 Biometric Symposium, 2006, pp. 1–6.
[8] R. Donida Labati, V. Piuri, F. Scotti, Agent-based image iris segmentation and multiple views boundary refining, in: Proceedings of the IEEE Third International Conference on Biometrics: Theory, Applications and Systems, November 2009.
[9] Y. Chen, M. Adjouadi, C. Han, J. Wang, A. Barreto, N. Rishe, J. Andrian, A highly accurate and computationally efficient approach for unconstrained iris segmentation, Image and Vision Computing (2010).
[10] M. Vatsa, R. Singh, A. Noore, Improving iris recognition performance using segmentation quality enhancement, match score fusion and indexing, IEEE Transactions on Systems, Man, and Cybernetics 38 (4) (2008) 1021–1035.
[11] H. Proenca, L.A. Alexandre, Iris segmentation methodology for non-cooperative recognition, IEE Proceedings of Vision, Image and Signal Processing 153 (2) (2006) 199–205.
[12] X. Liu, K.W. Bowyer, P.J. Flynn, Experiments with an improved iris segmentation algorithm, in: Proceedings of the 4th IEEE Workshop on Automatic Identification Advanced Technologies, 2005, pp. 118–123.
[13] M. Dobes, J. Martineka, D.S.Z. Dobes, J. Pospisil, Human eye localization using the modified Hough transform, Optik — International Journal for Light and Electron Optics 117 (2006) 468–473.
[14] S. Schuckers, N. Schmid, A. Abhyankar, V. Dorairaj, C. Boyce, L. Hornak, On techniques for angle compensation in nonideal iris recognition, IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 37 (5) (2007) 1176–1190.
[15] T. Tan, Z. He, Z. Sun, Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition, Image and Vision Computing 28 (2010) 223–230.
[16] J. Zuo, N. Kalka, N. Schmid, A robust iris segmentation procedure for unconstrained subject presentation, in: Proceedings of the Biometric Consortium Conference, 2006, pp. 1–6.
[17] J. Daugman, New methods in iris recognition, IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 37 (5) (2007) 1167–1175.
[18] T.A. Camus, R. Wildes, Reliable and fast eye finding in close-up images, in: Proceedings of the IEEE 16th International Conference on Pattern Recognition, Quebec, August 2002, pp. 389–394.
[19] J. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (1986) 679–698.
[20] D. Ballard, Generalized Hough transform to detect arbitrary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence (1981) 111–122.
[21] H. Proenca, L.A. Alexandre, The NICE I: Noisy Iris Challenge Evaluation — Part I, in: Proceedings of the First International Conference on Biometrics: Theory, Applications, and Systems, 2007, pp. 1–4.
[22] K. Delac, M. Grgic, A survey of biometric recognition methods, in: Proceedings of the 46th International Symposium Electronics in Marine, ELMAR-2004, Croatia, June 2004, pp. 184–193.
[23] Hugo Proenca, Towards non-cooperative biometric iris recognition (PhD thesis), University of Beira Interior, October 2006.
[24] L. Masek, P. Kovesi, MATLAB source code for a biometric identification system based on iris patterns, The University of Western Australia, 2003.
[25] M. Vatsa, R. Singh, A. Noore, Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing, IEEE Transactions on Systems, Man, and Cybernetics – Part B, 2008, pp. 1021–1035.
[26] H. Proenca, L.A. Alexandre, UBIRIS: a noisy iris image database, in: Proceedings of the 13th International Conference on Image Analysis and Processing, ICIAP 2005, Calgary, September 2005, pp. 970–977.
[27] Kevin W. Bowyer, Karen Hollingsworth, Patrick J. Flynn, Image understanding for iris biometrics: a survey, Computer Vision and Image Understanding 110 (2) (2008) 281–307.
[28] Niladri B. Puhan, N. Sudha, Anirudh S. Kaushalram, Efficient segmentation technique for noisy frontal view iris images using Fourier spectral density, 2011.

Shaaban Sahmoud received the BS degree in Computer Engineering in 2006 from the Islamic University of Gaza, Palestine. He obtained a Master of Philosophy from the same university in 2011 in the field of pattern recognition. His research interests include computer vision, image processing, pattern recognition and artificial intelligence. He is a teaching assistant at the University College of Applied Sciences, Gaza, Palestine.

Prof. Abuhaiba is a professor at the Islamic University of Gaza, Computer Engineering Department. He obtained his Master of Philosophy and Doctorate of Philosophy from Britain in the field of document understanding and pattern recognition. His research interests include computer vision, image processing, document analysis and understanding, pattern recognition and artificial intelligence. Furthermore, Prof. Abuhaiba is open-minded and can do and supervise research in all fields of computer engineering, computer science, information systems, and related fields. Prof. Abuhaiba presented important theorems and more than 30 algorithms in document understanding. He published several original contributions in the field of document understanding in well-reputed international journals and conferences. Because of the reference value of Prof. Abuhaiba's achievements, Marquis Who's Who selected his biographical profile for inclusion in the 23rd (2006) Edition of Who's Who in the World. This edition features biographies of 50,000 of the most accomplished men and women from around the globe and across all fields of endeavor.


相关文档