Pattern Recognition 68 (2017) 199–210
Support vector machine classifier with truncated pinball loss

Xin Shen a,b, Lingfeng Niu b,c,∗, Zhiquan Qi b,c,∗, Yingjie Tian b,c

a School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
b Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing 100190, China
c Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100190, China

Article info

Article history:
Received 9 November 2016
Revised 11 January 2017
Accepted 5 March 2017
Available online 9 March 2017

Keywords:

Pinball loss

Feature noise

Sparsity

Support vector machine

Abstract

Feature noise, namely noise on the inputs, is a long-standing plague to the support vector machine (SVM). Conventional SVM with the hinge loss (C-SVM) is sparse but sensitive to feature noise. Instead, the pinball loss SVM (pin-SVM) enjoys noise robustness but loses sparsity completely. To bridge the gap between C-SVM and pin-SVM, we propose the truncated pinball loss SVM ($\overline{pin}$-SVM) in this paper. It provides a flexible framework for trading off sparsity against feature noise insensitivity. Theoretical properties including the Bayes rule, a misclassification error bound, sparsity, and noise insensitivity are discussed in depth. To train $\overline{pin}$-SVM, the concave-convex procedure (CCCP) is used to handle the non-convexity, and a decomposition method is used to deal with the subproblem of each CCCP iteration. Accordingly, we modify the popular solver LIBSVM to conduct experiments, and the numerical results validate the properties of $\overline{pin}$-SVM on synthetic and real-world data sets.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Support vector machine (SVM) [1,2] is a powerful machine learning tool for classification and regression tasks. The fundamental idea of SVM is to find a hyperplane in the feature space that divides the different categories of observations with the largest separation. Concretely, the C-support vector machine (C-SVM) uses the hinge loss to maximize the width of the margin between the nearest points of different classes. Since sufficiently rightly classified points are not penalized, C-SVM generally enjoys good sparsity [3], namely a low percentage of support vectors (SVs). For classification problems, the noise on inputs is called feature noise [4,5]. Normally, feature noise causes perturbation around the boundary of different classes. Since the resulting decision hyperplane is determined by a relatively small number of SVs, C-SVM is sensitive to feature noise and unstable under re-sampling.

Different types of attempts have been made to tackle feature noise [4,6–8]. The pinball loss is connected with quantiles and was initially used for regression [9–11]. Recently, it has been applied to various SVM models [5,12,13] to handle input data uncertainty. Among these variants, the pinball loss SVM (pin-SVM) [5,14] has a similar form to C-SVM except that the loss function is changed. But unlike C-SVM, pin-SVM penalizes numerous rightly classified points.

∗ Corresponding authors.
E-mail addresses: shenxin14@... (X. Shen), niulf@... (L. Niu), qizhiquan@... (Z. Qi), tyj@... (Y. Tian).

The latent rationale has been thoroughly discussed in [5]. In a nutshell, by penalizing a mass of rightly classified points, pin-SVM approximates a model which maximizes the quantile distance between two classes. Moreover, pin-SVM is still a convex model and possesses many attractive theoretical properties, including noise insensitivity, robustness, and a bounded misclassification error [5]. However, compared to C-SVM, pin-SVM loses sparsity because the sub-gradient of the pinball loss is not equal to zero almost everywhere. The lack of sparsity is generally fatal for SVM models, since it affects the computational time, especially in the prediction phase. To remedy this drawback, the pinball loss with an ε-insensitive zone was proposed [5], which is similar to the ε-insensitive loss used for SVM regression [2]. However, this correction is not elegant. Apart from its actual performance, some theoretical properties of the pinball loss have proved invalid for its insensitive-zone version. For example, the pinball loss with an ε-insensitive zone does not lead to the Bayes classifier, and the corresponding misclassification error cannot be bounded by Zhang's inequality [15] either. Moreover, there are too many equality constraints in the dual problem, which makes the optimization procedure relatively difficult [5,16]. In short, many advantages of pin-SVM disappear.

From C-SVM to pin-SVM, the precious sparsity is completely lost. The change is so abrupt that it is natural to believe there exists a transition state connecting these two models. In this paper, we propose a new loss function which combines the merits of the hinge and pinball losses. Since the horizontal part of the loss function curve is related to sparsity, and penalizing rightly classified data brings feature noise insensitivity, we flatten the negative part of the pinball loss at an appropriate position. This new loss function is called the "truncated pinball loss", and the corresponding SVM model is written as $\overline{pin}$-SVM for short. By introducing additional hyper-parameters, $\overline{pin}$-SVM can be viewed as a compromise between sparsity and feature noise robustness. Related theoretical properties, especially those inherited from the pinball loss, are investigated in detail.

Since $\overline{pin}$-SVM is a non-convex problem, we utilize a popular and handy algorithm, the concave-convex procedure (CCCP) [17–20], to solve it. A similar technique has been used to handle other non-convex SVMs, such as the ramp loss SVMs [21–23]. For $\overline{pin}$-SVM, the subproblem of each CCCP iteration can be approximately viewed and solved as a C-SVM problem. The decomposition method [24–27] is a common way to train SVMs. In each loop, it modifies only a subset of the optimization variables, which is called the working set. In this way, an optimization problem of smaller size is solved at each iteration. Sequential minimal optimization (SMO) [25] is an extreme case of the decomposition method, which restricts the working set to the least number of elements. We modify the SMO-type decomposition method proposed in [28] to solve the dual problem of each CCCP iteration. Although non-convexity seems to bring additional complexity and computation cost, we show that the extra expense can be minimized with some simple speedup techniques.

The rest of the paper is organized as follows. In Section 2, we review some preliminary knowledge, such as the basic ideas of SVM, loss functions, and the CCCP algorithm. In Section 3, the truncated pinball loss and $\overline{pin}$-SVM are introduced, and their statistical properties are investigated. In Section 4, we discuss the algorithms to train $\overline{pin}$-SVM. Experimental results on synthetic and real-life problems are presented in Section 5. Finally, Section 6 ends the paper with conclusions.

2. Preliminaries

2.1. SVMs with different loss functions

Given the training set $\{(x_i, y_i)\}_{i=1}^{l}$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$, let $\theta = (w^T, b)^T$. An SVM model tries to seek a decision hyperplane $w^T\phi(x) + b = 0$ that separates the two classes of data to the greatest extent. Accordingly, the following optimization problem is solved:

$$\min_{\theta}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l} L\big(1 - y_i(w^T\phi(x_i) + b)\big), \qquad (1)$$

where $\phi(x_i)$ maps $x_i$ into a higher-dimensional space, $C > 0$ is the regularization parameter, and $L(u)$ can be any loss function.

Differentiating (1) shows that the minimizer $w$ satisfies

$$w = C\sum_{i=1}^{l} y_i\, \partial L\big(1 - y_i f_\theta(x_i)\big)\, \phi(x_i), \qquad (2)$$

where $f_\theta(x) = w^T\phi(x) + b$ is the decision function. For a given $f_\theta(x)$, an input $(x_i, y_i)$ satisfying $\partial L(1 - y_i f_\theta(x_i)) \neq 0$ becomes an SV because $(x_i, y_i)$ contributes to $w$ in (2). Hence the sub-gradient function $\partial L(u)$ is directly associated with the sparsity.

The hinge loss (Fig. 1(a)) is defined as $L_{hinge}(u) = \max\{0, u\}$ $(\forall u \in \mathbb{R})$. If $L(u)$ in (1) is specified as $L_{hinge}(u)$, we get C-SVM. Since all the points that satisfy $y_i f_\theta(x_i) > 1$ are not counted as SVs, C-SVM is a relatively sparse model. However, as shown in Fig. 2(a), C-SVM tries to maximize the shortest distance between the two classes. Hence the support hyperplanes $w^T x + b = \pm 1$ are determined by a small number of SVs, making it less robust to feature noise.

Moreover, the dual form of C-SVM is a quadratic programming (QP) problem:

$$\begin{aligned} \min_{\alpha}\ & \frac{1}{2}\alpha^T Q \alpha - e^T\alpha \\ \text{s.t.}\ & y^T\alpha = 0,\quad 0 \leq \alpha \leq Ce, \end{aligned} \qquad (3)$$

where $y = (y_1, \cdots, y_l)^T$ is the vector of labels, $\alpha \in \mathbb{R}^l$ is the vector of dual variables, $e \in \mathbb{R}^l$ is the vector of all ones, $Q$ is the kernel matrix with $Q_{ij} \equiv y_i y_j K(x_i, x_j)$, and $K(x_i, x_j) \equiv \phi(x_i)^T\phi(x_j)$ is the kernel function.

The pinball loss (Fig. 1(b)) is given as $L_\tau(u) = \max\{u, -\tau u\}$ $(\forall u \in \mathbb{R},\ 0 \leq \tau \leq 1)$, and the corresponding SVM model is written as pin-SVM for short. First, with a demo in Fig. 2(b), we illustrate the way pin-SVM handles input data uncertainty. For given $w$ and $b$, the distances of the positive or negative points from their support hyperplanes $w^T x + b = \pm 1$ can be computed, and these distances can be sorted in order. pin-SVM aims to push a certain percentage of remote positive or negative observations away from their support hyperplanes. For example, by adjusting the hyper-parameter $\tau$, pin-SVM can maximize the quantile distance at the position of the nearest forty percent of points. From a geometric point of view, the two support hyperplanes $w^T x + b = \pm 1$ cut across the quantiles of each category of data. Thus observations near the border of the two categories become less important in deciding the decision plane than in C-SVM, and the effects of feature noise are weakened. However, pin-SVM gives an increasingly large penalty to correctly classified points, and $\partial L_\tau(u) \neq 0$ holds almost everywhere except when $u = 0$, hence the sparsity is totally lost.

To remedy this, the pinball loss with an ε-insensitive zone (Fig. 1(c)), namely $L^\epsilon_\tau(u) = \max\{u - \epsilon,\ 0,\ -\tau u - \epsilon\}$ $(\forall u \in \mathbb{R},\ 0 \leq \tau \leq 1,\ \epsilon \geq 0)$, was proposed to obtain a sparse model. Since $\partial L^\epsilon_\tau(u) = 0$ for $-\epsilon/\tau < u < \epsilon$, part of the sparsity is recovered, but, as discussed in the introduction, several theoretical properties of the pinball loss no longer hold for this insensitive-zone version.
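To make the sparsity mechanism concrete, the short Python sketch below evaluates the three losses of this subsection on a grid; it is only our own illustration of the definitions (the function names are ours and are not taken from the original implementation).

import numpy as np

def hinge(u):
    # L_hinge(u) = max{0, u}
    return np.maximum(0.0, u)

def pinball(u, tau):
    # L_tau(u) = max{u, -tau*u}
    return np.maximum(u, -tau * u)

def pinball_eps(u, tau, eps):
    # L^eps_tau(u) = max{u - eps, 0, -tau*u - eps}
    return np.maximum.reduce([u - eps, np.zeros_like(u), -tau * u - eps])

u = np.linspace(-3.0, 3.0, 7)
tau, eps = 0.5, 0.5
print(hinge(u))                  # zero for u <= 0: such points are not SVs
print(pinball(u, tau))           # nonzero for every u != 0: sparsity is lost
print(pinball_eps(u, tau, eps))  # zero only inside the insensitive zone (-eps/tau, eps)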

2.2. Concave-Convex Procedure (CCCP)

If $x \in \mathbb{R}^n$ is the optimization variable, the functions $f_i: \mathbb{R}^n \to \mathbb{R}$ and $g_i: \mathbb{R}^n \to \mathbb{R}$, $i = 0, 1, \cdots, m$, are convex, and the objective and constraints can each be decomposed into the difference of two convex parts:

$$\begin{aligned} \min\ & f_0(x) - g_0(x) \\ \text{s.t.}\ & f_i(x) - g_i(x) \leq 0,\quad i = 1, \ldots, m, \end{aligned} \qquad (4)$$

then the CCCP method tackles the original non-convex problem (4) by solving a series of convex subproblems [20]. Details are listed in Algorithm 1.

Algorithm 1. Concave-Convex Procedure (CCCP) for Problem (4).
Input: an initial feasible point $x^0$, $k := 0$.
Output: the solution $x^\ast$.
repeat
  1. Convexify. Form $\hat{g}_i(x; x^k) \triangleq g_i(x^k) + \nabla g_i(x^k)^T(x - x^k)$, $i = 0, 1, \ldots, m$.
  2. Solve. Set the value of $x^{k+1}$ to a solution of the convex problem
     $$\begin{aligned} \min\ & f_0(x) - \hat{g}_0(x; x^k) \\ \text{s.t.}\ & f_i(x) - \hat{g}_i(x; x^k) \leq 0,\quad i = 1, \ldots, m. \end{aligned}$$
  3. Update iteration. $k := k + 1$.
until some stopping criterion is satisfied.
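As a concrete, purely illustrative instance of Algorithm 1, the snippet below applies CCCP to an unconstrained one-dimensional DC function $f_0(x) - g_0(x)$; the example function is ours, chosen so that each convexified subproblem has a closed-form minimizer, and is only meant to show the linearize-then-minimize loop.

import numpy as np

# Toy DC objective: J(x) = f0(x) - g0(x) with f0(x) = x**4 and g0(x) = 2*x**2, both convex.
f0 = lambda x: x**4
g0 = lambda x: 2.0 * x**2
dg0 = lambda x: 4.0 * x            # gradient of g0, used to linearize the concave part

x = 2.0                            # initial feasible point
for k in range(20):
    # Convexify g0 by its tangent at x; the subproblem min_x' f0(x') - tangent(x')
    # has the closed-form solution x' = cbrt(g0'(x)/4).
    x_new = np.cbrt(dg0(x) / 4.0)
    if abs((f0(x_new) - g0(x_new)) - (f0(x) - g0(x))) < 1e-10:
        x = x_new
        break
    x = x_new
print(x)   # converges to the stationary point x = 1 of J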

3. SVM with the truncated pinball loss

Fig. 1. A comparison of three existing loss functions.

Fig. 2. A comparison between shortest distance and quantile distance. This demo is restricted to linear classification for clarity. Positive and negative inputs are marked by red circles and green crosses respectively. The decision plane $w^T x + b = 0$ is marked by a solid line and the corresponding support planes $w^T x + b = \pm 1$ are marked by two dashed lines. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 3. Truncated pinball loss $P_{\tau,s}(u)$.

In order to combine the advantages of C-SVM and pin-SVM, we propose a modified version of the pinball loss and apply this new loss function to the SVM model (1). Since the horizontal part of the loss function curve is related to sparsity, we make the negative part of the pinball loss flat at a predefined position $-s$. Here, $s > 0$ is a constant and $-s$ is the hinge point. At the same time, this change still preserves the feature noise insensitivity of the pinball loss, since all the rightly classified points are penalized. Enlightened by the shape of the function curve (Fig. 3), we name it "the truncated pinball loss".

For convenience, an auxiliary function is introduced: $H_\tau(u) = \max\{\tau u, 0\}$ $(\forall u \in \mathbb{R})$, where $\tau$ is a constant. Then the truncated pinball loss is defined as

$$P_{\tau,s}(u) = H_{1+\tau}(u) - \big(H_\tau(u+s) - \tau s\big) = \begin{cases} \tau s & (u \leq -s) \\ -\tau u & (-s < u < 0) \\ u & (u \geq 0), \end{cases} \qquad (5)$$

where $0 \leq \tau \leq 1$.
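A direct way to sanity-check definition (5) is to evaluate both sides numerically. The sketch below (our own illustration, not the authors' code) implements $P_{\tau,s}$ through the auxiliary function $H_\tau$ and confirms the piecewise form as well as the two limit cases discussed later in this section, $P_{0,0} = L_{hinge}$ and $P_{\tau,s} \to L_\tau$ as $s \to +\infty$.

import numpy as np

def H(u, t):
    # auxiliary function H_t(u) = max{t*u, 0}
    return np.maximum(t * u, 0.0)

def trunc_pinball(u, tau, s):
    # P_{tau,s}(u) = H_{1+tau}(u) - (H_tau(u+s) - tau*s), eq. (5)
    return H(u, 1.0 + tau) - (H(u + s, tau) - tau * s)

def piecewise(u, tau, s):
    # the explicit three-piece form on the right-hand side of eq. (5)
    return np.where(u <= -s, tau * s, np.where(u < 0.0, -tau * u, u))

u = np.linspace(-4.0, 4.0, 401)
tau, s = 0.5, 1.0
assert np.allclose(trunc_pinball(u, tau, s), piecewise(u, tau, s))
# limit cases: P_{0,0} equals the hinge loss; P_{tau,s} tends to the pinball loss as s grows
assert np.allclose(trunc_pinball(u, 0.0, 0.0), np.maximum(0.0, u))
assert np.allclose(trunc_pinball(u, tau, 1e6), np.maximum(u, -tau * u))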

The truncated pinball loss SVM, namely $\overline{pin}$-SVM, can be formulated as

$$\min_{\theta}\ J(\theta) = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l} P_{\tau,s}\big(1 - y_i(w^T\phi(x_i) + b)\big). \qquad (6)$$

Similar to (2), the minimizer $w$ satisfies

$$w = C\sum_{i=1}^{l} y_i\, \partial P_{\tau,s}\big(1 - y_i f_\theta(x_i)\big)\, \phi(x_i), \qquad (7)$$

where the sub-gradient function of the truncated pinball loss is given by

$$\partial P_{\tau,s}(u) = \begin{cases} 0 & (u < -s) \\ [-\tau, 0] & (u = -s) \\ -\tau & (-s < u < 0) \\ [-\tau, 1] & (u = 0) \\ 1 & (u > 0). \end{cases} \qquad (8)$$

Clearly, the data $(x_i, y_i)$ that satisfy $y_i f_\theta(x_i) > 1 + s$ are not counted as SVs. With a properly selected parameter $s$, $\overline{pin}$-SVM is a relatively sparse model.

Moreover, the hinge and pinball losses can be written as $P_{0,0}(u)$ and $P_{\tau,+\infty}(u)$ respectively, which means that they can be viewed as limit cases of $P_{\tau,s}(u)$. Next, we will show that $P_{\tau,s}(u)$ is theoretically sound, and that it inherits most of the statistical properties of the pinball loss.

3.1. Bayes rule

Following the settings in [5], some preliminary knowledge is introduced. For a typical binary classification problem, the samples $\{(x_i, y_i)\}_{i=1}^{l}$ are independently drawn from a probability measure $\rho$, where $\rho$ is defined on $X \times Y$, $X \subset \mathbb{R}^n$, and $Y = \{-1, 1\}$. For any loss function $L(u)$, the expected $L$-risk of a measurable function $f: X \to \mathbb{R}$ is defined as $R_{L,\rho}(f) = \int_{X\times Y} L(1 - yf(x))\, d\rho$. Minimizing the expected $L$-risk over all measurable functions results in the function $f_{L,\rho}(x)$:

$$f_{L,\rho}(x) = \arg\min_{\nu \in \mathbb{R}} \int_{Y} L(1 - y\nu)\, d\rho(y\,|\,x), \quad \forall x \in X, \qquad (9)$$

where $\rho(y\,|\,x)$ is the conditional distribution of $\rho$ at $x$.

The Bayes classifier is defined as

$$f_c(x) = \begin{cases} 1, & \text{if } Prob(y=1\,|\,x) \geq Prob(y=-1\,|\,x) \\ -1, & \text{if } Prob(y=1\,|\,x) < Prob(y=-1\,|\,x). \end{cases} \qquad (10)$$

It has been proved that minimizing the pinball loss $L_\tau(u)$ results in the Bayes classifier [5]. Now we prove that the Bayes rule also holds for the truncated pinball loss $P_{\tau,s}$.

Theorem 1. The function $f_{P_{\tau,s},\rho}$, which minimizes the expected $P_{\tau,s}$-risk over all measurable functions $f: X \to \mathbb{R}$, is equal to the Bayes classifier, i.e., $f_{P_{\tau,s},\rho}(x) = f_c(x)$, $\forall x \in X$.

Proof. A simple calculation shows that

$$\int_{Y} P_{\tau,s}(1 - y\nu)\, d\rho(y\,|\,x) = P_{\tau,s}(1-\nu)\,Prob(y=1\,|\,x) + P_{\tau,s}(1+\nu)\,Prob(y=-1\,|\,x)$$
$$= \begin{cases} (1-\nu)\,Prob(y=1\,|\,x) + \tau s\,Prob(y=-1\,|\,x) & (\nu \leq -s-1) \\ (1-\nu)\,Prob(y=1\,|\,x) - \tau(1+\nu)\,Prob(y=-1\,|\,x) & (-s-1 < \nu < -1) \\ (1-\nu)\,Prob(y=1\,|\,x) + (1+\nu)\,Prob(y=-1\,|\,x) & (-1 \leq \nu \leq 1) \\ -\tau(1-\nu)\,Prob(y=1\,|\,x) + (1+\nu)\,Prob(y=-1\,|\,x) & (1 < \nu < s+1) \\ \tau s\,Prob(y=1\,|\,x) + (1+\nu)\,Prob(y=-1\,|\,x) & (\nu \geq s+1). \end{cases}$$

Notice that $Prob(y=1\,|\,x) + Prob(y=-1\,|\,x) = 1$ always holds. When $Prob(y=1\,|\,x) > Prob(y=-1\,|\,x)$, the minimal value is $2\,Prob(y=-1\,|\,x)$, which is achieved by $\nu = 1$. When $Prob(y=1\,|\,x) < Prob(y=-1\,|\,x)$, the minimal value is $2\,Prob(y=1\,|\,x)$, which is achieved by $\nu = -1$. When $Prob(y=1\,|\,x) = Prob(y=-1\,|\,x)$, the minimal value is 1, which is achieved by any $\nu \in [-1, 1]$. Therefore $f_{P_{\tau,s},\rho}(x)$, which minimizes the expected risk measured by $P_{\tau,s}$, satisfies $f_{P_{\tau,s},\rho}(x) = f_c(x)$. □
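The case analysis above can also be checked numerically: for a fixed conditional probability $p = Prob(y=1\,|\,x)$, minimizing the inner expected loss over a grid of $\nu$ should return the Bayes decision $\mathrm{sgn}(2p-1)$. The following sketch is our own verification aid and is not part of the paper.

import numpy as np

def trunc_pinball(u, tau, s):
    # truncated pinball loss of eq. (5)
    return np.where(u <= -s, tau * s, np.where(u < 0.0, -tau * u, u))

def inner_risk(nu, p, tau, s):
    # E[ P_{tau,s}(1 - y*nu) | x ] when Prob(y=1|x) = p
    return p * trunc_pinball(1.0 - nu, tau, s) + (1.0 - p) * trunc_pinball(1.0 + nu, tau, s)

tau, s = 0.5, 1.0
nu_grid = np.linspace(-4.0, 4.0, 8001)
for p in [0.2, 0.5, 0.8]:
    risks = inner_risk(nu_grid, p, tau, s)
    nu_star = nu_grid[np.argmin(risks)]
    print(p, nu_star)   # minimizer is -1 for p < 0.5, +1 for p > 0.5, any value in [-1, 1] for p = 0.5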

3.2. Bounding the misclassification error

In practice, we often seek a real-valued function $f: X \to \mathbb{R}$ and use its sign, i.e., $\mathrm{sgn}(f)$, to induce a binary classifier [5]. If the misclassification loss is defined as

$$L_{mis}(u) = \begin{cases} 1, & u > 1 \\ 0, & u \leq 1, \end{cases} \qquad (11)$$

then the misclassification error satisfies

$$R_{L_{mis},\rho}(\mathrm{sgn}(f)) = R_{L_{mis},\rho}(f) = \int_{X\times Y} L_{mis}(1 - yf(x))\, d\rho. \qquad (12)$$

For the hinge loss, Zhang's inequality [15] is stated as

$$R_{L_{mis},\rho}(\mathrm{sgn}(f)) - R_{L_{mis},\rho}(f_c) \leq R_{L_{hinge},\rho}(f) - R_{L_{hinge},\rho}(f_c). \qquad (13)$$

The proof of this inequality can be found in [29]. This property has been extended to the case of the pinball loss [5]. We show that it also holds for the truncated pinball loss.

Theorem 2. For any given $\tau \in (0, 1)$, any probability measure $\rho$ and any measurable function $f: X \to \mathbb{R}$,

$$R_{L_{mis},\rho}(\mathrm{sgn}(f)) - R_{L_{mis},\rho}(f_c) \leq R_{L_{hinge},\rho}(f) - R_{L_{hinge},\rho}(f_c) \leq R_{P_{\tau,s},\rho}(f) - R_{P_{\tau,s},\rho}(f_c) \leq R_{L_\tau,\rho}(f) - R_{L_\tau,\rho}(f_c). \qquad (14)$$

Proof. For any loss function $L(u)$,

$$R_{L,\rho}(f) = \int_{X} \Big[ Prob(y=1\,|\,x)\, L(1 - f(x)) + Prob(y=-1\,|\,x)\, L(1 + f(x)) \Big]\, d\rho_X,$$

where $\rho_X$ is the marginal distribution of $\rho$ on $X$.

First, we prove the following inequality:

$$R_{L_{hinge},\rho}(f) - R_{L_{hinge},\rho}(f_c) \leq R_{P_{\tau,s},\rho}(f) - R_{P_{\tau,s},\rho}(f_c). \qquad (15)$$

Since $0 \leq Prob(y=\pm 1\,|\,x) \leq 1$ and $0 \leq L_{hinge}(u) \leq P_{\tau,s}(u)$ $(\forall u \in \mathbb{R})$, we obtain $R_{L_{hinge},\rho}(f) \leq R_{P_{\tau,s},\rho}(f)$ for every $f$. Since $1 \pm f_c(x) \in \{0, 2\}$, we have $L_{hinge}(1 + f_c(x)) = P_{\tau,s}(1 + f_c(x))$ and $L_{hinge}(1 - f_c(x)) = P_{\tau,s}(1 - f_c(x))$ $(\forall x \in X)$, and hence $R_{L_{hinge},\rho}(f_c) = R_{P_{\tau,s},\rho}(f_c)$. Therefore inequality (15) holds. Following a similar proof procedure, it can be shown that

$$R_{P_{\tau,s},\rho}(f) - R_{P_{\tau,s},\rho}(f_c) \leq R_{L_\tau,\rho}(f) - R_{L_\tau,\rho}(f_c) \qquad (16)$$

holds for any fixed $\tau \in (0, 1)$. According to (13), (15) and (16), the inequality chain (14) holds. □

As the minimal classification error is measured by $R_{L_{mis},\rho}(f_c)$, the goal in binary classification is to find a function $f$ whose excess classification risk $R_{L_{mis},\rho}(\mathrm{sgn}(f)) - R_{L_{mis},\rho}(f_c)$ is small [29]. Theorem 2 gives three ordered upper bounds for estimating the excess classification risk. Obviously, the bound given by the truncated pinball loss lies between those of the hinge and pinball losses.
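The key pointwise fact behind the proof, $L_{hinge}(u) \leq P_{\tau,s}(u) \leq L_\tau(u)$ for all $u$, is easy to confirm numerically; the check below is our own illustration and relies only on the loss definitions given earlier.

import numpy as np

def hinge(u):
    return np.maximum(0.0, u)

def pinball(u, tau):
    return np.maximum(u, -tau * u)

def trunc_pinball(u, tau, s):
    return np.where(u <= -s, tau * s, np.where(u < 0.0, -tau * u, u))

u = np.linspace(-10.0, 10.0, 2001)
for tau in [0.1, 0.5, 1.0]:
    for s in [0.25, 1.0, 4.0]:
        p = trunc_pinball(u, tau, s)
        # hinge <= truncated pinball <= pinball, the pointwise ordering used in Theorem 2
        assert np.all(hinge(u) <= p + 1e-12) and np.all(p <= pinball(u, tau) + 1e-12)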

3.3. Noise insensitivity and sparsity

For pin-SVM, the penalty on correctly classified points brings insensitivity with respect to noise around the decision boundary [5]. In this subsection, we show that $\overline{pin}$-SVM inherits this benefit from pin-SVM, and preserves sparsity to a certain extent at the same time. For easy comprehension, we focus on the linear classifier in this subsection [5].

The optimality condition for (6) can be written as

$$0 \in \frac{w}{C} - \sum_{i=1}^{l} \partial P_{\tau,s}\big(1 - y_i(w^T x_i + b)\big)\, y_i x_i, \qquad (17)$$

where $0$ denotes the vector of which all components equal zero, and $\partial P_{\tau,s}(u)$ is the sub-gradient function defined in (8). For given $w$ and $b$, the index set can be partitioned into five sets:

$$\begin{aligned} S_0^{w,b} &= \{i : 1 - y_i(w^T x_i + b) < -s\}, \\ S_1^{w,b} &= \{i : 1 - y_i(w^T x_i + b) = -s\}, \\ S_2^{w,b} &= \{i : -s < 1 - y_i(w^T x_i + b) < 0\}, \\ S_3^{w,b} &= \{i : 1 - y_i(w^T x_i + b) = 0\}, \\ S_4^{w,b} &= \{i : 1 - y_i(w^T x_i + b) > 0\}. \end{aligned} \qquad (18)$$

Only the points in $S_0^{w,b}$ do not contribute to the vector $w$ in Eq. (17); hence $S_0^{w,b}$ is closely associated with the sparsity of $\overline{pin}$-SVM. The set $S_0^{w,b}$ is controlled by the value of the parameter $s$. On one hand, when the value of $s$ is smaller, there are more points in $S_0^{w,b}$ and model (6) is sparser. On the other hand, when $s \to +\infty$, the truncated pinball loss degenerates to the pinball loss and the sparsity is totally lost.

With the notation $S_i^{w,b}$ $(i = 0, 1, 2, 3, 4)$, the optimality condition can be written as the existence of $\psi_i \in [-\tau, 0]$ and $\zeta_i \in [-\tau, 1]$ such that

$$\frac{w}{C} - \sum_{i \in S_1^{w,b}} \psi_i y_i x_i + \tau \sum_{i \in S_2^{w,b}} y_i x_i - \sum_{i \in S_3^{w,b}} \zeta_i y_i x_i - \sum_{i \in S_4^{w,b}} y_i x_i = 0. \qquad (19)$$

The points in $S_1^{w,b}$ and $S_3^{w,b}$ are much fewer than those in $S_0^{w,b}$, $S_2^{w,b}$ and $S_4^{w,b}$, and they contribute little to Eq. (19). Thus our main concern here is the sets $S_0^{w,b}$, $S_2^{w,b}$ and $S_4^{w,b}$. When the value of the parameter $s$ is fixed, the parameter $\tau$ controls the numbers of points in $S_0^{w,b}$, $S_2^{w,b}$ and $S_4^{w,b}$, so that the sparsity is affected. For the clarity of our explanation, $s$ is fixed to a moderate value here. When $\tau$ is large, e.g. $\tau = 1$, all three sets contain many points and hence the result is less sensitive to zero-mean noise on the features. When $\tau$ is small, e.g. $\tau = 0.1$, there are few points in $S_4^{w,b}$ and the resulting model is more sensitive. When $\tau = 0$, the truncated pinball loss reduces to the hinge loss and there is no point, or only a small number of points, in $S_4^{w,b}$. Therefore, feature noise around the decision boundary will significantly affect the classification result [5]. Since $\tau$ determines the size of the set $S_4^{w,b}$ and the total number of training data is constant, $\tau$ also determines the size of the set $S_0^{w,b}$. That is to say, the sparsity is controlled by $\tau$: when $\tau$ is smaller, the sparsity is better.

To summarize, the selection of $\tau$ and $s$ provides an approach to trading sparsity for feature noise insensitivity and vice versa. Only an appropriate selection of these two values leads to a well-balanced model.
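To see how $s$ shapes the partition (18) in practice, the sketch below counts the relevant index sets for a fixed linear classifier on random two-dimensional data; it is our own illustration of the bookkeeping, not code from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
# two Gaussian classes, roughly the synthetic setting used later in Section 5.1
X = np.vstack([rng.normal([0.5, -3.0], np.sqrt([0.2, 3.0]), size=(n // 2, 2)),
               rng.normal([-0.5, 3.0], np.sqrt([0.2, 3.0]), size=(n // 2, 2))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

w, b = np.array([2.5, -1.0]), 0.0      # a fixed linear classifier (the Bayes boundary 2.5*x1 - x2 = 0)
u = 1.0 - y * (X @ w + b)              # argument of P_{tau,s} for every sample

for s in [0.25, 1.0, 4.0]:
    S0 = np.sum(u < -s)                # sub-gradient 0: these points drop out of (19) and are not SVs
    S2 = np.sum((-s < u) & (u < 0.0))  # sub-gradient -tau: correctly classified points that stay active
    S4 = np.sum(u > 0.0)               # sub-gradient 1: margin violations
    print("s =", s, " non-SV fraction:", S0 / n, " |S2| =", S2, " |S4| =", S4)
# tau does not change the partition for a fixed (w, b); it weights the S2 terms in (19)
# and thus decides how strongly correctly classified points influence the trained model.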

4. Training algorithm for $\overline{pin}$-SVM

4.1. CCCP for the primal problem

The function $J(\theta)$ in (6) can be decomposed as the sum of a convex part $J_{vex}(\theta)$ and a concave part $J_{cav}(\theta)$:

$$J(\theta) = \underbrace{\frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l} H_{1+\tau}\big(1 - y_i f_\theta(x_i)\big)}_{J_{vex}(\theta)} \; + \; \underbrace{\Big(-C\sum_{i=1}^{l} H_\tau\big(1 - y_i f_\theta(x_i) + s\big) + Cl\tau s\Big)}_{J_{cav}(\theta)}. \qquad (20)$$
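Because $P_{\tau,s}(u) = H_{1+\tau}(u) - H_\tau(u+s) + \tau s$, the split in (20) can be verified numerically on any data set: evaluating $J_{vex} + J_{cav}$ must reproduce the original objective (6). The snippet below is a small self-check of this identity for a linear model (our own illustration).

import numpy as np

H = lambda u, t: np.maximum(t * u, 0.0)
P = lambda u, tau, s: H(u, 1.0 + tau) - (H(u + s, tau) - tau * s)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = np.sign(rng.normal(size=50))
w, b, C, tau, s = rng.normal(size=3), 0.2, 1.0, 0.5, 0.3
u = 1.0 - y * (X @ w + b)

J     = 0.5 * w @ w + C * np.sum(P(u, tau, s))             # objective (6)
J_vex = 0.5 * w @ w + C * np.sum(H(u, 1.0 + tau))          # convex part of (20)
J_cav = -C * np.sum(H(u + s, tau)) + C * len(y) * tau * s  # concave part of (20)
assert np.isclose(J, J_vex + J_cav)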

By applying Algorithm 1 of Section 2.2, we obtain the concrete implementation for minimizing function (20) in Algorithm 2.

Algorithm 2. Concave-Convex Procedure (CCCP) for problem (20).
Input: an initial feasible point $\theta^0$, the termination accuracy $\epsilon$, $k := 0$.
Output: the solution $\theta^\ast$.
repeat
  $\theta^{k+1} = \arg\min_{\theta}\ J_{vex}(\theta) + \partial J_{cav}(\theta^k)^T \theta$, $\quad k := k + 1$.
until $k > 1$ and $|(J(\theta^k) - J(\theta^{k-1})) / J(\theta^1)| \leq \epsilon$.

Since $P_{\tau,s}(u)$ is a piecewise-linear function, following the standard convergence analysis of CCCP in [30], it can be proved that Algorithm 2 converges globally to a stationary point and that the convergence rate is at least linear.

For convenience, we introduce the following notation:

$$\delta_i^k = \begin{cases} C\tau & \text{if } y_i f_{\theta^k}(x_i) = y_i(w^{kT}\phi(x_i) + b^k) < s + 1 \\ 0 & \text{otherwise.} \end{cases} \qquad (21)$$
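In an implementation, $\delta^k$ is simply a thresholded vector built from the current decision values. A minimal sketch (ours, assuming the decision values $f_k$ of the current iterate are already available) is:

import numpy as np

def compute_delta(y, f_k, C, tau, s):
    # delta_i^k = C*tau when y_i * f_{theta^k}(x_i) < s + 1, and 0 otherwise, as in (21)
    return np.where(y * f_k < s + 1.0, C * tau, 0.0)

# example: y and f_k would come from the previous CCCP iterate
y   = np.array([1.0, -1.0, 1.0, -1.0])
f_k = np.array([2.5, -0.2, 0.4, 1.3])
print(compute_delta(y, f_k, C=1.0, tau=0.5, s=0.5))   # [0. , 0.5, 0.5, 0.5]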

Then the subproblem of each iteration in Algorithm 2 can be stated as

$$\min_{\theta}\ J_{vex}(\theta) + \partial J_{cav}(\theta^k)^T \theta = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l} H_{1+\tau}\big(1 - y_i f_\theta(x_i)\big) + \sum_{i=1}^{l} \delta_i^k\, y_i f_\theta(x_i). \qquad (22)$$

By introducing the slack variables $\xi = (\xi_1, \cdots, \xi_l)^T$, (22) is equivalent to the following form:

$$\begin{aligned} \min_{\theta, \xi}\ & \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l} \xi_i + \sum_{i=1}^{l} \delta_i^k\, y_i f_\theta(x_i) \\ \text{s.t.}\ & y_i f_\theta(x_i) \geq 1 - \frac{1}{1+\tau}\xi_i,\quad \xi_i \geq 0,\quad i = 1, \ldots, l. \end{aligned} \qquad (23)$$

4.2. Dual form of the subproblem

In this subsection, we focus on solving the subproblem (23); the iteration superscript $k$ is dropped for clarity. To derive the dual form, we introduce the Lagrange function

$$L(w, b, \xi, \beta, \hat{\beta}) = \frac{1}{2}\|w\|^2 + C e^T\xi + \delta^T Y A^T w + b\, \delta^T Y e + \beta^T\Big(e - \frac{1}{1+\tau}\xi - Y A^T w - b Y e\Big) - \hat{\beta}^T \xi, \qquad (24)$$

where $e = (1, \cdots, 1)^T \in \mathbb{R}^l$, $Y = \mathrm{diag}(y_1, \cdots, y_l)$, $\delta = (\delta_1^k, \cdots, \delta_l^k)^T$, $A = (\phi(x_1), \cdots, \phi(x_l))$, and $\beta = (\beta_1, \ldots, \beta_l)^T$, $\hat{\beta} = (\hat{\beta}_1, \ldots, \hat{\beta}_l)^T$ are the Lagrange multiplier vectors.

According to the Karush–Kuhn–Tucker (KKT) conditions, we get

$$\nabla_w L = w + AY(\delta - \beta) = 0, \qquad (25a)$$
$$\nabla_b L = (\delta - \beta)^T Y e = 0, \qquad (25b)$$
$$\nabla_\xi L = C e - \frac{1}{1+\tau}\beta - \hat{\beta} = 0. \qquad (25c)$$

Substituting $w = AY(\beta - \delta)$ into (24) yields the dual problem of (23):

$$\begin{aligned} \min_{\beta}\ & \frac{1}{2}(\beta - \delta)^T Q (\beta - \delta) - \beta^T e \\ \text{s.t.}\ & (\beta - \delta)^T Y e = 0,\quad 0 \leq \beta \leq (1+\tau)Ce, \end{aligned} \qquad (26)$$

where the kernel matrix $Q$ follows the same definition as in Section 2.1.

Let $\alpha = \beta - \delta$, and define the lower and upper bounds of the box constraints as $L \equiv -\delta \in \mathbb{R}^l$ and $U \equiv (1+\tau)Ce - \delta \in \mathbb{R}^l$; then (26) is equivalent to

$$\begin{aligned} \min_{\alpha}\ & f(\alpha) \equiv \frac{1}{2}\alpha^T Q \alpha - e^T\alpha \\ \text{s.t.}\ & \alpha^T Y e = 0,\quad L \leq \alpha \leq U. \end{aligned} \qquad (27)$$

Compared to C-SVM, problem (27) resembles the dual form (3), except that the lower and upper bounds of its box constraints are slightly different. We leave the discussion of its solving algorithm to Section 4.3, and for now assume that $\bar{\alpha}$ is the solution of (27). Then, according to the KKT conditions, the bias term $b$ can be computed as in Chang and Lin [31]:

$$b = -\frac{\sum_{i:\, L_i < \bar{\alpha}_i < U_i} y_i \nabla f(\bar{\alpha})_i}{|\{i \mid L_i < \bar{\alpha}_i < U_i\}|}. \qquad (28)$$
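Formula (28) averages the KKT estimate of $b$ over the free variables. A small numpy sketch of this step (our own, assuming the gradient $\nabla f(\bar{\alpha}) = Q\bar{\alpha} - e$ and the bounds $L$, $U$ are at hand) is:

import numpy as np

def recover_bias(alpha, y, Q, L, U, tol=1e-8):
    # free variables: strictly inside the box constraints L < alpha_i < U
    grad = Q @ alpha - 1.0                        # gradient of f(alpha) in (27)
    free = (alpha > L + tol) & (alpha < U - tol)
    if not np.any(free):
        raise ValueError("no free variable; b must be recovered from the KKT bounds instead")
    return -np.mean(y[free] * grad[free])         # eq. (28)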

For C-SVM with a nonlinear kernel, the vector $w$ is usually not computed because of its potentially high dimensionality; the dual problem (3) is solved, and its solution is stored and used for prediction. Similarly, for $\overline{pin}$-SVM, we do not solve primal problems or compute $w^k$ in the CCCP iterations. From $w = AY(\beta - \delta)$ and $\alpha = \beta - \delta$, we get $w^k = AY\bar{\alpha}$. Along with (21) and (28), the new $\delta$ and the dual problem (27) of the next CCCP iteration can be constructed. In the $k$-th loop, the objective function value $J(\theta^k)$ can be computed from $\bar{\alpha}$:

$$J(\theta^k) = \tilde{J}(\bar{\alpha}) = \frac{1}{2}\bar{\alpha}^T Q \bar{\alpha} + C\sum_{i=1}^{l} P_{\tau,s}\Big(1 - y_i\big(\textstyle\sum_{j=1}^{l} \bar{\alpha}_j y_j K(x_i, x_j) + b\big)\Big), \qquad (29)$$

and then the stopping criterion in Algorithm 2 can be checked.

Suppose $\alpha^\ast$ and $b^\ast$ are obtained from the last iteration when CCCP terminates; the decision function then becomes

$$\mathrm{sgn}\Big(\sum_{i=1}^{l} \alpha_i^\ast y_i K(x_i, x) + b^\ast\Big). \qquad (30)$$

In practice, we define the SVs as the $x_i$ corresponding to the nonzero elements of $\alpha^\ast$. The sparsity of $\alpha^\ast$ greatly affects the computation time for prediction.

4.3. Decomposition method for the subproblem

The dual problem (27) of each CCCP iteration is a QP, and we modify the SMO-type decomposition method proposed in [28] to solve it. Most implementation details are the same as in Fan et al. [28] and Chang and Lin [31], except for some minor modifications. The convergence and convergence rate of this algorithm can be found in [28] and [32]. The shrinking and caching tricks are inherited naturally.

Firstly, the index sets $I_{up}(\alpha)$ and $I_{low}(\alpha)$ for working set selection are redefined as

$$\begin{aligned} I_{up}(\alpha) &\equiv \{t \mid \alpha_t < U_t^k,\ y_t = 1 \ \text{ or } \ \alpha_t > L_t^k,\ y_t = -1\}, \\ I_{low}(\alpha) &\equiv \{t \mid \alpha_t < U_t^k,\ y_t = -1 \ \text{ or } \ \alpha_t > L_t^k,\ y_t = 1\}. \end{aligned} \qquad (31)$$

Secondly, if $\alpha$ is the solution of one SMO iteration, then the two-variable subproblem of the next SMO iteration is

$$\begin{aligned} \min_{\alpha_i, \alpha_j}\ & \frac{1}{2} \begin{pmatrix} \alpha_i & \alpha_j \end{pmatrix} \begin{pmatrix} Q_{ii} & Q_{ij} \\ Q_{ji} & Q_{jj} \end{pmatrix} \begin{pmatrix} \alpha_i \\ \alpha_j \end{pmatrix} + (Q_{i,N}\,\alpha_N - 1)\,\alpha_i + (Q_{j,N}\,\alpha_N - 1)\,\alpha_j \\ \text{s.t.}\ & y_i\alpha_i + y_j\alpha_j = -y_N^T \alpha_N,\quad L_i^k \leq \alpha_i \leq U_i^k,\quad L_j^k \leq \alpha_j \leq U_j^k, \end{aligned}$$

where $B = \{i, j\}$ is the current two-variable working set, $N \equiv \{1, \ldots, l\} \setminus B$ is the complementary index set, and $\alpha_N$ is the sub-vector of $\alpha$ corresponding to the set $N$. Similar to C-SVM [31], the above two-variable problem has a closed-form solution and can be solved efficiently.
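With the index sets of (31), working-set selection and the stopping test can be carried out as in the SMO-type method of [28], only with the modified bounds. A compact sketch (ours; the maximal-violating-pair rule shown here is the standard first-order criterion and stands in for the second-order rule actually used in LIBSVM) is:

import numpy as np

def select_working_set(alpha, y, grad, L, U, eps=1e-3):
    # index sets of (31); grad is the gradient of f(alpha) in (27)
    up  = ((alpha < U) & (y > 0)) | ((alpha > L) & (y < 0))
    low = ((alpha < U) & (y < 0)) | ((alpha > L) & (y > 0))
    viol = -y * grad                       # -y_t * grad_t, the quantity compared in the KKT test
    i = np.flatnonzero(up)[np.argmax(viol[np.flatnonzero(up)])]
    j = np.flatnonzero(low)[np.argmin(viol[np.flatnonzero(low)])]
    if viol[i] - viol[j] <= eps:           # optimality: m(alpha) - M(alpha) <= eps
        return None
    return i, j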

4.4. Speedup techniques

In Algorithm 2, the $\theta^k$ of one iteration determines the subproblem of the next iteration. Hence the selection of $\theta^0$ is important. Intuitively, if $\theta^0$ is relatively close to the solution $\theta^\ast$, the training time is expected to be shortened. Inspired by this, we construct the subproblem of the first CCCP iteration manually. According to the characteristics of $\overline{pin}$-SVM, if $s$ is set to a moderate value, then for a sufficiently rightly classified input $(x_i, y_i)$, $y_i f_\theta(x_i)$ is most likely to lie in $(1+s, +\infty)$. For such an input, according to (21), the corresponding lower and upper bounds in the dual problem should be $0$ and $(1+\tau)C$ respectively. Therefore, we deliberately set the lower bounds to $0$ and the upper bounds to $(1+\tau)C$ for all the box constraints in the first CCCP iteration.

For problem (27) of each iteration, if no prior information is used, a QP of the same size has to be solved from a raw initial value, so a similar computation time would be needed for each loop. The warm start strategy is a common speedup technique for solving a series of similar subproblems. For (27), since $\alpha$ inherits useful information from the previous iteration, we take the solution $\alpha$ of one iteration as the initial value for the next subproblem. Because the feasible region differs from one loop to another, this initial value is projected into the box constraints $L^k \leq \alpha \leq U^k$. In the experimental part, we will show that these two simple techniques greatly speed up our algorithm.
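The warm start is literally a clipping of the previous solution into the new box; a short numpy sketch (ours) of this projection together with the updated bounds of Algorithm 3 is:

import numpy as np

def warm_start(alpha_prev, delta_new, C, tau):
    L_new = -delta_new                      # new lower bounds, L^{k+1} = -delta^{k+1}
    U_new = (1.0 + tau) * C - delta_new     # new upper bounds, U^{k+1} = (1+tau)*C*e - delta^{k+1}
    return np.clip(alpha_prev, L_new, U_new), L_new, U_new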

Finally, we list the detailed training procedure of $\overline{pin}$-SVM in Algorithm 3.

Algorithm 3. Training algorithm for $\overline{pin}$-SVM.
Input: $L^0 := 0$, $U^0 := (1+\tau)Ce$, $\alpha^0 := 0$, $\epsilon > 0$, $k := 0$.
Output: the solution $\alpha^\ast$ and the decision function (30).
repeat
  Project $\alpha^k$ onto $L^k \leq \alpha \leq U^k$, and take the result as the initial value.
  Solve the following problem with the method of Section 4.3:
  $$\min_{\alpha}\ \frac{1}{2}\alpha^T Q \alpha - e^T\alpha \quad \text{s.t.}\quad \alpha^T Y e = 0,\ L^k \leq \alpha \leq U^k.$$
  Set the solution as $\alpha^{k+1}$.
  Compute $b^{k+1}$ according to (28). Compute $\delta^{k+1}$ according to (21).
  $L^{k+1} := -\delta^{k+1}$, $U^{k+1} := (1+\tau)Ce - \delta^{k+1}$.
  Compute the function value $\tilde{J}(\alpha^{k+1})$ according to (29).
  $k := k + 1$.
until $k > 1$ and $|(\tilde{J}(\alpha^k) - \tilde{J}(\alpha^{k-1})) / \tilde{J}(\alpha^1)| \leq \epsilon$.
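Putting the pieces of this section together, the outer loop of Algorithm 3 can be sketched in a few dozen lines. The code below is a simplified illustration under our own naming, with a hypothetical box-constrained QP routine solve_box_qp(Q, y, L, U, alpha0) standing in for the SMO-type solver of Section 4.3 (the authors' actual implementation modifies LIBSVM).

import numpy as np

def trunc_pinball(u, tau, s):
    return np.where(u <= -s, tau * s, np.where(u < 0.0, -tau * u, u))

def train_tpin_svm(K, y, C, tau, s, solve_box_qp, eps=1e-4, max_outer=50):
    """CCCP outer loop of Algorithm 3 on a precomputed kernel matrix K (illustrative sketch)."""
    l = len(y)
    Q = (y[:, None] * y[None, :]) * K
    L = np.zeros(l)                        # first iteration: bounds of a C-SVM-like problem
    U = (1.0 + tau) * C * np.ones(l)
    alpha, obj_hist = np.zeros(l), []
    for k in range(max_outer):
        alpha0 = np.clip(alpha, L, U)      # warm start projected into the current box
        alpha = solve_box_qp(Q, y, L, U, alpha0)   # solves (27): min 1/2 a'Qa - e'a, y'a = 0, L <= a <= U
        grad = Q @ alpha - 1.0
        free = (alpha > L + 1e-8) & (alpha < U - 1e-8)
        b = -np.mean(y[free] * grad[free]) if np.any(free) else 0.0   # eq. (28)
        f = (alpha * y) @ K + b                                       # decision values on the training set
        obj = 0.5 * alpha @ Q @ alpha + C * np.sum(trunc_pinball(1.0 - y * f, tau, s))  # eq. (29)
        obj_hist.append(obj)
        if k >= 1 and abs(obj_hist[-1] - obj_hist[-2]) / abs(obj_hist[0]) <= eps:
            break
        delta = np.where(y * f < s + 1.0, C * tau, 0.0)   # eq. (21)
        L, U = -delta, (1.0 + tau) * C - delta            # bounds of the next subproblem
    return alpha, b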

5. Numerical results

In this section, we present numerical results on synthetic and real-life data sets. The results imply that our modified solver for $\overline{pin}$-SVM is potentially efficient in practical applications. Our solver is built on LIBSVM (version 3.21) [31], a popular and state-of-the-art SVM library. The code is available at https://github.com/PaperCode/SVMTPL/tree/master/libsvm.

5.1. Synthetic data with noise

A main advantage of pin-SVM is its ability to deal with noise around the decision boundary [5]. In this subsection, we show that $\overline{pin}$-SVM has a similar property while preserving sparsity to some extent.

Consider the example used in [5]: positive and negative samples come from the Gaussian distributions $x_i,\ i \in \mathrm{I} \sim N(\mu_1, \Sigma_1)$ and $x_j,\ j \in \mathrm{II} \sim N(\mu_2, \Sigma_2)$, where $\mu_1 = [0.5, -3]^T$, $\mu_2 = [-0.5, 3]^T$ and $\Sigma_1 = \Sigma_2 = \mathrm{diag}(0.2, 3)$, respectively. The Bayes classifier is $f_c(x) = 2.5x_1 - x_2$. We generate $m$ $(= 200, 500)$ data points and apply the linear C-SVM, pin-SVM and $\overline{pin}$-SVM to calculate the classification boundary $x(2) = w(1)\,x(1) + b$. The decision boundary given by the Bayes classifier is $x(2) = 2.5\,x(1)$, so the ideal results are $w(1) = 2.5$ and $b = 0$. We repeat the sampling and training process 100 times, then report the mean and the standard deviation of $w(1)$, $b$, and the percentage of SVs in Table 1.
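For reference, the synthetic data of this experiment can be generated with a few lines of numpy; the sketch below (ours) draws one sample of size m and evaluates the Bayes boundary against which the trained boundaries in Table 1 are compared.

import numpy as np

def sample(m, rng):
    # half positive, half negative points from the two Gaussians of Section 5.1
    mu1, mu2 = np.array([0.5, -3.0]), np.array([-0.5, 3.0])
    std = np.sqrt([0.2, 3.0])                  # Sigma_1 = Sigma_2 = diag(0.2, 3)
    Xp = rng.normal(mu1, std, size=(m // 2, 2))
    Xn = rng.normal(mu2, std, size=(m // 2, 2))
    X = np.vstack([Xp, Xn])
    y = np.hstack([np.ones(m // 2), -np.ones(m // 2)])
    return X, y

rng = np.random.default_rng(0)
X, y = sample(200, rng)
bayes = np.sign(2.5 * X[:, 0] - X[:, 1])       # f_c(x) = 2.5*x1 - x2
print("Bayes accuracy on this draw:", np.mean(bayes == y))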


Table 1
Classification boundary for synthetic data.

C-SVM:
                 m=200           m=500
  w(1)           2.399±0.800     2.456±0.437
  b              0.013±0.279     0.006±0.212
  SVs(%)         5.43±1.74       5.14±1.20

$\overline{pin}$-SVM and pin-SVM (columns: m=200 with τ=0.25, 0.5, 0.75; then m=500 with τ=0.25, 0.5, 0.75):

$\overline{pin}$-SVM, s=0.25
  w(1)    2.410±0.827  2.404±0.831  2.410±0.828 | 2.461±0.441  2.490±0.480  2.486±0.500
  b       0.004±0.284  0.014±0.282  0.012±0.281 | 0.008±0.219  0.004±0.231  0.003±0.234
  SVs(%)  7.00±2.39    7.22±2.62    7.59±3.02   | 6.74±1.51    7.28±1.77    7.69±2.00

$\overline{pin}$-SVM, s=0.5
  w(1)    2.432±0.846  2.481±0.824  2.552±0.886 | 2.478±0.492  2.477±0.505  2.479±0.458
  b       0.011±0.286  0.009±0.296  -0.002±0.280 | -0.001±0.221  0.015±0.218  -0.020±0.174
  SVs(%)  9.97±4.52    15.16±11.06  37.40±34.52 | 10.39±3.47   24.10±18.94  73.03±27.33

$\overline{pin}$-SVM, s=0.75
  w(1)    2.461±0.829  2.572±0.791  2.485±0.695 | 2.473±0.490  2.471±0.335  2.546±0.310
  b       0.004±0.298  -0.032±0.219 -0.027±0.197 | 0.009±0.208  -0.028±0.132  -0.027±0.126
  SVs(%)  21.36±15.41  71.10±35.90  85.67±30.51 | 28.26±19.61  92.36±2.38   97.49±0.89

$\overline{pin}$-SVM, s=1
  w(1)    2.556±0.777  2.563±0.673  2.553±0.609 | 2.514±0.385  2.518±0.316  2.551±0.301
  b       -0.044±0.237 -0.014±0.191 -0.001±0.172 | -0.030±0.149 -0.023±0.118 -0.020±0.122
  SVs(%)  59.60±34.95  91.75±23.70  97.65±13.38 | 85.47±12.23  98.32±0.79   99.55±0.37

pin-SVM
  w(1)    2.649±0.592  2.627±0.542  2.600±0.536 | 2.515±0.323  2.519±0.306  2.546±0.298
  b       -0.017±0.182 -0.003±0.170 -0.006±0.176 | -0.018±0.124 -0.027±0.120 -0.021±0.119
  SVs(%)  100.00±0.00  100.00±0.00  100.00±0.00 | 100.00±0.00  100.00±0.00  100.00±0.00

Table 2
Classification boundary for synthetic data with noise (r=0.05).

C-SVM:
                 m=200           m=500
  w(1)           1.681±0.691     1.709±0.511
  b              0.035±0.300     0.007±0.182
  SVs(%)         10.28±2.23      10.48±1.34

$\overline{pin}$-SVM and pin-SVM (columns: m=200 with τ=0.25, 0.5, 0.75; then m=500 with τ=0.25, 0.5, 0.75):

$\overline{pin}$-SVM, s=0.25
  w(1)    1.671±0.681  1.678±0.683  1.706±0.698 | 1.715±0.516  1.732±0.528  1.765±0.540
  b       0.029±0.301  0.026±0.305  0.027±0.310 | -0.001±0.187  0.001±0.196  -0.003±0.193
  SVs(%)  12.52±2.92   13.00±3.34   13.39±3.72  | 13.01±1.91   13.97±2.57   14.90±3.10

$\overline{pin}$-SVM, s=0.5
  w(1)    1.720±0.721  1.879±0.798  1.993±0.869 | 1.786±0.531  2.178±0.548  2.432±0.416
  b       0.019±0.321  0.030±0.331  0.031±0.296 | 0.012±0.189  -0.005±0.172  0.016±0.138
  SVs(%)  16.56±6.27   24.94±15.27  47.62±32.23 | 18.19±4.26   45.58±20.09  84.64±7.60

$\overline{pin}$-SVM, s=0.75
  w(1)    1.929±0.845  2.211±0.746  2.437±0.691 | 2.131±0.574  2.437±0.393  2.473±0.404
  b       0.031±0.335  0.039±0.230  0.012±0.024 | -0.011±0.174  0.014±0.122  0.011±0.120
  SVs(%)  30.54±17.09  75.60±30.70  92.89±18.41 | 47.41±19.15  92.39±2.17   97.39±0.94

$\overline{pin}$-SVM, s=1
  w(1)    2.245±0.844  2.409±0.609  2.478±0.641 | 2.419±0.413  2.458±0.377  2.475±0.382
  b       0.058±0.262  0.034±0.181  0.024±0.190 | 0.009±0.123  0.015±0.110  0.010±0.117
  SVs(%)  68.70±30.54  95.52±14.38  97.77±12.23 | 88.17±3.69   98.28±0.80   99.48±0.35

pin-SVM
  w(1)    2.436±0.566  2.469±0.561  2.499±0.559 | 2.418±0.370  2.454±0.364  2.473±0.375
  b       0.033±0.187  0.031±0.180  0.024±0.186 | 0.006±0.107  0.014±0.015  0.012±0.117
  SVs(%)  100.00±0.00  100.00±0.00  100.00±0.00 | 100.00±0.00  100.00±0.00  100.00±0.00

The average values of $w(1)$ and $b$ are quite good, which verifies that all three methods converge to the Bayes classifier. The deviations of these two quantities are directly affected by the values of the hyper-parameters $s$ and $\tau$. For $\overline{pin}$-SVM, the deviation of $w(1)$ or $b$ gets smaller when $s$ or $\tau$ becomes larger; the deviation is larger for C-SVM and smaller for pin-SVM, which verifies the above analysis that C-SVM and pin-SVM can be viewed as the extreme cases of $\overline{pin}$-SVM when $s$ and $\tau$ are set to certain values. This observation implies that if $s$ or $\tau$ is bigger, $\overline{pin}$-SVM is more stable under re-sampling. At the same time, $s$ and $\tau$ play a similar role in controlling the sparsity: if either of them becomes larger, the sparsity gets worse. On the other hand, the average values of $w(1)$ and $b$ of all the methods are closer to the ideal results, and the deviations are smaller, when the size of the data set is larger (e.g. $m = 500$ vs. $m = 200$).

We then repeat the above experiment with a feature-noise-corrupted training set. The positions of the noise points follow the Gaussian distribution $N(\mu_n, \Sigma_n)$ with $\mu_n = [0, 0]^T$, and the labels of the noise points are selected from $\{-1, 1\}$ with equal probability. The noise affects the labels around the boundary, and the level of the noise is controlled by the ratio of noise data in the training set, denoted by $r$. Table 2 gives the mean and the standard deviation of $w(1)$, $b$, and the percentage of SVs obtained by repeating this process 100 times.

From the above results, we can see that the classifier obtained by C-SVM is quite different from the Bayes classifier, implying its sensitivity to boundary noise. In contrast, pin-SVM performs pretty well and the resulting decision boundary is close to the Bayes classifier. For $\overline{pin}$-SVM, the classification boundary comes close to the ideal result when $s$ or $\tau$ becomes larger; at the same time, the sparsity becomes worse. When $s$ and $\tau$ are large enough (e.g. $s = 1$, $\tau = 0.75$), the classification boundary is quite close to the ideal one, which implies that truncating the negative part of the pinball loss at a proper position is sufficient to match the pinball loss. To summarize, $\overline{pin}$-SVM can be viewed as a transition from C-SVM to pin-SVM, and the selection of $\tau$ and $s$ is a trade-off between sparsity and feature noise insensitivity.


Fig. 4. The variation of sparsity and testing accuracy with respect to $s$ and $\tau$. This experiment is carried out on the data set "Australian".

Table 3
Description of benchmark data sets.

Dataset        Training data   Testing data   Features
A3a            1000            2185           123
Australian     200             490            14
Breast cancer  283             400            10
Diabetes       200             568            8
Four-class     300             562            2
Ijcnn1         1000            5000           22
Mushrooms      1000            5000           112
Svmguide3      400             843            21


5.2. Real-world data with noise

In this subsection, we validate the properties of $\overline{pin}$-SVM on real-world data sets. All the data sets are downloaded from the LIBSVM homepage. As described in Table 3, we randomly partition each data set into two parts, one used for training and the other for testing. All the training data are normalized, and the testing data are then adjusted using the same linear transformation. The RBF kernel $K(x_i, x_j) = e^{-\gamma\|x_i - x_j\|^2}$ is considered. In order to exclude the effect of the regularization parameter $C$ and the kernel parameter $\gamma$, we search for the optimal $(C, \gamma)$ with C-SVM and use the same values for all three methods. For each problem, we conduct a grid search with 10-fold cross validation on the training set to get the $(C, \gamma)$ with the highest cross-validation accuracy. Here, $C$ is selected from $\{2^{-5}, 2^{-4}, \ldots, 2^{7}\}$ and $\gamma$ is selected from $\{2^{-9}, 2^{-8}, \ldots, 2^{5}\}$. Clearly, this setting favors C-SVM a bit, but the results show that pin-SVM and $\overline{pin}$-SVM perform better. Zero-mean Gaussian noise is used to simulate the feature noise [5]. The training and the testing data are corrupted by the same noise. This is based on a natural assumption: if the feature noise follows the same pattern, it will cause similar effects on both the training and testing data [4]. The ratio of noise added to each feature is denoted by $r$.
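The protocol of this subsection (normalize, add zero-mean Gaussian feature noise with ratio r, then grid-search (C, γ) by 10-fold cross validation for C-SVM) can be sketched as follows; this uses scikit-learn's LIBSVM wrapper purely to illustrate the C-SVM baseline and is not the authors' modified solver. The interpretation of r as scaling the noise by each feature's standard deviation is our assumption.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def add_feature_noise(X, r, rng):
    # one reading of the noise ratio r: the noise std is r times each feature's std (assumption)
    return X + rng.normal(scale=r * X.std(axis=0), size=X.shape)

def tune_c_svm(X_train, y_train):
    grid = {"C": [2.0 ** k for k in range(-5, 8)],
            "gamma": [2.0 ** k for k in range(-9, 6)]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=10)
    search.fit(X_train, y_train)
    return search.best_params_     # the (C, gamma) reused for pin-SVM and the truncated model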

First, in order to see how the sparsity and testing accuracy are affected by the hyper-parameters $s$ and $\tau$, a demo experiment is conducted on the data set "Australian". The parameters $C$ and $\gamma$ are fixed to the same optimal setting as above. $\tau$ and $s$ are traversed on a two-dimensional mesh. Training and prediction are conducted on both the original and the noisy data. The variation of the SV percentage and the testing accuracy with respect to $s$ and $\tau$ is depicted in Fig. 4. From these results, the following conclusions can be drawn. In the first place, $\overline{pin}$-SVM loses sparsity gradually as $\tau$ or $s$ increases. Moreover, no method performs especially better in prediction accuracy on the original set. Finally, if feature noise is added, $\overline{pin}$-SVM can yield a model with better generalization ability, especially when $\tau$ or $s$ is relatively large.

Next, we investigate the performance on all eight data sets with different noise levels. $r$ is set to 0 (i.e. noise-free), 0.05, and 0.1 respectively. C-SVM, pin-SVM, and $\overline{pin}$-SVM are applied to do the classification. Since the hyper-parameters $\tau$ and $s$ play a similar role in controlling sparsity and noise insensitivity, in this experiment $\tau$ is fixed to 0.5 for simplicity, and $s$ is set to different values, 0.01, 0.1, 0.2 and 0.3, to illustrate the changing tendency of sparsity and prediction accuracy. We repeat the training and prediction procedure 10 times for each data set when $r \neq 0$, and the average prediction accuracy and the standard deviation on the testing sets are reported in Table 4. The records of highest accuracy are marked in bold. From this table, we can see that although no method is more advantageous in testing accuracy than the others on the original data sets (i.e. $r = 0$), pin-SVM and $\overline{pin}$-SVM perform better than C-SVM on the noisy data sets. Specifically, for $\overline{pin}$-SVM, the average testing accuracy increases as $s$ becomes larger, and this tendency is more obvious when the ratio of noise is high. That is to say, $\overline{pin}$-SVM enjoys noise insensitivity on corrupted data at the price of sparsity, and $s$ is the regulator that balances these two factors, which validates the analysis in Section 3.3.

5.3. Performance

As Algorithm 3 solves a series of QPs that resemble C-SVM, a natural concern is whether training $\overline{pin}$-SVM requires much longer time than training C-SVM. In this subsection, we carry out two experiments to show that this fear is not justified.

The first experiment reveals that the running time of each QP decreases quickly as CCCP iterates. Meanwhile, after a number of outer loops, the termination condition of the inner loop is satisfied without any execution. Concretely, we fix $\tau$ to 0.5 and conduct a grid search with cross validation to find the optimal $(C, s, \gamma)$ for $\overline{pin}$-SVM. Then six real-life data sets are trained with their optimal hyper-parameters. Here, we do not use the stopping condition of Algorithm 3 but terminate the programme when the decomposition algorithm of the inner loop does not execute. We denote the running time of the $i$-th subproblem ($i = 1, 2, \ldots$) as $t_i$ and plot the curve of the ratio $t_i/t_1$ in Fig. 5. It shows that all the data sets take fewer than nine iterations, and the data set "A3a" takes only two outer iterations. Furthermore, after the first iteration, the running time of the subsequent iterations decreases drastically. For example, although training the data set "Cancer" takes eight outer iterations, the seven latter subproblems spend less than one-tenth of the time of the first subproblem. This verifies the effectiveness of the speedup techniques discussed in Section 4.4, and implies that the total time to train $\overline{pin}$-SVM is considerably short.

In the following experiment, we compare the overall computation time of $\overline{pin}$-SVM with that of C-SVM. Following the procedure of Section 5.2, the optimal hyper-parameters $(C, \gamma)$ for these two models are fixed to the same values. The running time for training C-SVM is denoted as $t^\ast$ and the default termination condition of LIBSVM is used. For $\overline{pin}$-SVM, we train each data set with different values of $s$, selected


Table 4
Classification accuracy on testing data with boundary noise.

Dataset | Record type | r | C-SVM | Truncated pinball loss SVM ($\overline{pin}$-SVM): s=0.01, s=0.1, s=0.2, s=0.3 | pin-SVM

0.0083.2583.4883.5283.0282.4382.38

Accuracy0.0581.62±0.7181.64±0.8081.71±0.6381.80±0.6581.96±0.6082.35±0.28

0.1080.68±0.4180.92±0.4781.01±0.3281.13±0.3581.08±0.3381.03±0.35

A3a

0.0048.2050.7056.0095.6099.3099.60

SVs(%)0.0558.40±1.3563.80±1.8469.01±1.8675.64±1.7985.00±3.0399.93±0.07

0.1069.43±1.2275.48±0.9480.04±1.2484.96±1.2790.25±2.1399.97±0.07

0.0085.5186.9487.1486.9486.7386.73

Accuracy0.0584.24±1.2785.08±1.5784.53±3.0185.49±0.8785.90±0.9985.96±0.68

0.1082.94±1.8383.82±1.0383.55±2.0383.73±2.7584.35±2.9885.45±0.78

Australian

0.0041.0049.0073.5093.5098.00100.00

SVs(%)0.0543.35±4.3651.65±4.3564.00±8.3580.55±4.5791.25±5.72100.00±0.00

0.1045.25±3.0650.70±1.7756.10±7.2572.15±14.6984.00±17.11100.00±0.00

0.0097.2597.2597.2597.5097.5096.75

Accuracy0.0591.70±1.3492.68±1.6492.73±1.3392.65±1.4293.20±1.0192.73±0.83

Breast0.1087.88±0.9088.45±0.9988.63±0.8588.65±0.9388.60±1.0488.30±0.93

cancer0.0013.4314.8421.9133.2286.22100.00

SVs(%)0.0525.65±2.7326.82±3.0232.51±3.3689.90±12.5881.84±4.04100.00±0.00

0.1034.38±3.6835.44±4.0841.34±5.2760.14±11.1477.67±6.73100.00±0.00

0.0076.0675.7075.3575.7075.1874.47

Accuracy0.0571.25±1.5271.48±1.7571.43±1.9471.23±1.8071.36±1.7371.88±1.33

0.1068.52±1.7068.54±1.5768.68±1.7468.70±1.6668.84±1.5869.10±1.57

Diabetes

0.0061.0063.5066.5072.5093.00100.00

SVs(%)0.0563.25±2.7466.15±2.1670.05±2.0675.20±3.5085.35±6.64100.00±0.00

0.1069.05±3.8471.05±3.6674.40±3.2679.60±3.7186.55±4.79100.00±0.00

0.00100.00100.00100.00100.00100.00100.00

Accuracy0.0596.80±0.4896.87±0.5596.81±0.6296.81±0.6196.81±0.6796.98±0.50

0.1090.43±1.3490.53±1.3890.64±1.3490.68±1.4090.75±1.4290.71±1.45

Four-class

0.0032.0037.6775.3387.3388.33100.00

SVs(%)0.0535.80±1.9140.77±2.8467.70±5.4481.47±2.8988.57±3.13100.00±0.00

0.1041.30±2.5844.17±3.1066.20±6.4381.37±5.2388.57±4.47100.00±0.00

0.0092.5494.1894.4494.5294.9294.32

Accuracy0.0590.43±0.9790.90±1.6191.31±0.7591.36±0.6291.46±0.4891.61±0.33

0.1087.55±2.2088.60±1.4188.45±1.5788.51±1.4488.47±1.2189.18±0.86

Ijcnn1

0.0023.8026.0030.5040.1053.10100.00

SVs(%)0.0530.91±1.0134.72±2.3938.79±2.1646.79±2.1759.30±3.5799.99±0.03

0.1035.89±2.6138.95±3.5643.65±4.5250.41±3.7161.02±3.1399.99±0.03

0.0099.9099.9099.9099.9099.9099.90

Accuracy0.0599.31±0.1399.31±0.1399.31±0.1399.31±0.1499.27±0.1999.43±0.10

0.1097.71±0.4497.71±0.4497.73±0.4497.68±0.4797.70±0.4897.97±0.45

Mushrooms

0.0051.8081.6099.4098.5098.5098.50

SVs(%)0.0553.15±0.5653.88±0.5761.80±0.9173.46±1.2085.70±1.6099.96±0.07

0.1076.89±0.6777.39±0.6681.83±0.5786.59±1.0090.73±0.6899.96±0.07

0.0079.7279.7279.9579.9580.3179.83

Accuracy0.0576.80±0.9377.18±0.6977.50±0.8377.77±0.8677.85±0.7477.78±0.81

0.1076.36±0.6376.76±0.6376.89±0.7776.92±0.7277.09±0.7877.16±0.80

Svmguide3

0.0056.0059.5072.7595.2597.25100.00

SVs(%)0.0566.48±1.4071.15±1.9280.43±2.1392.15±2.4497.10±1.1199.95±0.11

0.1074.20±1.5577.93±1.7285.05±2.0293.18±1.4297.60±0.6399.95±0.16

from $\{10^{-4}, 10^{-3}, \ldots, 10^{2}\}$. The termination accuracy $\epsilon$ is set to $10^{-4}$, and the corresponding training time is denoted as $t_s$. For stability, every training process is repeated ten times, and we take the average running time as the final result. The ratio curves of $t_s/t^\ast$ are plotted in Fig. 6. For all the data sets, the training time for $\overline{pin}$-SVM is less than five times that of C-SVM. There is no obvious pattern with respect to $s$, which implies that our implementation provides a uniformly good solver for $\overline{pin}$-SVM across different values of $s$. Besides, since $\overline{pin}$-SVM can be viewed as an approximation of pin-SVM when $s$ is big enough, our implementation also provides an efficient solver for pin-SVM. Furthermore, there are some links between this experiment and the previous one. In Fig. 5, "Ijcnn1" takes relatively longer in the subsequent loops than the other data sets; accordingly, the overall training time of "Ijcnn1" is also longer. One noticeable case is the problem "Cancer". In Fig. 5, it is clear that its later iterations are extremely fast, but its total training time is relatively long compared to C-SVM. We give two explanations for this. Firstly, these two experiments are conducted with different hyper-parameters. Secondly, from Fig. 5, we can see that training the problem "Cancer" needs eight outer loops. In that experiment, we did not count the time needed to compute the objective function, which involves the time-consuming computation of the kernel matrix; in this experiment, the total training time is considered.


Fig. 5. The ratio of the $i$-th subproblem's running time to that of the first subproblem. In this experiment, the time between solving subproblems, e.g. the time to compute the objective function value, is not counted; in this way, the effectiveness of the speedup techniques is exhibited more clearly. The curve of each data set disappears at a certain outer loop, meaning that after that loop the termination condition of the subproblem is met at the beginning and $\alpha$ does not change any more.

Fig. 6. The ratio of the overall training time of $\overline{pin}$-SVM to that of C-SVM. The red horizontal solid line refers to the C-SVM baseline. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

6. Conclusion

Both C-SVM and pin-SVM have their own advantages and disadvantages. C-SVM is sparse but less robust when the input data are corrupted by noise, while pin-SVM is insensitive to feature noise but totally loses sparsity. In this paper, we propose $\overline{pin}$-SVM to act as a transition between these two models. $\overline{pin}$-SVM has a similar form to C-SVM and pin-SVM except that the loss function is changed. Compared to pin-SVM, $\overline{pin}$-SVM also penalizes all the rightly classified points, but most of these points are treated equally. Hence both sparsity and feature noise insensitivity are preserved, and their effects can be controlled by the hyper-parameters $\tau$ and $s$. Moreover, $\overline{pin}$-SVM inherits many crucial characteristics from C-SVM and pin-SVM: for example, it satisfies the Bayes rule and its misclassification error can be bounded. $\overline{pin}$-SVM performs well both on synthetic and real-world data sets if its hyper-parameters are properly tuned; it shows feature noise insensitivity and keeps sparsity to a certain extent at the same time. LIBSVM is modified to train $\overline{pin}$-SVM efficiently. Numerical results show that it does not take much more effort than training C-SVM, and the computational efficiency of our solver is good enough for practical use.

At this point, it is meaningful and enlightening to summarize the similarities and differences between the ramp loss SVMs [21–23] and our method. The ramp loss truncates the hinge loss to gain label noise robustness and enhance sparsity, whereas the loss proposed in this paper truncates the pinball loss to gain sparsity and preserve feature noise robustness. Both truncated loss functions are non-convex but piecewise linear, and they can be decomposed into the difference of two "hinge-like" convex functions. Accordingly, the corresponding SVM models can be solved by the CCCP algorithm, and the subproblem of each CCCP iteration is a QP similar in form to C-SVM. In the future, we will try to propose a novel loss function which combines the merits of the pinball and ramp losses to handle both boundary and label noise. We expect this new loss to yield better performance under various circumstances.

Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grants No. 11671379 and No. 11331012, and by UCAS Grant No. Y55202LY00.

References

[1] V. Vapnik, The Nature of Statistical Learning Theory, 1995.
[2] V. Vapnik, Statistical Learning Theory, 1998.
[3] I. Steinwart, Sparseness of support vector machines, J. Mach. Learn. Res. 4 (Nov) (2003) 1071–1105.
[4] J. Bi, T. Zhang, Support vector classification with input data uncertainty, Adv. Neural Inf. Process. Syst. 17 (2004) 161–168.
[5] X. Huang, L. Shi, J. Suykens, et al., Support vector machine classifier with pinball loss, IEEE Trans. Pattern Anal. Mach. Intell. 36 (5) (2014) 984–997.
[6] G.R.G. Lanckriet, L.E. Ghaoui, C. Bhattacharyya, M.I. Jordan, A robust minimax approach to classification, J. Mach. Learn. Res. 3 (3) (2003) 555–582.
[7] X. Zhang, Using class-center vectors to build support vector machines, in: Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop, 1999, pp. 3–11.
[8] J. Zhang, Y. Wang, A rough margin based support vector machine, Inf. Sci. 178 (9) (2008) 2204–2214.
[9] R. Koenker, Quantile Regression, Cambridge University Press, 2005.
[10] I. Steinwart, A. Christmann, et al., Estimating conditional quantiles with the help of the pinball loss, Bernoulli 17 (1) (2011) 211–225.
[11] A. Christmann, I. Steinwart, How SVMs can estimate quantiles and the median, in: Advances in Neural Information Processing Systems, 2007, pp. 305–312.
[12] S. Mehrkanoon, X. Huang, J.A. Suykens, Non-parallel support vector classifiers with different loss functions, Neurocomputing 143 (2014) 294–301.
[13] Y. Xu, Z. Yang, X. Pan, A novel twin support vector machine with pinball loss, IEEE Trans. Neural Netw. Learn. Syst. (2016).
[14] V. Jumutc, X. Huang, J.A. Suykens, Fixed-size Pegasos for hinge and pinball loss SVM, in: Neural Networks (IJCNN), The 2013 International Joint Conference on, IEEE, 2013, pp. 1–7.
[15] T. Zhang, Statistical analysis of some multi-category large margin classification methods, J. Mach. Learn. Res. 5 (Oct) (2004) 1225–1251.
[16] X. Huang, L. Shi, J.A. Suykens, Sequential minimal optimization for SVM with pinball loss, Neurocomputing 149 (2015) 1596–1603.
[17] A. Yuille, A. Rangarajan, The concave-convex procedure (CCCP), Adv. Neural Inf. Process. Syst. (2002) 1033–1040.
[18] A. Yuille, A. Rangarajan, The concave-convex procedure, Neural Comput. 15 (4) (2003) 915–936.
[19] D. Hunter, K. Lange, A tutorial on MM algorithms, Am. Stat. 58 (1) (2004) 30–37.
[20] T. Lipp, S. Boyd, Variations and extension of the convex-concave procedure, Optim. Eng. (2014) 1–25.
[21] Y. Liu, Y. Wu, Robust truncated hinge loss support vector machines, J. Am. Stat. Assoc. 102 (September) (2007) 974–983.
[22] R. Collobert, F. Sinz, J. Weston, L. Bottou, Trading convexity for scalability, in: Proceedings of the 23rd International Conference on Machine Learning, ACM, 2006, pp. 201–208.
[23] X. Huang, L. Shi, J.A. Suykens, Ramp loss linear programming support vector machine, J. Mach. Learn. Res. 15 (1) (2014) 2185–2211.
[24] T. Joachims, Making large scale SVM learning practical, Technical Report, Universität Dortmund, 1999.
[25] J. Platt, et al., Fast training of support vector machines using sequential minimal optimization, Adv. Kernel Methods Support Vector Learn. 3 (1999).
[26] S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy, Improvements to Platt's SMO algorithm for SVM classifier design, Neural Comput. 13 (3) (2001) 637–649.
[27] C.-W. Hsu, C.-J. Lin, A simple decomposition method for support vector machines, Mach. Learn. 46 (1–3) (2002) 291–314.
[28] R.-E. Fan, P.-H. Chen, C.-J. Lin, Working set selection using second order information for training support vector machines, J. Mach. Learn. Res. 6 (Dec) (2005) 1889–1918.
[29] I. Steinwart, A. Christmann, Support Vector Machines, Springer Science & Business Media, 2008.
[30] I.E.-H. Yen, N. Peng, P.-W. Wang, S.-D. Lin, On convergence rate of concave-convex procedure, in: Proceedings of the NIPS 2012 Optimization Workshop, 2012.
[31] C.-C. Chang, C.-J. Lin, LIBSVM: A library for support vector machines, ACM Trans. Intell. Syst. Technol. 2 (2011) 27:1–27:27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[32] P.-H. Chen, R.-E. Fan, C.-J. Lin, A study on SMO-type decomposition methods for support vector machines, IEEE Trans. Neural Netw. 17 (4) (2006) 893–908.


Xin Shen received the B.Eng. degree in software engineering from Fudan University in 2013. Currently, he is a graduate student with the University of Chinese Academy of Sciences. His current research interests lie in optimization and machine learning.

Lingfeng Niu received the B.S. degree in mathematics from Xi'an Jiaotong University in 2004, and the Ph.D. degree in mathematics from the Chinese Academy of Sciences in 2009. She is currently an Associate Professor with the Chinese Academy of Sciences. Her current research interests include optimization and machine learning.

Zhiquan Qi received the master and Ph.D. degrees from China Agricultural University in 2006 and 2011, respectively. He is currently a Research Assistant with the Chinese Academy of Sciences. His research interests include data mining and SVMs. He won the Best Paper award of the biennial Pattern Recognition journal 2013 and 2014.

Yingjie Tian received the Ph.D. degree in management science and engineering from China Agricultural University in 2005. He is now a Professor with the Chinese Academy of Sciences. His current research interests include SVMs and applications. He has published four books on SVMs, one of which has over 1400 citations.


小,(工具大的话选择的快一些,小的话可以更精准)。根据处理的图片选择快速选择工具的大小。如果多选或者少选可以添加选区,从选区减去。 11.裁切工具,可以对图像进行剪裁,前裁选择后一般出现八个节点框,用户用鼠标对着节点进行缩放,用鼠标对着框外可以对选择框进行旋转,用鼠标对着选择框双击或打回车键即可以结束裁切。 12.吸管工具,主要用来吸取图像中某一种颜色,并将其变为前景色,一般用于要用到相同的颜色时候,在色板上又难以达到相同的可能,宜用该工具。用鼠标对着该颜色单击一下即可吸取。 13.画笔工具,用来对图像进行上色,主要用来对图像上色,上色的压力可由右键的选项调整压力,上色的大小可由右边的画笔处选择自已所须的笔头大小,上色的颜色可由右边的色板或颜色处选择所须的颜色。 14.铅笔工具,主要是模拟平时画画所用的铅笔一样,选用这工具后,在图像内按住鼠标左键不放并拖动,即可以进行画线,它与画笔不同之处是所画出的线条没有蒙边。笔头可以在右边的画笔中选取。 15.仿制图章工具,主要用来对图像的修复用多,亦可以理解为局部复制。先按住Alt键,再用鼠标在图像中需要复制或要修复取样点处单击一左键,再在右边的画笔处选取一个合适的笔头,就可以在图像中修复图像。 16.?图案图章工具,它也是用来复制图像,但与橡皮图章有些不同,它前提要求先用矩形选择一范围,再在”编辑”菜单中点取”定义图案”命令,然后再选合适的笔头,再在图像中进和行复制图案。 17.?历史记录画笔工具,主要作用是对图像进行恢复图像最近保存或打开图像的原来的面貌,如果对打开的图像操作后没有保存,使

Photoshop工具箱中各工具的名称及功能介绍

Photoshop工具箱中各工具的名称及功能介绍: 一、选择工具组 1、矩形选框工具:选择该工具可以在图像中创建矩形选区。按住“Shift”键拖动光标,可创建出正方形选区。 2、移动工具:移动选区的图像部分,如果没有建立选区,则移动的是整幅图像。 3、套索工具:用这个工具可以建立自由形状的选区。 4、魔棒工具:这个工具自动地以颜色近似度作为选择的依据,适合选择大面积颜色相近的区域。如果想选定不相邻的区域,按住“Shift”键对其它想要增加的部分单击,得以扩大选区。 5、裁切工具:可用来切割图像,选择使用该工具后,先在图像中建立一个矩形选区,然后通过选区边框上的控制句柄(边线上的小方块)来调整选区的大小,按下“Enter”键,选择区域以外的图像将被切掉,同时Photoshop会自动将选区内的图像建立一个新文件。按“Esc”键可以取消操作。使用该工具时,光标会变成按钮上的图标样子。 6、切片工具:可以在Photoshop6.0中切割图片并输出和将切割好的图片转移至ImageReady重进行更多的操作。 二、着色编缉工具组: 1、喷枪工具:用来绘制非常柔和的手绘线。 2、画笔工具:用来绘制比较柔和的线条。 3、橡皮图章工具:这是自由复制图像的工具。选择该工具后,按住“Alt”键单击图像某一处,然后在图像的其他地方单击鼠标左键,即可将刚才光标所在处的图像复制到该处。如果按住鼠标左键不放拖动光标,则可将复制的区域扩

大,在光标的旁边会有一个十字光标,用来指示你所复制的原图像的部位。(注意:可以在同是地打开的几个图像之间进行这种自由复制。) 4、历史记录画笔工具:使用该工具时,按住鼠标左键,在图像上拖动,光标所过之处,可将图像恢复到打开时的状态。当你对图像进行了多次编辑后,使用它能够将图像的某一部分一次恢复到初始状态。 5、橡皮擦工具:能把图层擦为透明,如果是在背景层上使用此工具,则擦为背景色。 6、模糊工具:用来减少相邻像素间的对比度,使图像变模糊。使用该工具时,按住鼠标左键拖动光标在图像上涂抹,可以减弱图像中过于生硬的颜色过渡和边缘。 7、减淡工具:拖动此工具可以增加光标经过之处图像的亮度。 三、专用工具组: 1、渐变工具:用逐渐过渡的色彩填充一个选择区域,如果没有建立选区,则填充整幅图像。 2、油漆桶工具:用前景颜色填充选择区域。 3、直接选择工具:用来调整路径上锚点的位置的工具。使用时光标变成箭头样。 4、文字工具:用来向图像中输入文字。 5、钢笔工具:路径勾点工具,勾画出首尾相接的路径。(注意:路径并不是图像的一部分,它是独立于图像存在的,这点与选区不同。利用路径可以建立复杂的选区或绘制复杂的图形,还可以对路径灵活地进行修改和编辑,并可以在路径与选区之间进行切换。) 6、矩形工具:选择此工具,拖到光标可画出矩形。 7、吸管工具:将所取位置的点的颜色作为前景色,如同时按住“Alt”键,则选取背景色。使用时,光标会变成按钮上标示的图标样。

ps中各种工具的介绍

1、选框工具共有4种包括【矩形选框工具】、【椭圆选框工具】、 【单行选框工具】和【单列选框工具】。它们的功能十分相似,但也有各自不同的特长。 矩形选框工具 使用【矩形选框工具】可以方便的在图像中制作出长宽随意的矩形选区。 操作时,只要在图像窗口中按下鼠标左键同时移动鼠标, 拖动到合适的大小松开鼠标即可建立一个简单的矩形选区了。 椭圆选框工具 使用【椭圆选框工具】可以在图像中制作出半径随意的椭圆形选区。 它的使用方法和工具选项栏的设置与【矩形选框工具】的大致相同。 单行选框工具:使用【单行选框工具】可以在图像中制作出1个像素高的单行选区. 单列选框工具:与【单行选框工具】类似,使用【单列选框工具】可以在 图像中制作出1个像素宽的单列选区。 2、套索工具:快捷键:L 套索工具也是一种经常用到的制作选区的工具, 可以用来制作折线轮廓的选区或者徒手绘画不规则的选区轮廓。 套索工具共有3种,包括:套索工具、多边形套索工具、磁性套索工具。 套索工具 使用【套索工具】,我们可以用鼠标在图像中徒手描绘, 制作出轮廓随意的选区。通常用它来勾勒一些形状不规则的图像边缘。 多边形套索工具 【多边形套索工具】可以帮助我们在图像中制作折线轮廓的多边形选区。 使用时,先将鼠标移到图像中点击以确定折线的起点, 然后再陆续点击其它折点来确定每一条折线的位置。 最后当折线回到起点时,光标下会出现一个小圆圈, 表示选择区域已经封闭,这时再单击鼠标即可完成操作。 3、魔棒工具:快捷键:W 【魔棒工具】是Photoshop中一个有趣的工具, 它可以帮助大家方便的制作一些轮廓复杂的选区,这为我们了节省大量的精力。 该工具可以把图像中连续或者不连续的颜色相近的区域作为选区的范围, 以选择颜色相同或相近的色块。魔棒工具使用起来很简单, 只要用鼠标在图像中点击一下即可完成操作。 【魔棒工具】的选项栏中包括:选择方式、容差、消除锯齿、连续的和用于所有图层 ⑴选择方式:使用方法和原理与【矩形选框工具】提到的一样,这里就不再介绍了。 ⑵容差:用来控制【魔棒工具】在识别各像素色值差异时的容差范围。 可以输入0~255之间的数值,取值越大容差的范围越大;相反取值越小容差的范围越小。

ps的工作界面的介绍

ps的工作界面的介绍(界面介绍) 界面的组成 photoshop的界面是由以下6个部分组成的。 标题栏 标题栏左边显示photoshop的标志和软件名称。右边三个图标分别是最小化、最大化和关闭按钮。菜单栏 photoshop菜单栏包括文件、编辑、图像等9个菜单。 工具属性栏 主要用来显示工具箱中所选用工具的一些延展的选项。选择不同的工具时出现的相应选项也是不同的,具体内容在工具箱介绍中详细讲解。 工具箱 对图像的修饰以及绘图等工具,都从工具箱中调用。几乎每种工具都有相应键盘快捷键,工具箱很想画家的画箱。 调板窗 用来存放不常用的调板。调板在其中只显示名称,点击后才出现整个调板,这样可以有效利用空间。防止调板过多挤占了图像的空间。 浮动调板(调板区) 用来安放制作需要的各种常用的调板。也可以称为浮动面板或面板。 其余的区域称为工作区,用来显示制作中的图像。Photoshop可以同时打开多幅图像进行制作,图像之间还可以互相传送数据。 除了菜单的位置不可变动外,其余各部分都是可以自由移动的,我们可以根据自己的喜好去安排界面。并且调板在移动过程中有自动对齐其他调板的功能,这可以让界面看上去比较整洁。 ★.标题栏 标题栏左边显示软件标志和软件名称,通过标题栏可以确认软件的版本。

标题栏右边是最小化,最大化和关闭按钮。 ★.菜单栏 菜单栏是Photoshop CS2的重要组成部分,和其他应用程序一样,Photoshop CS2根据图像处理的各种要求,将所有的功能命令分类后,分别放在9个菜单中,如下图所示,在其中几乎可以实现Photoshop的全部功能。 ★.工具属性栏 在默认状态下,Photoshop CS2中的工具属性栏位于菜单栏的下方,在其中可详细设置所选工具的各种属性。选择不同的工具或者进行不同的操作时,其属性栏中的内容会随之变化。 ★.工具箱 用phtoshop处理图像,首先要熟悉工具的使用。工具面板如下图。 根据工具的作用和特性,可分为: 1.选择与切割类; 2.编辑类; 3.矢量与文字类; 4.辅助工具,四大类,中间用分隔栏分开. 此外我们发现在一部分工具图标的右下角有个黑色的小箭头,这表示这里是一个相类似的工具的集合,用鼠标按下一个工具的按钮不放稍停一下,就会展开下级菜单,显示该集合的全部工具。

PS基本工具详解

【PS基本工具详 解】 01.选框工具---快捷键【M】 01-1.问:如何快速切换选框工具列表? 答:按住键盘“ALT”键,鼠标左击选框工具。 01-2.问:圆形选框或矩形选框如何画出正圆形或正方形选区? 答:按住键盘“shift”键,鼠标左键在画布拖动即可。 01-3.问:如何精确设定选区的大小? 答:在选框工具状态下,属性栏中样式一栏,选择固定比例或者固定大小,然后在其后的宽高设置中输入固定数值,再点击画布即可。 01-4.问:选框工具的属性栏中,羽化是干嘛的? 答:羽化就是对选区边缘进行模糊处理,使其内外衔接有个自然的过渡。在创建选区之前设置好羽化数值,创建后就能看到效果。 01-5.问:矩形选框的属性栏中,消除锯齿项为什么是灰色不能用的? 答:因为矩形选框都是直线的,所以不存在锯齿。其选项只有在椭圆选框状态下才可以勾选。 01-6.问:单行选框和单列选框是干嘛的? 答:这两个工具是为了方便选择一个像素的行和列而设置的,多用于线条绘制。 01-7.问:为什么用了单行单列选框在画布上操作,画布却不显示选区?填充也看不见? 答:这是因为画布大小设置过大,一像素的选区或者填充后的内容与画布的比例过大,所以显示不了,这时只要放大画布就能显示了。 01-8.问:如何重复制作单列或单行选区?

答:按住键盘“SHIFT”键,在画布不同位置上左键点击即可。 01-9.问:选框工具做好的选区大小不适合,能不能自由更改选区大小? 答:当然可以。点击“选择-变换选区”就能自由变换选区的大小,调整合适后,确认即可。 01-10.问:如何退出选框工具创建的虚线选区? 答:快捷键" Ctrl+D "即可。 02.选择工具---快捷键【V】 02-1.问:什么是选择工具? 答:选择工具,顾名思义是用来移动一个图层或者移动选中的内容。 02-2.问:选择工具如何快速复制? 答:在选择工具状态下,按住键盘"ALT"键,就可以实现快速复制图层或者选中内容。 02-3.问:当图层内容很多的时候,如何用选择工具快速选择想要的画面而不是每次都要点选图层才能够做移动编辑操作? 答:选择“选择工具”,勾选属性栏的“自动选择”,就能够快速选中鼠标点选的画面。 03.套索工具---快捷键【L】 03-1.问:用套索工具做出选区抠出来的图有很多的锯齿怎么办? 答:可以在使用套索之前,在属性栏的“羽化”项里设置好羽化数值,再使用套索,就能平滑边缘锯齿。 03-2.问:用套索创建选区后有多余的选区部分或者没有选到的部分,要怎么处理? 答:套索工具状态下,按住键盘“ALT”键来减去多余的选区,按住键盘“SHIFT”键来添加不够的选区。 03-3.问:用磁性套索创建选区的时候不好控制,老是产生错误的偏离的边界点,怎么办? 答:使用磁性套索工具时,可以结合DELETE删除错误的边界点再单击重新创建。 03-4.问:磁性套索工具属性栏中的宽度、边对比度和频率有什么作用? 答:1>宽度。数值框可以输入0-40之间的数值,对于某一给定的数值,

photoshop(基本功能介绍)

PS面板介绍大全 “色板”面板:该面板用于保存经常用的颜色。单击相应的色块,该颜色就会内指定为前景色。 “通道”面板该面板用于管理颜色信息或者利用通道指定的选区。主要用于创建Alpha通道及有效管理颜色通道。 “图层”面板:在合成若干个图像时使用该面板。该面板提供图层的创建和删除功能,并且可以设置图像的不透明度、图层蒙版等。 “信息”面板:该面板以数值形式显示图像信息。将鼠标的光标移动到图层上,就会显示相关信息。 “颜色”面板:用于设置背景色和前景色。颜色可通过拖动滑块指定,也可以通过输入相应颜色值指定。 “样式”面板:用于制作立体图标。只要单击鼠标即可制作出一个用特性的图像。 “直方图”面板:在该面板中可以看到图像的所有色调的分布情况,图像的色调分为最亮的区域(高光)、中间区域(中间色调)和暗淡区域(暗调)三部分。 “字符样式”面板:在该面板中可以对文字进行字体、符号、文字间距特殊效果的设置,字符样式仅作用于选定的字符。 1.“3D”面板:可以为图像制作出立体可见的效果。选择3D图层后,“3D”面板中会显示与之关联的3D文件组件,面板的顶部列出了文件中的场景、网络、材质和光源,面板底部显示了在面板顶部选择的3D组件的相关选项。

2.“动作“面板:利用该面板可以一次完成多个操作过程。记录操作顺序后,在其他图像上可以一次性应用整个过程。 3.“导航器”面板:通过放大或缩小图像来查找指定区域。利用视图框便于搜索大图像。 4.“测量记录”面板:可以为记录中的列重新排序,删除行或列,或者将记录中的数据导出到逗号分隔的文件中。 5.“段落”面板:利用该面板可以设置与文本段落相关选项。可调整行间距,增加缩进或减少缩进等。 6.“调整”面板:该面板用于对图像进行破坏性的调整。 7.“仿制源”面板:具有用于仿制图章工具或修复画笔工具的选项。可以设置5个不同的样本源并快速选择所需要改为不同的样本源时重新取样。 8.“字符”面板:在编辑或修改文本是提供相关功能的面板。可设置的主要选项有文字大小和间距、颜色、字间距等。 9.“动画”面板:利用该面板便于进行动作操作。 10.“路径”面板:用于将选区转换为路径,或者将路径转换为选区。利用该面板可以应用各种路径相关功能。 11.“历史记录”面板:该面板用于恢复操作过程,将图像操作过程按顺序记录下来。 12.“工具预设”面板:在该面板中可保持常哦那个的工具。可以将相同的工具保存为不同的设置,由此刻提高操作效率。 photoshop里各工具的用法,用途 1、选框工具:快捷键:M 选框工具共有4种包括【矩形选框工具】、【椭圆选框工具】、【单行选框工具】和【单列选框工具】。它们的功能十分相似,但也有各自不同的特长。矩形选框工具使用【矩形选框工具】可以方便的在

PS基本工具介绍

选框工具:快捷键:M 1、选框工具共有4种包括【矩形选框工具】、【椭圆选框工具】、【单行选框工具】和【单列选框工具】。它们的功能十分相似,但也有各自不同的特长。矩形选框工具 使用【矩形选框工具】可以方便的在图像中制作出长宽随意的矩形选区。操作时,只要在图像窗口中按下鼠标左键同时移动鼠标,拖动到合适的大小松开鼠标即可建立一个简单的矩形选区了。 椭圆选框工具 使用【椭圆选框工具】可以在图像中制作出半径随意的椭圆形选区。 它的使用方法和工具选项栏的设置与【矩形选框工具】的大致相同。 单行选框工具:使用【单行选框工具】可以在图像中制作出1个像素高的单行选区. 单列选框工具:与【单行选框工具】类似,使用【单列选框工具】可以在图像中制作出1个像素宽的单列选区。 2、套索工具:快捷键 套索工具也是一种经常用到的制作选区的工具, 可以用来制作折线轮廓的选区或者徒手绘画不规则的选区轮廓。 套索工具共有3种,包括:套索工具、多边形套索工具、磁性套索工具。 套索工具 使用【套索工具】,我们可以用鼠标在图像中徒手描绘, 制作出轮廓随意的选区。通常用它来勾勒一些形状不规则的图像边缘。 多边形套索工具 【多边形套索工具】可以帮助我们在图像中制作折线轮廓的多边形选区。 使用时,先将鼠标移到图像中点击以确定折线的起点, 然后再陆续点击其它折点来确定每一条折线的位置。 最后当折线回到起点时,光标下会出现一个小圆圈, 表示选择区域已经封闭,这时再单击鼠标即可完成操作。 3、魔棒工具:快捷键:W 【魔棒工具】是Photoshop中一个有趣的工具, 它可以帮助大家方便的制作一些轮廓复杂的选区,这为我们了节省大量的精力。该工具可以把图像中连续或者不连续的颜色相近的区域作为选区的范围, 以选择颜色相同或相近的色块。魔棒工具使用起来很简单, 只要用鼠标在图像中点击一下即可完成操作。 【魔棒工具】的选项栏中包括:选择方式、容差、消除锯齿、连续的和用于所有

PS基础工具全面介绍

PS基础工具全面介绍!!! 位图:又称光栅图,一般用于照片品质的图像处理,是由许多像小方块一样的"像素"组成的图形。由其位置与颜色值表示,能表现出颜色阴影的变化。在PHOTOSHOP主要用于处理位图 矢量图:通常无法提供生成照片的图像物性,一般用于工程持术绘图。如灯光的质量效果很难在一幅矢量图表现出来。 分辩率:每单位长度上的像素叫做图像的分辩率,简单讲即是电脑的图像给读者自己观看的清晰与模糊,分辩率有很多种。如屏幕分辩率,扫描仪的分辩率,打印分辩率。 图像尺寸与图像大小及分辩率的关系:如图像尺寸大,分辩率大,文件较大,所占内存大,电脑处理速度会慢,相反,任意一个因素减少,处理速度都会加快。 通道:在PHOTOSHOP中,通道是指色彩的范围,一般情况下,一种基本色为一个通道。如RGB颜色,R为红色,所以R通道的范围为红色,G为绿色,B为蓝色。 图层:在PHOTOSHOP中,一般都是多是用到多个图层制作每一层好象是一张透明纸,叠放在一起就是一个完整的图像。对每一图层进行修改处理,对其它的图层不含造成任何的影响。 图像的色彩模式: 1)RGB彩色模式:又叫加色模式,是屏幕显示的最佳颜色,由红、绿、蓝三种颜色组成,每一种颜色可以有0-255的亮度变化。 2)CMYK彩色模式:由青色Cyan、洋红色Magenta、禁用语言Yellow。而K取的是black最后一个字母,之所以不取首字母,是为了避免与蓝色(Blue)混淆,又叫减色模式。一般打印输出及印刷都是这种模式,所以打印图片一般都采用CMYK模式。 3)HSB彩色模式:是将色彩分解为色调,饱和度及亮度通过调整色调,饱和度及亮度得到颜色和变化。 4)Lab彩色模式:这种模式通过一个光强和两个色调来描述一个色调叫a,另一个色调叫b。它主要影响着色调的明暗。一般RGB转换成CMYK 都先经Lab的转换。 5)索引颜色:这种颜色下图像像素用一个字节表示它最多包含有256色的色表储存并索引其所用的颜色,它图像质量不高,占空间较少。6)灰度模式:即只用黑色和白色显示图像,像素0值为黑色,像素255为白色。 7)位图模式:像素不是由字节表示,而是由二进制表示,即黑色和白色由二进制表示,从而占磁盘空间最小。 ___________工____________具_____________用_____________法_____________ 移动工具,可以对PHOTOSHOP里的图层进行移动图层。 矩形选择工具,可以对图像选一个矩形的选择范围,一般对规则的选择用多。 单列选择工具,可以对图像在垂直方向选择一列像素,一般对比较细微的选择用。 裁切工具,可以对图像进行剪裁,前裁选择后一般出现八个节点框,用户用鼠标对着节点进行缩放,用鼠标对着框外可以对选择框进行旋转,用鼠标对着选择框双击或打回车键即可以结束裁切。 套索工具,可任意按住鼠标不放并拖动进行选择一个不规则的选择范围,一般对于一些马虎的选择可用。 多边形套索工具,可用鼠标在图像上某点定一点,然后进行多线选中要选择的范围,没有圆弧的图像勾边可以用这个工具,但不能勾出弧度 磁性套索工具,这个工具似乎有磁力一样,不须按鼠标左键而直接移动鼠标,在工具头处会出现自动跟踪的线,这条线总是走向颜色与颜色边界处,边界越明显磁力越强,将首尾连接后可完成选择,一般用于颜色与颜色差别比较大的图像选择。 魔棒工具,用鼠标对图像中某颜色单击一下对图像颜色进行选择,选择的颜色范围要求是相同的颜色,其相同程度可对魔棒工具双击,在屏幕右上角上容差值处调整容差度,数值越大,表示魔棒所选择的颜色差别大,反之,颜色差别小。 喷枪工具,主要用来对图像上色,上色的压力可由右上角的选项调整压力,上色的大小可由右边的画笔处选择自已所须的笔头大小,上色的颜

PS工具介绍

Photoshop工具介绍 一、矩形选择/椭圆形选择(选取物体、限制编辑范围) (1)按Shift,由一边向另一边画正方形或正圆 (2)按Alt,由中心向两侧画对称的形状 (3)按Shift+Alt:由中心向外画正方形或正圆 (4)按Shift+Ctrl+I:反选 (5)取消选择: 按Ctrl+D或在空白处单击 羽化:使填充的颜色边界产生柔和虚化效果 二、移动工具: 移动已选择的图像,没选区的将移动整幅画面 三、套索工具 套索:按住鼠标拖动绘制出任意的选择范围 多边形套索:由直线连成选择范围 磁性套索:沿颜色的边界进行选择 四、魔棒工具:根据图像中颜色的相似度来选取图形 容差:值越小,选取的颜色范围越小,反之,范围大 五、裁切工具:裁切画面,删除不需要的图像 六、切片工具 七、 1、修复画笔:修复图像中的缺陷,并能使修复的结果自然溶入周围图像 方法:按ALT键取样,到目标点拖动 2、修补工具:可以从图像的其它区域或使用图案来修补当前选中的区域 源:将源图像选区拖至目标区,则源区域图像将被目标区域的图像覆盖目标:将选定区域作为目标区,用其覆盖共他区域 图案:用图案覆盖选定的区域 3、颜色替换工具:用于修改红眼 八、 1、画笔:用前景色在画布上绘画,模仿现实生活中的毛笔进行绘画, 创建柔和的彩色线条 +shift:画连接的直线不透明度:决定颜色的深浅 2、铅笔:用于创建硬边界的线条 自动抹掉:前景色、背景色相互转换 参数: 1、直径:笔刷的大小 2、角度:笔刷的旋转角度 3、圆度:控制笔刷的长短轴比例,以制作扁形笔刷 4、硬度:控制笔刷边界的柔和程度,值越小,越柔和 5、间距:两笔之间的距离

PS“计算工具”的功能与用法详解

PS“计算工具”的功能与用法详解(网摘) 初学Photoshop的同学,最早接触“计算”工具,应该是网上大堆大堆介绍通道磨皮法那几个步骤,“选择反差适中的绿色通道,然后高反差保留,然后计算,然后再高反差保留,然后再计算,然后再再高反差保留,然后再计算……”。为什么计算过程中我们选择的是绿通道副本到绿通道副本的“强光”混合模式而不是选择“变暗”或者其他的混合模式呢?我们只是按部就班记住了这些过程,然后得出了想要的效果,但为什么计算工具有如此美妙之处呢?其实,通道磨皮法是通道作为选区功能的一次彻底的应用,它通过计算工具选取出人物皮肤需要提亮美化的部分,“计算”是通道的一个选择手段或者工具。“计算”和“应用图像”以及“图层混合模式”既有关联又有区别,要了解计算的作用,并且熟练为自己所用,就需要对通道、图层混合有一个初步的认识。今天我们就来通过实例来剖析一下Photoshop的“计算”工具有些什么功能,是如何工作的。 第一、“计算”工具和“通道”紧密关联 它在通道中运算产生,形成新的ALPHA专用通道,ALPHA通道,又是为我们所需要的选取部分。通道就是选区,实际上,“计算”也是通道“选区”作用的一个产生工具而已。 第二、“应用图像”和“计算”的区别 “应用图像”是直接作用于本图层,是不可逆的,而“计算”是在通道中形成新的待选区域,是待选或备用的。

第三、“计算”和“图层混合模式”紧密相连 要了解图层混合模式,必须从色彩开始系统了解PS的基础应用,图层混合模式的应用是中级阶段PS学习应该熟练掌握和认识的,后面附件中有各个混合模式的比例或者公式。 第四、“图层混合”、“应用图像”与“计算” “图层混合”是图层与图层之间以某种模式进行某种比例的混合,“图层混合”是层与层之间发生关系。 “应用图像”是某一图层内通道到通道(包括RGB通道或者ALPHA通道)采用“图层混合”的混合模式进行直接作用——实质上是把通道(包括RGB通道或者ALPHA通道)看成为图层与图层的一种混合,只是这种混合的主题是图层内的单一通道或者RGB通道,是通道到通道发生作用而直接产生结果于单一图层,“应用图像”的结果是单一图层发生了改变。 “计算”的结果,既不像图层与图层混合那样产生图层混合的视觉上的变化,又不像“应用图像”那样让单一图层发生变化,“计算”工具实质是通道与通道间,采用“图层混合”的模式进行混合,产生新的选区,这个选区是为下一步操作所需要的。 明白了以上几点之后,下面我们通过实例了解计算是如何做出我们需要的选区,这是掌握“计算”法最最关键的一点。我们拿到一张准备PS的图片,要养成没处理之前,就要想到我们将会处理什么样子或者怎样去处理。而这里的关键,就是我们充分了解图层混合这样的光和色的比例关系。这是初级迈向高级的应用的一个坎。了解这个坎,借助“图层混合”,“应用图像”,“计算”这几个工具,飞跃这个坎。 附:图层混合模式的应用 图层混合模式可以将两个图层的色彩值紧密结合在一起,从而创造出大量的效果。 混合模式在Photoshop应用中非常广泛,大多数绘画工具或编辑调整工具都可以使用混合模式,所以正确、灵活使用各种混合模式,可以为图像的效果锦上添花。 单击图层混合模式的下拉组合框,将弹出25种混合模式命令的下拉列表菜单,选择不同的混合模式命令,就可以创建不同的混合效果;图层的混合模式是用于控制上下图层的混合效果,在设置混合效果时还需设置图层的不透明度,以下介绍混合模式选项说明的不透明度在100%的前提下。 正常:该选项可以使上方图层完全遮住下方图层。 溶解:如果上方图层具有柔和的关透明边缘,选择该项则可以创建像素点状效果。 变暗:两个图层中较暗的颜色将作为混合的颜色保留,比混合色亮的像素将被替换,而比混合色暗像素保持不变。 正片叠底:整体效果显示由上方图层和下方图层的像素值中较暗的像素合成的图像效果,任意颜色与黑色重叠时将产生黑色,任意颜色和白色重叠时颜色则保持不变。 颜色加深:选择该项将降低上方图层中除黑色外的其他区域的对比度,使图像的对比度下降,产生下方图层透过上方图层的投影效

PS基本用法工具介绍

P S基本用法工具介绍 Document serial number【UU89WT-UU98YT-UU8CB-UUUT-UUT108】

PS基本用法工具介绍 它是由Adobe公司开发的图形处理系列软件之一,主要应用于在图像处理、广告设计的一个电脑软件。最先它只是在Apple机(MAC)上使用,后来也开发出了forwindow的版本。 一、基本的概念。 位图:又称光栅图,一般用于照片品质的图像处理,是由许多像小方块一样的"像素"组成的图形。由其位置与颜色值表示,能表现出颜色阴影的变化。在PHOTOSHOP主要用于处理位图。 矢量图:通常无法提供生成照片的图像物性,一般用于工程持术绘图。如灯光的质量效果很难在一幅矢量图表现出来。 分辩率:每单位长度上的像素叫做图像的分辩率,简单讲即是电脑的图像给读者自己观看的清晰与模糊,分辩率有很多种。如屏幕分辩率,扫描仪的分辩率,打印分辩率。 图像尺寸与图像大小及分辩率的关系:如图像尺寸大,分辩率大,文件较大,所占内存大,电脑处理速度会慢,相反,任意一个因素减少,处理速度都会加快。 通道:在PHOTOSHOP中,通道是指色彩的范围,一般情况下,一种基本色为一个通道。如RGB颜色,R为红色,所以R通道的范围为红色,G为绿色,B为蓝色。 图层:在PHOTOSHOP中,一般都是多是用到多个图层制作每一层好象是一张透明纸,叠放在一起就是一个完整的图像。对每一图层进行修改处理,对其它的图层不含造成任何的影响。

二、图像的色彩模式 1)RGB彩色模式:又叫加色模式,是屏幕显示的最佳颜色,由红、绿、蓝三种颜色组成,每一种颜色可以有0-255的亮度变化。 2)、CMYK彩色模式:由品蓝,品红,品黄和黄色组成,又叫减色模式。一般打印输出及印刷都是这种模式,所以打印图片一般都采用CMYK模式。 3)、HSB彩色模式:是将色彩分解为色调,饱和度及亮度通过调整色调,饱和度及亮度得到颜色和变化。 4)、Lab彩色模式:这种模式通过一个光强和两个色调来描述一个色调叫a,另一个色调叫b。它主要影响着色调的明暗。一般RGB转换成CMYK都先经Lab的转换。 5)、索引颜色:这种颜色下图像像素用一个字节表示它最多包含有256色的色表储存并索引其所用的颜色,它图像质量不高,占空间较少。 6)、灰度模式:即只用黑色和白色显示图像,像素0值为黑色,像素255为白色。 7)、位图模式:像素不是由字节表示,而是由二进制表示,即黑色和白色由二进制表示,从而占磁盘空间最小。 三、工具介绍移动工具:可以对PHOTOSHOP里的图层进行移动图层。矩形选择工具:可以对图像选一个矩形的选择范围,一般对规则的选择用多。

相关文档