Gamma distribution - Wikipedia, the free encyclopedia
Gamma

[Plots of the probability density function and of the cumulative distribution function omitted.]

Parameters: k > 0 (shape), θ > 0 (scale); or α > 0 (shape), β > 0 (rate)

Support: x ∈ (0, ∞)

Probability density function (pdf): f(x) = x^(k−1) e^(−x/θ) / (Γ(k) θ^k) = β^α x^(α−1) e^(−βx) / Γ(α)

Cumulative distribution function (CDF): F(x) = γ(k, x/θ) / Γ(k) = γ(α, βx) / Γ(α)

Mean: kθ = α/β

E[ln X]: ψ(k) + ln θ = ψ(α) − ln β (see digamma function)

Median: no simple closed form

Mode: (k − 1)θ for k > 1; (α − 1)/β for α > 1

Variance: kθ² = α/β²

Var[ln X]: ψ₁(k) = ψ₁(α) (see trigamma function)

Skewness: 2/√k = 2/√α

Excess kurtosis: 6/k = 6/α

Entropy: k + ln θ + ln Γ(k) + (1 − k) ψ(k)

Moment-generating function (mgf): (1 − θt)^(−k) for t < 1/θ; (1 − t/β)^(−α) for t < β

Characteristic function: (1 − θit)^(−k); (1 − it/β)^(−α)
Gamma distribution

From Wikipedia, the free encyclopedia

In probability theory and statistics, the gamma

distribution is a two-parameter family of continuous

probability distributions. There are three different

parameterizations in common use:

1. With a shape parameter k and a scale parameter θ.

2. With a shape parameter α = k and an inverse scale

parameter β = 1/θ, called a rate parameter.

3. With a shape parameter k and a mean parameter μ = k/β.

In each of these three forms, both parameters are positive

real numbers.

The parameterization with k and θ appears to be more common

in econometrics and certain other applied fields, where e.g.

the gamma distribution is frequently used to model waiting

times. For instance, in life testing, the waiting time until

death is a random variable that is frequently modeled with a

gamma distribution.[1]

The parameterization with α and β is more common in

Bayesian statistics, where the gamma distribution is used as

a conjugate prior distribution for various types of inverse

scale (aka rate) parameters, such as the λ of an exponential

distribution or a Poisson distribution – or for that matter,

the β of the gamma distribution itself. (The closely related

inverse gamma distribution is used as a conjugate prior for

scale parameters, such as the variance of a normal

distribution.)

If k is an integer, then the distribution represents an

Erlang distribution; i.e., the sum of k independent

exponentially distributed random variables, each of which has

a mean of θ (which is equivalent to a rate parameter of 1/

θ).

The gamma distribution is the maximum entropy probability

distribution for a random variable X for which E[X] = kθ =

α/β is fixed and greater than zero, and E[ln(X)] = ψ(k) +

ln(θ) = ψ(α) − ln(β) is fixed (ψ is the digamma

function).[2]

Contents

1 Characterization using shape k and scale θ

1.1 Probability density function

1.2 Cumulative distribution function

2 Characterization using shape α and rate β

2.1 Probability density function

2.2 Cumulative distribution function

3 Properties

3.1 Skewness

3.2 Median calculation

3.3 Summation

3.4 Scaling

3.5 Exponential family

3.6 Logarithmic expectation

3.7 Information entropy

3.8 Kullback–Leibler divergence

3.9 Laplace transform

4 Parameter estimation

4.1 Maximum likelihood estimation

4.2 Bayesian minimum mean-squared error

5 Generating gamma-distributed random variables

6 Related distributions

6.1 Special cases

[Figure: Illustration of the gamma PDF for parameter values over k and x, with θ set to 1, 2, 3, 4, 5 and 6; the surface can also be viewed layer by layer in θ, in k, and in x.]

6.2 Conjugate prior

6.3 Compound gamma

6.4 Others

7 Applications

8 Notes

9 References

10 External links

Characterization using shape k and scale θ

A random variable X that is gamma-distributed with shape k and scale θ is denoted X ~ Γ(k, θ) ≡ Gamma(k, θ).

Probability density function

The probability density function using the shape-scale parametrization is

f(x; k, θ) = x^(k−1) e^(−x/θ) / (Γ(k) θ^k)   for x > 0 and k, θ > 0.

Here Γ(k) is the gamma function evaluated at k.
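As a small numerical sketch (plain Python; the function name and test values are mine, not from the article), the density above can be evaluated directly with the standard library's math.gamma:

```python
import math

def gamma_pdf(x, k, theta):
    """Gamma density in the shape-scale parametrization:
    f(x; k, theta) = x^(k-1) * exp(-x/theta) / (Gamma(k) * theta^k)."""
    if x <= 0:
        return 0.0
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

# For k = 1 the gamma distribution reduces to Exp(1/theta),
# so gamma_pdf(x, 1, theta) = exp(-x/theta)/theta.
print(gamma_pdf(0.5, 1.0, 2.0))  # exp(-0.25)/2 ≈ 0.38940
```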

Cumulative distribution function

The cumulative distribution function is the regularized gamma function:

F(x; k, θ) = γ(k, x/θ) / Γ(k),

where γ(k, x/θ) is the lower incomplete gamma function. If k is a positive integer (i.e., the distribution is an Erlang distribution), the CDF can also be expressed as the following sum:[3]

F(x; k, θ) = 1 − Σ_{i=0}^{k−1} (1/i!) (x/θ)^i e^(−x/θ).
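For integer k the Erlang sum is easy to evaluate directly; a minimal sketch in plain Python (function name mine):

```python
import math

def gamma_cdf_integer_k(x, k, theta):
    """CDF of Gamma(k, theta) for a positive integer shape k (Erlang case):
    F(x) = 1 - sum_{i=0}^{k-1} (x/theta)^i * exp(-x/theta) / i!"""
    if x <= 0:
        return 0.0
    z = x / theta
    return 1.0 - sum(z ** i * math.exp(-z) / math.factorial(i) for i in range(k))

# k = 1 reduces to the exponential CDF 1 - exp(-x/theta).
print(gamma_cdf_integer_k(2.0, 1, 2.0))  # 1 - exp(-1) ≈ 0.63212
```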

Characterization using shape α and rate β

Alternatively, the gamma distribution can be parameterized in terms of a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter. A random variable X that is gamma-distributed with shape α and rate β is denoted X ~ Γ(α, β) ≡ Gamma(α, β).

Probability density function

The corresponding density function in the shape-rate parametrization is

f(x; α, β) = β^α x^(α−1) e^(−βx) / Γ(α)   for x > 0 and α, β > 0.

Both parametrizations are common because either can be more convenient depending on the situation.
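A quick way to see the equivalence is to evaluate both forms numerically; in this sketch (function names mine) the shape-rate density with α = k and β = 1/θ matches the shape-scale density pointwise:

```python
import math

def pdf_shape_scale(x, k, theta):
    """Gamma density, shape-scale form."""
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def pdf_shape_rate(x, alpha, beta):
    """Gamma density, shape-rate form."""
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

# With alpha = k and beta = 1/theta the two forms agree pointwise.
k, theta = 2.5, 1.7
for x in (0.1, 1.0, 5.0):
    assert abs(pdf_shape_scale(x, k, theta) - pdf_shape_rate(x, k, 1.0 / theta)) < 1e-12
```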

Cumulative distribution function

The cumulative distribution function is the regularized gamma function:

F(x; α, β) = γ(α, βx) / Γ(α),

where γ(α, βx) is the lower incomplete gamma function.

If α is a positive integer (i.e., the distribution is an Erlang distribution), the cumulative distribution function has the following series expansion:[3]

F(x; α, β) = 1 − Σ_{i=0}^{α−1} (1/i!) (βx)^i e^(−βx).

Properties

Skewness

The skewness is equal to 2/√k; it depends only on the shape parameter k, and the distribution approaches a normal distribution when k is large (approximately when k > 10).

Median calculation

Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median has no simple closed-form equation. The median for this distribution is defined as the constant x0 such that

P(X ≤ x0) = 1/2.

The difficulty of this calculation depends on the k parameter; in practice the median is best found numerically by a computer.
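A numerical illustration (plain Python, names mine): the median can be bracketed by bisection on the CDF, here computed from a power series for the regularized lower incomplete gamma function; for k = 1 this recovers the exponential median θ ln 2:

```python
import math

def reg_lower_gamma(k, z, terms=500):
    """Regularized lower incomplete gamma P(k, z), via the power series
    gamma(k, z) = z^k e^-z * sum_n z^n / (k (k+1) ... (k+n))."""
    term = 1.0 / k
    total = term
    for n in range(1, terms):
        term *= z / (k + n)
        total += term
    return total * math.exp(k * math.log(z) - z - math.lgamma(k))

def gamma_median(k, theta, tol=1e-10):
    """Median of Gamma(k, theta) by bisection on the CDF."""
    lo, hi = 0.0, k * theta + 10.0 * math.sqrt(k) * theta + 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(k, mid / theta) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(gamma_median(1.0, 1.0))  # theta * ln 2 ≈ 0.6931
```

For k > 1 the result falls between the mode (k − 1)θ and the mean kθ, as expected from the distribution's right skew.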

A method of approximating the median (ν) for any gamma distribution has been derived based on the ratio μ/(μ − ν), which to a very good approximation is a linear function of the shape parameter α when α ≥ 1.[4] This gives the approximation

where μ is the mean.

Summation

If X_i has a Gamma(k_i, θ) distribution for i = 1, 2, ..., N (i.e., all distributions have the same scale parameter θ), then

X_1 + X_2 + ⋯ + X_N ~ Gamma(k_1 + k_2 + ⋯ + k_N, θ),

provided all X_i are independent.

For the cases where the X_i are independent but have different scale parameters, see Mathai (1982) and Moschopoulos (1985).

The gamma distribution exhibits infinite divisibility.
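A seeded Monte Carlo sketch of the summation property, using the standard library's random.gammavariate (variable names mine); the empirical mean and variance of the summed draws land near the Gamma(5, θ) values 5θ and 5θ²:

```python
import random

# Independent Gamma(2, theta) and Gamma(3, theta) draws, added together,
# should behave like Gamma(5, theta): mean 5*theta, variance 5*theta^2.
rng = random.Random(42)
theta, n = 1.5, 200_000
s = [rng.gammavariate(2.0, theta) + rng.gammavariate(3.0, theta) for _ in range(n)]
mean = sum(s) / n
var = sum((x - mean) ** 2 for x in s) / n
print(mean, var)  # close to 7.5 and 11.25
```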

Scaling

If

X ~ Gamma(k, θ),

then for any c > 0,

cX ~ Gamma(k, cθ).

Hence the use of the term "scale parameter" to describe θ.

Equivalently, if

X ~ Gamma(α, β),

then for any c > 0,

cX ~ Gamma(α, β/c).

Hence the use of the term "inverse scale parameter" to describe β.

[Figure: Illustration of the Kullback–Leibler (KL) divergence for two gamma PDFs, with β = β0 + 1 and β0 set to 1, 2, 3, 4, 5 and 6; the typical asymmetry of the KL divergence is clearly visible.]

Exponential family

The Gamma distribution is a two-parameter exponential family with natural parameters k − 1 and −1/θ (equivalently, α − 1 and −β), and natural statistics X and ln(X).

If the shape parameter k is held fixed, the resulting one-parameter family of distributions is a natural exponential family.

Logarithmic expectation

One can show that

E[ln X] = ψ(k) + ln θ,

or equivalently,

E[ln X] = ψ(α) − ln β,

where ψ is the digamma function.

This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is ln(x ).

Information entropy

The information entropy is

H(X) = α − ln β + ln Γ(α) + (1 − α) ψ(α).

In the k, θ parameterization, the information entropy is given by

H(X) = k + ln θ + ln Γ(k) + (1 − k) ψ(k).

Kullback–Leibler divergence

The Kullback–Leibler divergence (KL-divergence), like the information entropy and various other theoretical properties, is more commonly seen using the α, β parameterization because of its uses in Bayesian and other theoretical statistics frameworks. The KL-divergence of Gamma(αp, βp) ("true" distribution) from Gamma(αq, βq) ("approximating" distribution) is given by[5]

D_KL(αp, βp; αq, βq) = (αp − αq) ψ(αp) − ln Γ(αp) + ln Γ(αq) + αq (ln βp − ln βq) + αp (βq − βp)/βp.

Written using the k, θ parameterization, the KL-divergence of Gamma(kp, θp) from Gamma(kq, θq) is given by

D_KL(kp, θp; kq, θq) = (kp − kq) ψ(kp) − ln Γ(kp) + ln Γ(kq) + kq (ln θq − ln θp) + kp (θp − θq)/θq.
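The divergence can be assembled directly from E[ln X] = ψ(α) − ln β and E[X] = α/β. The sketch below (plain Python, names mine) includes a small digamma implementation via the recurrence ψ(x) = ψ(x + 1) − 1/x and the asymptotic series, and exhibits the asymmetry of the KL divergence:

```python
import math

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x, then an
    asymptotic expansion once the argument is large enough."""
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    # psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)
    return result + math.log(x) - 0.5 * inv - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def kl_gamma(ap, bp, aq, bq):
    """KL(Gamma(ap, bp) || Gamma(aq, bq)) in the shape-rate parametrization,
    built from E[ln X] = psi(a) - ln b and E[X] = a/b under the true density."""
    return ((ap - aq) * digamma(ap)
            - math.lgamma(ap) + math.lgamma(aq)
            + aq * (math.log(bp) - math.log(bq))
            + ap * (bq - bp) / bp)

print(kl_gamma(3.0, 2.0, 3.0, 2.0))                         # 0.0 for identical distributions
print(kl_gamma(3.0, 2.0, 2.0, 1.0), kl_gamma(2.0, 1.0, 3.0, 2.0))  # positive and unequal
```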

Laplace transform

The Laplace transform of the gamma PDF is

F(s) = (1 + θs)^(−k) = β^α / (β + s)^α.

Parameter estimation

Maximum likelihood estimation

The likelihood function for N iid observations (x_1, ..., x_N) is

L(k, θ) = ∏_{i=1}^N f(x_i; k, θ),

from which we calculate the log-likelihood function

ℓ(k, θ) = (k − 1) Σ_{i=1}^N ln(x_i) − Σ_{i=1}^N x_i/θ − Nk ln(θ) − N ln Γ(k).

Finding the maximum with respect to θ by taking the derivative and setting it equal to zero yields the maximum likelihood estimator of the θ parameter:

θ̂ = (1/(kN)) Σ_{i=1}^N x_i.

Substituting this into the log-likelihood function gives

ℓ(k) = (k − 1) Σ_{i=1}^N ln(x_i) − Nk − Nk ln( (Σ x_i)/(kN) ) − N ln Γ(k).

Finding the maximum with respect to k by taking the derivative and setting it equal to zero yields

ln(k) − ψ(k) = ln( (1/N) Σ_{i=1}^N x_i ) − (1/N) Σ_{i=1}^N ln(x_i).

There is no closed-form solution for k. The function is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example, Newton's method. An initial value of k can be found either using the method of moments, or using the approximation

If we let

s = ln( (1/N) Σ_{i=1}^N x_i ) − (1/N) Σ_{i=1}^N ln(x_i),

then k is approximately

k ≈ (3 − s + √((s − 3)² + 24s)) / (12s),

which is within 1.5% of the correct value.[6] An explicit form for the Newton–Raphson update of this initial guess is:[7]

k ← k − (ln(k) − ψ(k) − s) / (1/k − ψ′(k)).
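A sketch of the closed-form initial estimate (plain Python; the function name is mine, and the formula is the standard approximation attributed to the Minka reference cited above), recovering the parameters of seeded synthetic data:

```python
import math
import random

def gamma_mle_approx(xs):
    """Closed-form approximation to the ML estimates of (k, theta):
    s = ln(mean) - mean(ln x), k ≈ (3 - s + sqrt((s - 3)^2 + 24 s)) / (12 s),
    and theta-hat = mean / k."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.log(mean) - sum(math.log(x) for x in xs) / n
    k = (3.0 - s + math.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    return k, mean / k

random.seed(0)
xs = [random.gammavariate(4.0, 0.5) for _ in range(100_000)]
k_hat, theta_hat = gamma_mle_approx(xs)
print(k_hat, theta_hat)  # close to the true values 4 and 0.5
```

In practice this estimate would seed one or two Newton steps of the update shown above.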

Bayesian minimum mean-squared error

With known k and unknown θ, the posterior density function for θ (using the standard scale-invariant prior for θ) is

P(θ | k, x_1, ..., x_N) ∝ (1/θ) ∏_{i=1}^N f(x_i; k, θ).

Denoting

y ≡ Σ_{i=1}^N x_i,

integration over θ can be carried out using a change of variables, revealing that 1/θ is gamma-distributed with parameters α = Nk, β = y.

The moments can be computed by taking the ratio of the m-th moment to the zeroth moment:

E[θ^m] = Γ(Nk − m) y^m / Γ(Nk),

which shows that the mean ± standard deviation estimate of the posterior distribution for θ is

y/(Nk − 1) ± y / ((Nk − 1) √(Nk − 2)).

Generating gamma-distributed random variables

Given the scaling property above, it is enough to generate gamma variables with θ = 1 (equivalently β = 1), as a sample can later be converted to any value of θ by simple multiplication (or to any rate β by division).

Using the fact that a Gamma(1, 1) distribution is the same as an Exp(1) distribution, and noting the method of generating exponential variables, we conclude that if U is uniformly distributed on (0, 1], then −ln(U) is distributed Gamma(1, 1). Now, using the "α-addition" property of the gamma distribution, we expand this result:

−Σ_{k=1}^n ln(U_k) ~ Gamma(n, 1),

where the U_k are all uniformly distributed on (0, 1] and independent. All that is left now is to generate a variable distributed as Gamma(δ, 1) for 0 < δ < 1 and apply the "α-addition" property once more. This is the most difficult part.

Random generation of gamma variates is discussed in detail by Devroye,[8] noting that none are uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid.[9] For

arbitrary values of the shape parameter, one can apply the Ahrens and Dieter[10] modified acceptance-rejection method Algorithm GD (shape k ≥ 1), or transformation method[11] when 0 < k < 1. Also see Cheng and Feast Algorithm GKM 3[12] or Marsaglia's squeeze method.[13]

The following is a version of the Ahrens-Dieter acceptance-rejection method:[10]

1. Let m be 1.

2. Generate V_{3m−2}, V_{3m−1} and V_{3m} as independent uniformly distributed on (0, 1] variables.

3. If V_{3m−2} ≤ v0, where v0 = e/(e + δ), then go to step 4, else go to step 5.

4. Let ξ_m = V_{3m−1}^(1/δ), η_m = V_{3m} ξ_m^(δ−1). Go to step 6.

5. Let ξ_m = 1 − ln(V_{3m−1}), η_m = V_{3m} e^(−ξ_m).

6. If η_m > ξ_m^(δ−1) e^(−ξ_m), then increment m and go to step 2.

7. Assume ξ = ξ_m to be the realization of Gamma(δ, 1).

A summary of this is

θ (ξ − Σ_{i=1}^{⌊k⌋} ln(U_i)) ~ Gamma(k, θ),

where

⌊k⌋ is the integral part of k,

ξ has been generated using the algorithm above with δ = {k} (the fractional part of k),

the U_i and V_l are distributed as explained above and are all independent.
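Putting the pieces together, here is a hedged sketch of a complete generator (plain Python, names mine): ⌊k⌋ unit exponentials via −ln U, one fractional-shape draw by a two-branch acceptance-rejection of the kind described above, and a final scaling by θ. This is an illustration of the construction, not the exact published Algorithm GD:

```python
import math
import random

def gamma_frac(delta, rng):
    """Draw Gamma(delta, 1) for 0 < delta < 1 by acceptance-rejection,
    mixing a power-law proposal on (0, 1) with a shifted exponential on (1, inf)."""
    v0 = math.e / (math.e + delta)
    while True:
        v1 = rng.random()
        v2 = 1.0 - rng.random()   # uniform on (0, 1], safe to take log of
        v3 = rng.random()
        if v1 <= v0:
            xi = v2 ** (1.0 / delta)      # proposal density ∝ x^(delta-1) on (0, 1)
            accept = v3 <= math.exp(-xi)
        else:
            xi = 1.0 - math.log(v2)       # shifted exponential on (1, inf)
            accept = v3 <= xi ** (delta - 1.0)
        if accept:
            return xi

def gamma_variate(k, theta, rng):
    """Gamma(k, theta): floor(k) unit exponentials (alpha-addition of -ln U
    terms) plus one fractional-shape draw, scaled by theta."""
    n = int(k)
    delta = k - n
    total = -sum(math.log(1.0 - rng.random()) for _ in range(n))
    if delta > 0.0:
        total += gamma_frac(delta, rng)
    return theta * total

rng = random.Random(123)
samples = [gamma_variate(2.5, 2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # should be close to k * theta = 5.0
```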

While the above approach is technically correct, Devroye notes that it is linear in the value of k and in general is not a good choice. Instead he recommends using either rejection-based or table-based methods, depending on context.[14]

Related distributions

Special cases

If X ~ Gamma(k = 1, θ = λ^(−1)), then X has an exponential distribution with rate parameter λ.

If X ~ Gamma(k = ν/2, θ = 2), then X is identical to χ²(ν), the chi-squared distribution with ν degrees of freedom. Conversely, if Q ~ χ²(ν) and c is a positive constant, then cQ ~ Gamma(k = ν/2, θ = 2c).

If k is an integer, the gamma distribution is an Erlang distribution and is the probability distribution of the waiting time until the k-th "arrival" in a one-dimensional Poisson process with intensity 1/θ. If X ~ Gamma(k, θ) with integer k and N ~ Poisson(x/θ), then

P(X ≤ x) = P(N ≥ k).
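For integer k this Poisson-process identity can be checked numerically: the Poisson-sum form of the Erlang CDF agrees with the general incomplete-gamma CDF computed by its power series (plain Python, names mine):

```python
import math

def erlang_cdf(x, k, theta):
    """P(X <= x) for X ~ Gamma(k, theta) with integer k, via the Poisson sum:
    the waiting time to the k-th arrival is <= x exactly when at least k
    arrivals (Poisson with mean x/theta) have occurred by time x."""
    z = x / theta
    return 1.0 - sum(z ** i * math.exp(-z) / math.factorial(i) for i in range(k))

def reg_lower_gamma(k, z, terms=500):
    """Regularized lower incomplete gamma P(k, z) by power series."""
    term = 1.0 / k
    total = term
    for n in range(1, terms):
        term *= z / (k + n)
        total += term
    return total * math.exp(k * math.log(z) - z - math.lgamma(k))

# The Poisson-sum form agrees with the general incomplete-gamma CDF.
for k, theta, x in [(1, 1.0, 0.7), (3, 2.0, 5.0), (5, 0.5, 4.0)]:
    assert abs(erlang_cdf(x, k, theta) - reg_lower_gamma(k, x / theta)) < 1e-9
```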

If X has a Maxwell–Boltzmann distribution with parameter a, then X² ~ Gamma(3/2, 2a²).

If X ~ Gamma(k, θ), then √X follows a generalized gamma distribution with parameters p = 2, d = 2k, and a = √θ.[citation needed]

An exponential distribution also arises as a transformation of the skew-logistic distribution: see skew-logistic distribution.

Conjugate prior

In Bayesian inference, the gamma distribution is the conjugate prior to many likelihood distributions: the Poisson, exponential, normal (with known mean), Pareto, gamma with known shape σ, inverse gamma with known shape parameter, and Gompertz with known scale parameter.
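As a minimal sketch of this conjugacy for the Poisson case (plain Python, names mine; the update rule is the standard shape-rate posterior with shape α + Σ x_i and rate β + n):

```python
def gamma_poisson_update(alpha, beta, counts):
    """Conjugate update of a Gamma(alpha, beta) prior (shape-rate) on a
    Poisson rate lambda after observing iid counts:
    posterior = Gamma(alpha + sum(counts), beta + len(counts))."""
    return alpha + sum(counts), beta + len(counts)

alpha, beta = 2.0, 1.0      # prior mean alpha/beta = 2.0
counts = [3, 5, 4, 6, 2]    # observed Poisson counts
a_post, b_post = gamma_poisson_update(alpha, beta, counts)
print(a_post, b_post, a_post / b_post)  # 22.0 6.0, posterior mean ≈ 3.67
```

The posterior mean a_post/b_post sits between the prior mean and the sample mean, pulled toward the data as more counts arrive.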

The Gamma distribution's conjugate prior is:[15]

where Z is the normalizing constant, which has no closed-form solution. The posterior distribution can be found by updating the parameters as follows,

where n is the number of observations and x_i is the i-th observation.

Compound gamma

If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse-scale forms a conjugate prior. The compound distribution, which results from integrating out the inverse-scale has a closed form solution, known as the compound gamma distribution.[16]

Others

If X has a Gamma(k, θ) distribution, then 1/X has an inverse-gamma distribution with shape parameter k and scale parameter θ^(−1), using the parameterization given by the inverse-gamma distribution.

If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independently distributed, then X/(X + Y) has a beta

distribution with parameters α and β.

If X_i are independently distributed Gamma(α_i, 1) respectively, then the vector (X_1/S, ..., X_n/S), where S = X_1 + ⋯ + X_n, follows a Dirichlet distribution with parameters α_1, …, α_n.

For large k the gamma distribution converges to a Gaussian distribution with mean μ = kθ and variance σ² = kθ².

The Gamma distribution is the conjugate prior for the precision of the normal distribution with known mean.

The Wishart distribution is a multivariate generalization of the gamma distribution (samples are positive-definite matrices rather than positive real numbers).

The Gamma distribution is a special case of the generalized gamma distribution, the generalized integer gamma distribution, and the generalized inverse Gaussian distribution.

Among the discrete distributions, the negative binomial distribution is sometimes considered the discrete analogue of the Gamma distribution.

Tweedie distributions – the gamma distribution is a member of the family of Tweedie exponential dispersion

models.

Applications

The gamma distribution has been used to model the size of insurance claims[17] and rainfalls.[18] This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process. The gamma distribution is also used to model errors in multi-level Poisson regression models, because the combination of the Poisson distribution and a gamma distribution is a negative binomial distribution.
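The Poisson-gamma/negative-binomial connection can be illustrated with a seeded mixture simulation (plain Python, names mine; the Poisson sampler is Knuth's multiplication method): the mixed counts show the negative binomial's mean kθ and overdispersed variance kθ(1 + θ):

```python
import math
import random

def poisson_knuth(lam, rng):
    """Poisson(lam) sample via Knuth's multiplication method (fine for modest lam)."""
    limit = math.exp(-lam)
    n = 0
    p = rng.random()
    while p > limit:
        n += 1
        p *= rng.random()
    return n

# Mixing a Poisson rate over Gamma(k, theta) gives a negative binomial:
# marginal mean k*theta, variance k*theta*(1 + theta), i.e. overdispersed
# relative to a pure Poisson model with the same mean.
rng = random.Random(7)
k, theta, trials = 3.0, 1.5, 200_000
ns = [poisson_knuth(rng.gammavariate(k, theta), rng) for _ in range(trials)]
mean = sum(ns) / trials
var = sum((n - mean) ** 2 for n in ns) / trials
print(mean, var)  # close to 4.5 and 11.25
```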

In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals.[19] Although in practice the gamma distribution often provides a good fit, there is no underlying biophysical motivation for using it.

In bacterial gene expression, the copy number of a constitutively expressed protein often follows the gamma distribution, where the scale and shape parameter are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced by a single mRNA during its lifetime.[20]

The gamma distribution is widely used as a conjugate prior in Bayesian statistics. It is the conjugate prior for the precision (i.e. inverse of the variance) of a normal distribution. It is also the conjugate prior for the exponential distribution.

Notes

1. ^ See Hogg and Craig (1978, Remark 3.3.1) for an explicit motivation

2. ^ Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics (Elsevier): 219–230. Retrieved 2011-06-02.

3. ^ a b Papoulis, Pillai, Probability, Random Variables, and Stochastic Processes, Fourth Edition

4. ^ Banneheka BMSG, Ekanayake GEMUPD (2009) "A new point estimator for the median of Gamma distribution". Viyodaya J Science,

14:95-103

5. ^ W.D. Penny, KL-Divergences of Normal, Gamma, Dirichlet, and Wishart densities

6. ^ Minka, Thomas P. (2002) "Estimating a Gamma distribution".

7. ^ Choi, S.C.; Wette, R. (1969) "Maximum Likelihood Estimation of the Parameters of the Gamma Distribution and Their Bias",

Technometrics, 11(4) 683–690

8. ^ Luc Devroye (1986). Non-Uniform Random Variate Generation. New York: Springer-Verlag. See Chapter 9, Section 3, pages 401–428.

9. ^ Devroye (1986), p. 406.

10. ^ a b Ahrens, J. H. and Dieter, U. (1982). Generating gamma variates by a modified rejection technique. Communications of

the ACM, 25, 47–54. Algorithm GD, p. 53.

11. ^ Ahrens, J. H.; Dieter, U. (1974). "Computer methods for sampling from gamma, beta, Poisson and binomial distributions". Computing 12: 223–246. CiteSeerX: 10.1.1.93.3828.

12. ^ Cheng, R.C.H., and Feast, G.M. Some simple gamma variate generators. Appl. Stat. 28 (1979), 290-295.

13. ^ Marsaglia, G. The squeeze method for generating gamma variates. Comput, Math. Appl. 3 (1977), 321-325.

14. ^ Luc Devroye (1986). Non-Uniform Random Variate Generation. New York: Springer-Verlag. See Chapter 9, Section 3, pages 401–428.

15. ^ Fink, D. 1995. A Compendium of Conjugate Priors. In progress report: Extension and enhancement of methods for setting data quality objectives. (DOE contract 95-831).

16. ^ Dubey, Satya D. (December 1970). "Compound gamma, beta and F distributions". Metrika 16: 27–31. doi:10.1007/BF02613934.

17. ^ p. 43, Philip J. Boland, Statistical and Probabilistic Methods in Actuarial Science, Chapman & Hall CRC 2007

18. ^ Aksoy, H. (2000) "Use of Gamma Distribution in Hydrological Analysis", Turk J. Engin Environ Sci, 24, 419–428.

19. ^ J. G. Robson and J. B. Troy, "Nature of the maintained discharge of Q, X, and Y retinal ganglion cells of the cat," J.

Opt. Soc. Am. A 4, 2301-2307 (1987)

20. ^ N. Friedman, L. Cai and X. S. Xie (2006) "Linking stochastic dynamics to population distribution: An analytical framework

of gene expression," Phys. Rev. Lett. 97, 168302.

References

R. V. Hogg and A. T. Craig (1978) Introduction to Mathematical Statistics, 4th edition. New York: Macmillan.

(See Section 3.3.)

P. G. Moschopoulos (1985) The distribution of the sum of independent gamma random variables, Annals of the Institute of Statistical Mathematics, 37, 541-544

A. M. Mathai (1982) Storage capacity of a dam with gamma type inputs, Annals of the Institute of

Statistical Mathematics, 34, 591-597

External links

Hazewinkel, Michiel, ed. (2001), "Gamma-distribution", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Weisstein, Eric W., "Gamma distribution", MathWorld.

Engineering Statistics Handbook

Categories: Continuous distributions Factorial and binomial topics Conjugate prior distributions

Exponential family distributions Infinitely divisible probability distributions

This page was last modified on 1 July 2013 at 20:51.

Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy.

Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
