Ott: Effective Tool Support for the Working Semanticist

Peter Sewell  Francesco Zappa Nardelli  Scott Owens  Gilles Peskine
Thomas Ridge  Susmit Sarkar  Rok Strniša

University of Cambridge    INRIA

http://www.cl.cam.ac.uk/users/pes20/ott

Abstract

It is rare to give a semantic definition of a full-scale programming language, despite the many potential benefits. Partly this is because the available metalanguages for expressing semantics, usually either LaTeX for informal mathematics or the formal mathematics of a proof assistant, make it much harder than necessary to work with large definitions.

We present a metalanguage specifically designed for this problem, and a tool, ott, that sanity-checks such definitions and compiles them into proof assistant code for Coq, HOL, Isabelle, and (in progress) Twelf, together with LaTeX code for production-quality typesetting, and OCaml boilerplate. The main innovations are: (1) metalanguage design to make definitions concise, and easy to read and edit; (2) an expressive but intuitive metalanguage for specifying binding structures; and (3) compilation to proof assistant code.

This has been tested in substantial case studies, including modular specifications of calculi from the TAPL text, a Lightweight Java with Java JSR 277/294 module system proposals, and a large fragment of OCaml (around 306 rules), with machine proofs of various soundness results. Our aim with this work is to enable a phase change: making it feasible to work routinely, without heroic effort, with rigorous semantic definitions of realistic languages.

Categories and Subject Descriptors  D.3.1 [Programming Languages]: Formal Definitions and Theory

General Terms  Languages, Theory, Verification, Standardization

1. Introduction

Problem  Writing a precise semantic definition of a full-scale programming language is a challenging task that has been done only rarely, despite the many potential benefits. Indeed, Standard ML remains, 17 years after publication, the shining example of a language that is defined precisely and is at all widely used (Milner et al. 1990). Even languages such as Haskell (Peyton Jones 2003) and OCaml (Leroy et al. 2005), though designed by PL researchers and in large part based on mathematical papers, rely on prose descriptions.

Precise semantic definitions are rare for several reasons, but one important reason is that the metalanguages that are available for expressing semantic definitions are not designed for this application, making it much harder than necessary to work with large definitions. There are two main choices for a metalanguage:

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

ICFP'07, October 1–3, 2007, Freiburg, Germany.

Copyright © 2007 ACM 978-1-59593-815-2/07/0010...$5.00.

(1) Informal mathematics, expressed in LaTeX (by far the most common option).

(2) Formalised mathematics, in the language of a proof assistant such as Coq, HOL, Isabelle, or Twelf (Coq; HOL; Isabelle; Twelf).

For a small calculus either can be used without much difficulty.

A full language definition, however, might easily be 100 pages or 10,000 lines. At this scale the syntactic overhead of LaTeX markup becomes very significant, getting in the way of simply reading and writing the definition source. The absence of automatic checking of sanity properties becomes a severe problem: in our experience with the Acute language (Sewell et al. 2004), just keeping a large definition internally syntactically consistent during development is hard, and informal proof becomes quite unreliable. Further, there is no support for relating the definition to an implementation, either for generating parts of an implementation, or for testing conformance. Accidental errors are almost inescapable (Kahrs 1993; Rossberg 2001).

Proof assistants help with automatic checking, but come with their own problems. The sources of definitions are still cluttered with syntactic noise, non-trivial encodings are often needed (e.g. to deal with subgrammars and binding, and to work around limitations of the available polymorphism and inductive definition support), and facilities for parsing and pretty printing terms of the source language are limited. Typesetting of definitions is supported only partially and only in some proof assistants, so one may have the problem of maintaining machine-readable and human-readable versions of the specification, and keeping them in sync. Moreover, each proof assistant has its own (steep) learning curve, the community is partitioned into schools (few people are fluent in more than one), and one has to commit to a particular proof assistant from the outset of a project.

A more subtle consequence of the limitations of the available metalanguages is that they obstruct re-use of definitions across the community, even of small calculi. Research groups each have their own private LaTeX macros and idioms; to build on a published calculus, one would typically re-typeset it (possibly introducing minor hopefully-inessential changes in the process). Proof assistant definitions are more often made available (e.g. in the Archive of Formal Proofs (AFP)), but are specific to a single proof assistant. Both styles of definition make it hard to compose semantics in a modular way, from fragments.

The Dream  What, then, is the ideal? We would like to have a metalanguage that is designed for the working semanticist, supporting common notations that have been developed over the years. In an email or working note one might write grammars for languages with complex binding structures, for example

  t ::=
    | let p = t in t'        bind binders(p) in t'

  p ::=
    | x                      binders = x
    | {l1=p1,...,ln=pn}      binders = binders(p1...pn)

and informal semantic rules, for example as below.

  G |- t1 : T1   ...   G |- tn : Tn
  ------------------------------------------
  G |- {l1=t1,...,ln=tn} : {l1:T1,...,ln:Tn}

These are intuitively clear, concise, and easy to read and edit. Sadly, they lack both the precision of proof assistant definitions and the production-quality typesetting of LaTeX; but one might hope that only a modicum of information need be added to make them precise, and to automatically compile them to both targets.

Contribution  We realize this dream: we describe a metalanguage specifically designed for writing semantic definitions, and a tool, ott, that sanity-checks such definitions and compiles them into proof assistant code, LaTeX code for production-quality typesetting, and OCaml boilerplate for implementations. The main innovations are:

• Metalanguage design to make definitions concise and easy to read and edit (§2). The metalanguage lets one specify the syntax of an object language, together with rules defining inductive relations, for semantic judgements. Making these easy to express demands rather different syntactic choices to those of typical programming languages. The tool builds parsers and pretty-printers for symbolic and concrete terms of the object language.

• An expressive metalanguage (but one that remains simple and intuitive) for specifying binding (§3). Nontrivial object languages often involve complex forms of binding: not just the single binders of lambda terms, which have received much attention, but also structured patterns, multiple mutually recursive let definitions, or-patterns, dependent record patterns, etc. We introduce a metalanguage that can express all these but that remains close to informal practice. We give this two interpretations. Firstly, we define substitution and free variable functions for a "fully concrete" representation, not quotiented by alpha equivalence. This is not appropriate for all examples, but suffices for surprisingly many cases (including those below), and is what the tool currently implements. Secondly, we define alpha equivalence and capture-avoiding substitution for arbitrary binding specifications, clarifying several issues. Implementing this is future work, but we prove (on paper) that under usable conditions the two notions of substitution coincide.

• Compilation to proof assistant code (§4). From a single definition in the metalanguage, the ott tool can generate proof assistant definitions in Coq, HOL, Isabelle, and (in progress) Twelf. These can then be used as a basis for formal proof and (where the proof assistant permits) code extraction and animation.

This compilation deals with the syntactic idiosyncrasies of the different targets and, more fundamentally, encodes features that are not directly translatable into each target. The main issues are: dependency analysis; support for common list idioms; generation and use of subgrammar predicates; generation of substitution and free variable functions; (for Isabelle) a tuple encoding for mutually primitive recursive functions, with auxiliary function definitions for nested pattern matching and for nested list types; (for Coq and Twelf) generation of auxiliary list types for the syntax and semantics; (for Coq) generation of useful induction principles when using native lists; (for HOL) a stronger definition library; and (for Twelf) translation of functions into relations.

We aim to generate well-formed and idiomatic definitions, without dangling proof obligations, and in good taste as a basis for user proof development.

• Substantial case studies (§5). The usefulness of the ott metalanguage and tool has been tested in several case studies. We have defined type systems and operational semantics for:

  metavar termvar, x ::=
    {{ com term variable }}
    {{ isa string }} {{ coq nat }} {{ hol string }} {{ coq-equality }}
    {{ ocaml int }} {{ lex alphanum }} {{ tex \mathit{[[termvar]]} }}
  grammar
    t :: 't_' ::=                                    {{ com term }}
      | x              ::   :: Var                   {{ com variable }}
      | \ x . t        ::   :: Lam  (+ bind x in t +)  {{ com lambda }}
      | t t'           ::   :: App                   {{ com app }}
      | ( t )          :: M :: Paren                 {{ icho [[t]] }}
      | { t / x } t'   :: M :: Tsub
                            {{ icho (tsubst_t [[t]] [[x]] [[t']]) }}

    v :: 'v_' ::=                                    {{ com value }}
      | \ x . t        ::   :: Lam                   {{ com lambda }}

    terminals :: 'terminals_' ::=
      | \              ::   :: lambda                {{ tex \lambda }}
      | -->            ::   :: red                   {{ tex \longrightarrow }}

  subrules
    v <:: t

  substitutions
    single t x :: tsubst

  defns
    Jop :: '' ::=

    defn
      t1 --> t2 :: :: reduce :: ''  {{ com [[t1]] reduces to [[t2]] }}  by

      -------------------------- :: ax_app
      (\x.t12) v2 --> {v2/x} t12

      t1 --> t1'
      -------------- :: ctx_app_fun
      t1 t --> t1' t

      t1 --> t1'
      -------------- :: ctx_app_arg
      v t1 --> v t1'

Figure 1. A small ott source file, for an untyped CBV lambda calculus, with data for Coq, HOL, Isabelle, LaTeX, and OCaml.

(1) small lambda calculi: simply typed (*) and with ML polymorphism, both call-by-value (CBV);

(2) systems from TAPL (Pierce 2002) including booleans, naturals, functions, base types, units, seq, ascription, lets, fix, products, sums, tuples, records, and variants; (*)

(3) the path-based module system of Leroy (1996), with a term language and operational semantics based on Owens and Flatt (2006);

(4) a language for rely-guarantee and separation logic (Vafeiadis and Parkinson 2007); (*)

(5) formalisation of the core ott binding specifications;

(6) Lightweight Java (LJ), a small imperative fragment of Java; (*)

(7) formalisations of Java module system proposals, based on JSR 277/294 (including LJ, around 140 semantic rules); and

(8) a large core of OCaml, including record and datatype definitions (around 306 semantic rules). (*)

For the starred systems soundness results have been proved or are well in progress, in one or more proof assistants. The TAPL and LJ examples show that a very simple form of modular semantics provided by ott can be effective: those TAPL features are defined in separate files, roughly following the structure of the TinkerType repository used to build the original text (Levin and Pierce 2003). Most of the examples were not done by the tool developers, and (4–8) were primarily driven by other research goals, so we can make a reasonable preliminary assessment of the user experience. So far it is positive. For small calculi it is easy to get started with the tool, and even for large definitions such as (7) and (8) one can focus on the semantic content rather than the LaTeX or proof assistant markup. The proof assistant representations we generate are reasonably uniform, which should enable the development of reusable proof tactics, libraries, and idioms, specific to each proof assistant. Whilst we provide a common language for semantic definitions, we do not aim to do the same for proofs.

This paper describes the key ideas underlying the ott metalanguage; it is not a user guide, but that, along with the tool itself and a number of examples, is available on the web (Sewell and Zappa Nardelli 2007). User feedback is very welcome. There is a long history of related work in this area, discussed in §6, and we conclude in §7.

2. Overview and Metalanguage Design

In this section we give an overview of the metalanguage and tool, including its syntax and type structure.

2.1 A small example

We begin with the example in Fig. 1, which is a complete ott source file for an untyped CBV lambda calculus, including the information required to generate proof assistant definitions in Coq, HOL and Isabelle, OCaml boilerplate, and LaTeX. The typeset LaTeX is shown in Fig. 2. This is a very small example, sufficing to illustrate some of the issues but not the key problems of dealing with the scale and complexity of a full language (or even a nontrivial calculus) which are our main motivation. We comment on those as we go, and invite the reader to imagine the development for their favorite programming language or calculus in parallel.

Core  First consider Fig. 1, but ignore the data within {{ }} and (+ +), and the terminals block. At the top of the figure, the metavar declaration introduces metavariables termvar (with synonym x), for term variables. The following grammar introduces grammars for terms, with nonterminal root t, and for values v:

  t :: 't_' ::=
    | x              ::   :: Var
    | \ x . t        ::   :: Lam
    | t t'           ::   :: App
    | ( t )          :: M :: Paren
    | { t / x } t'   :: M :: Tsub

  v :: 'v_' ::=
    | \ x . t        ::   :: Lam

This specifies the concrete syntax of object-language terms, the abstract syntax representations for proof-assistant mathematics, and the syntax of symbolic terms to be used in semantic rules. The terminals of the grammar (\ . ( ) { } / -->) are inferred, as those tokens that cannot be lexed as metavariables or nonterminals, avoiding the need to specify them explicitly.

Turn now to the defns block at the bottom of the figure. This introduces a mutually recursive collection of judgements, here just a single judgement t1 --> t2 for the reduction relation, defined by three rules. Consider the innocent-looking CBV beta rule:

  -------------------------- :: ax_app
  (\x.t12) v2 --> {v2/x} t12

Here the conclusion is a term of the syntactic form of the judgement being defined, t1 --> t2. Its two subterms (\x.t12) v2 and {v2/x} t12 are symbolic terms for the t grammar, not concrete terms of the object language. They involve some object-language constructors (instances of the Lam and App productions of the t grammar), just as concrete terms would, but also:

  termvar, x          term variable

  t ::=                              term
    | x                              variable
    | λx.t    bind x in t            lambda
    | t t′                           app

  v ::=                              value
    | λx.t                           lambda

  t1 ⟶ t2        t1 reduces to t2

    --------------------------  AX APP
    (λx.t12) v2 ⟶ {v2/x} t12

    t1 ⟶ t1′
    --------------  CTX APP FUN
    t1 t ⟶ t1′ t

    t1 ⟶ t1′
    --------------  CTX APP ARG
    v t1 ⟶ v t1′

Figure 2. LaTeX output generated from the Fig. 1 source file

• mention symbolic metavariables (x) and nonterminals (t12 and v2), built from the metavariable and nonterminal roots (x, t, and v) by appending structured suffixes, here just numbers;

• depend on a subtype relationship between v and t (declared by the subrules v <:: t, and checked by the tool) to allow v2 to appear in a position where a term of type t is expected; and

• involve syntax for parentheses and substitution. The concrete syntax for these is given by the Paren and Tsub productions of the t grammar, but these are metaproductions (flagged M), for which we do not want abstract syntax constructors.

The ax_app rule does not have any premises, but the other two rules do, e.g.

  t1 --> t1'
  -------------- :: ctx_app_arg
  v t1 --> v t1'

Here the premises are instances of the judgement being defined, but in general they may be symbolic terms of a formula grammar that includes all judgement forms by default, but can also contain arbitrary user-defined formula productions, for side-conditions.

This core information is already a well-formed ott source file that can be processed by the tool, sanity-checking the definitions, and default typeset output can be generated.

Proof assistant code  To generate proof assistant code we first need to specify the proof assistant representations ranged over by metavariables: the isa, coq and hol annotations of the metavar block specify that the Isabelle, Coq and HOL string, nat and string types be used. For Coq the coq-equality generates an equality decidability lemma and proof script for the type.

The proof assistant representation of abstract syntax is then generated from the grammar. For a very simple example, the Coq compilation for t generates a free type with three constructors:

  Inductive t : Set :=
     t_Var : termvar -> t
   | t_Lam : termvar -> t -> t
   | t_App : t -> t -> t.

The general case, rather more complex than this, is discussed in §4, but for now note that the metaproductions do not give rise to proof assistant constructors. Instead, the user can specify an arbitrary translation for each. These translations ('homs') give clauses of

  E ⊢ e1 : t1  ...  E ⊢ en : tn
  E ⊢ field_name1 : t → t1  ...  E ⊢ field_namen : t → tn
  t = (t′1, ..., t′l) typeconstr_name
  E ⊢ typeconstr_name gives typeconstr_name : kind {field_name′1; ...; field_name′m}
  field_name1 ... field_namen PERMUTES field_name′1 ... field_name′m
  length (e1) ... (en) ≥ 1
  -------------------------------------------------------  JTE RECORD CONSTR
  E ⊢ {field_name1 = e1; ...; field_namen = en} : t

  E |- e1 : t1 ... E |- en : tn
  E |- field_name1 : t -> t1 ... E |- field_namen : t -> tn
  t = (t1', ..., tl') typeconstr_name
  E |- typeconstr_name gives typeconstr_name : kind {field_name1'; ...; field_namem'}
  field_name1 ... field_namen PERMUTES field_name1' ... field_namem'
  length (e1) ... (en) >= 1
  -------------------------------------------------------------------------- :: record_constr
  E |- {field_name1 = e1; ...; field_namen = en} : t

Figure 3. A sample OCaml semantic rule, in LaTeX and ott source forms

functions from symbolic terms to the character string of generated proof-assistant code. In this example, the {{ icho [[t]] }} hom for the Paren production says that (t) should be translated into just the translation of t, whereas the {{ icho (tsubst_t [[t]] [[x]] [[t']]) }} hom for Tsub says that {t/x}t' should be translated into the proof-assistant application of tsubst_t to the translations of t, x, and t'. The (admittedly terse) 'icho' specifies that these translations should be done uniformly for Isabelle, Coq, HOL, and OCaml output, and one can also specify different translations for each.

The tsubst_t mentioned in the hom for Tsub above is a proof assistant identifier for a function that calculates substitution over terms, automatically generated by the substitutions declaration. We return in §3 to what this does, and to the meaning of the binding specification (+ bind x in t +) in the Lam production.

Homs can also be used to specify proof assistant types for nonterminals, in cases where one wants a specific proof assistant type expression rather than a type freely generated from the syntax.

Tuned typesetting  To fine-tune the generated LaTeX, to produce the output of Fig. 2, the user can add various data: (1) the {{ tex \mathit{[[termvar]]} }} in the metavar declaration, specifying that termvars be typeset in math italic; (2) the terminals grammar, overriding the default typesetting for terminals \ and --> by λ and ⟶; and (3) {{ com ... }} comments, annotating productions and judgements.

One can also write tex annotations to override the default typesetting at the level of productions, not just tokens. For example, in F<: one might wish to typeset term abstractions with λ and type abstractions with Λ, and fine-tune the spacing, writing productions

  | \ x : T . t   :: :: Lam
      {{ tex \lambda [[x]] \mathord{:} [[T]] . \, [[t]] }}
  | \ X <: T . t  :: :: TLam
      {{ tex \Lambda [[X]] \mathord{<:} [[T]] . \, [[t]] }}

to typeset terms such as (\X<:T11.\x:X.t12) [T2] as (ΛX<:T11. λx:X. t12) [T2]. These annotations define clauses of functions from symbolic terms to the character string of generated LaTeX, overriding the built-in default clause. Similarly, one can control typesetting of symbolic metavariable and nonterminal roots, e.g. to typeset a nonterminal root G as Γ.

Concrete terms  To fully specify the concrete syntax of the object language one need only add definitions for the lexical form of variables, concrete instances of metavariables, with the {{ lex alphanum }} hom in the metavar block. Here alphanum is a built-in regular expression. Concrete examples can then be parsed by the tool and pretty-printed into LaTeX or proof assistant code.

OCaml boilerplate  The tool can also generate OCaml boilerplate: type definitions for the abstract syntax, and functions for substitution etc. (but not the judgements). To do this one need specify only the OCaml representation of metavariables, by the ocaml hom in the metavar block, and OCaml homs for metaproductions, here already included in the uniform icho homs.
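To give a flavour of this boilerplate, here is a hand-written sketch (not actual tool output) of the kind of OCaml the Fig. 1 grammar could give rise to, assuming the {{ ocaml int }} hom so term variables are ints; the constructor and function names are our guesses at the naming scheme:

```ocaml
(* Hand-written sketch of OCaml boilerplate for the Fig. 1 grammar:
   an AST type plus fully concrete single substitution.
   Names (T_Var, tsubst_t, ...) are illustrative only. *)
type termvar = int  (* per the {{ ocaml int }} hom *)

type t =
  | T_Var of termvar
  | T_Lam of termvar * t          (* binds its termvar in the body *)
  | T_App of t * t

(* Fully concrete substitution: no renaming, and no substitution
   under a binder for the same variable. *)
let rec tsubst_t (t' : t) (x : termvar) (body : t) : t =
  match body with
  | T_Var y -> if y = x then t' else T_Var y
  | T_Lam (y, b) ->
      if y = x then T_Lam (y, b) else T_Lam (y, tsubst_t t' x b)
  | T_App (f, a) -> T_App (tsubst_t t' x f, tsubst_t t' x a)
```

The judgements themselves are not generated as OCaml, only the syntax and substitution machinery.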

2.2 List forms

For an example that is rather more typical of a large-scale semantics, consider the record typing rule shown in the top half of Fig. 3, taken from our OCaml fragment definition. The first, second, and fourth premises are uses of judgement forms; the other premises are uses of formula productions with meanings defined by homs. The rule also involves several list forms, indicated with dots '...', as is common in informal mathematics. Lists are ubiquitous in programming language syntax, and this informal notation is widely used for good reasons, being concise and clear. We therefore support it directly in the metalanguage, making it precise so that we can generate proof assistant definition clauses, together with the LaTeX shown.

The bottom half of Fig. 3 shows the source text for that rule; note the close correspondence to the typeset version, making it easy to read and edit. Looking at it more closely, we see index variables n, m, and l occurring in suffixes. There are symbolic nonterminals and metavariables indexed in three different ranges: e, t, and field_name are indexed from 1 to n, field_name′ is indexed from 1 to m, and t′ is indexed from 1 to l. To parse list forms involving dots, the tool finds subterms which can be antiunified by abstracting out components of suffixes.

With direct support for lists, we need also direct support for symbolic terms involving list projection and concatenation, e.g. in the rules below (taken from a different case study).

  {l′1=v1, .., l′n=vn}.l′j ⟶ vj                          PROJ

  t ⟶ t′
  ------------------------------------------------------  REC
  {l1=v1, .., lm=vm, l=t, l′1=t′1, .., l′n=t′n}
    ⟶ {l1=v1, .., lm=vm, l=t′, l′1=t′1, .., l′n=t′n}

Lastly, one sometimes wants to write list comprehensions rather than dots, for compactness or as a matter of general style. We support comprehensions of several forms, e.g. with explicit index i and bounds 0 to n−1, as below, and with unspecified or upper-only bounds.

  Γ ⊢ t : {li : Ti  i ∈ 0..n−1}
  -----------------------------  PROJ
  Γ ⊢ t.lj : Tj

Other types commonly used in semantics, e.g. finite maps or sets, can often be described with this list syntax in conjunction with type and metaproduction homs to specify the proof assistant representation.

2.3 Syntactic Design

Some interlinked design choices keep the metalanguage general but syntactically lightweight. Issues of concrete syntax are often best avoided in semantic research, tending to lead to heated and unproductive debate. In designing a usable metalanguage, however, providing a lightweight syntax is important, just as it is in designing a usable programming language. We aim to let the working semanticist focus on the content of their definitions without being blinded by markup, inferring data that can reasonably be inferred while retaining enough redundancy that the tool can do useful error checking of the definitions. Further, the community has developed a variety of well-chosen concise notations; we support some (though not all) of these. The tradeoffs are rather different from those for conventional programming language syntax.

There are no built-in assumptions on the structure of the mathematical definitions (e.g., we do not assume that object languages have a syntactic category of expressions, or a small-step reduction relation). Instead, the tool supports definitions of arbitrary syntax and of inductive relations over it. Syntax definitions include the full syntax of the symbolic terms used in rules (e.g. with metaproductions for whatever syntax is desired for substitution). Judgements can likewise have arbitrary syntax, as can formulae.

The tool accepts arbitrary context-free grammars, so the user need not go through the contortions required to make a non-ambiguous grammar (e.g. for yacc). Abstract syntax grammars, considered concretely, are often ambiguous, but the symbolic terms used in rules are generally rather small, so this ambiguity rarely arises in practice. Where it does, we let the user resolve it with production-name annotations in terms. The tool finds all parses of symbolic terms, flagging errors where there are multiple possibilities. It uses scannerless memoized CPS'd parser combinators, taking ideas from Johnson (1995); these are simple and sufficiently efficient.
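To illustrate the all-parses style (though not ott's actual implementation, which additionally memoizes following Johnson (1995)), here is a minimal sketch of CPS'd parser combinators that report every complete parse, so ambiguity shows up as multiple results:

```ocaml
(* Minimal sketch of CPS'd parser combinators: a parser takes the
   remaining input and a success continuation, and may invoke the
   continuation once per alternative parse. *)
type 'a parser = char list -> ('a -> char list -> unit) -> unit

let tok (c : char) : char parser = fun input k ->
  match input with
  | c' :: rest when c' = c -> k c rest
  | _ -> ()

(* Alternation tries both branches rather than committing to one. *)
let alt (p : 'a parser) (q : 'a parser) : 'a parser =
  fun input k -> p input k; q input k

let seq (p : 'a parser) (q : 'b parser) : ('a * 'b) parser =
  fun input k ->
    p input (fun x rest -> q rest (fun y rest' -> k (x, y) rest'))

(* Run a parser, collecting the results of all complete parses;
   more than one result signals an ambiguous term. *)
let parse_all (p : 'a parser) (s : string) : 'a list =
  let results = ref [] in
  let input = List.init (String.length s) (String.get s) in
  p input (fun x rest -> if rest = [] then results := x :: !results);
  List.rev !results
```

Without memoization this sketch cannot handle left-recursive grammars; that is exactly what the memoization of Johnson (1995) adds.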

Naming conventions for symbolic nonterminals and metavariables are rigidly enforced: they must be composed of one of their roots and a suffix. This makes many minor errors detectable, makes it possible to lex the suffixes, and makes parsing much less ambiguous.
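As a toy illustration of why the root+suffix convention makes suffixes lexable, the following hypothetical helper splits a symbolic name into its root and numeric suffix (ott's real suffix grammar is richer than bare numbers):

```ocaml
(* Toy illustration: split a symbolic nonterminal such as "t12" into
   its root "t" and numeric suffix "12".  Not ott code. *)
let split_suffix (s : string) : string * string =
  let n = String.length s in
  let i = ref n in
  while !i > 0 && (match s.[!i - 1] with '0' .. '9' -> true | _ -> false) do
    decr i
  done;
  (String.sub s 0 !i, String.sub s !i (n - !i))
```

Antiunification of list forms (§2.2) can then work over these suffix components rather than raw strings.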

2.4 Workflow

To make the ott tool more usable in realistic workflows, we have had to attend to some conceptually straightforward but pragmatically important engineering issues. We mention a few to give the flavour:

• Both LaTeX and proof assistant files can be filtered, replacing delimited ott-syntax symbolic terms (or concrete term examples) in documents, e.g. [[(\x.x x) x' --> t]], by their LaTeX or proof assistant rendering. Additionally, LaTeX and proof assistant code can be embedded within an ott source file (and similarly filtered). Typesetting style is indirected, so that it can be controlled by redefining LaTeX commands.

• The generated LaTeX is factored into LaTeX commands for individual rules, the rules of individual defns, etc., up to the complete definition, so that parts or all of the definition can be quoted in other documents.

• The proof assistants each have their own support, more-or-less elaborate, for fancy syntax. For Isabelle the tool can generate these declarations from an ott source grammar, so that they can be used in proof scripts and in the displayed goals during interactive proof.

• We support common prefixes for rule names and production names (e.g. the t_ in Fig. 1), and allow synonyms for nonterminal and metavariable roots (e.g. if one wanted S, T, and U to range over a grammar of types).

3. Binding Specifications and Substitution

How to deal with binding, and the accompanying notions of substitution and free variables, is a key question in formalised programming language semantics. It involves two issues: one needs to fix on a class of binding structures being dealt with, and one needs proof-assistant representations for them.

The latter has been the subject of considerable attention, with representation techniques based on names, De Bruijn indices, higher-order abstract syntax (HOAS), locally nameless terms, nominal sets, and so forth, in various proof assistants. The annotated bibliography by Charguéraud (2006) collects around 40 papers on this, and it was a central focus of the POPLmark challenge (Aydemir et al. 2005).

Almost all of this work, however, deals only with the simplest class of binding structures, the single binders we saw in the lambda abstraction production of the §2 example:

  t ::=
    | λx.t    bind x in t    lambda

in which a single variable binds in a single subterm. Realistic programming languages often have much more complex binding structures, e.g. structured patterns, multiple mutually recursive let definitions, comprehensions, or-patterns, and dependent record patterns. We therefore turn our attention to the potential range of binding structures.

3.1 The ott binding metalanguage: syntax

We introduce a novel metalanguage for specifying binding structures, expressive enough to cover all the above but remaining simple and intuitive. It comprises two forms of annotation on productions. The first, bind mse in nonterm, lets one specify that variables bind in nonterminals of the production, as in the lambda production above. Here mse is a metavariable set expression, e.g. in that lambda production just the singleton metavariable x of the production. A variable can bind in multiple nonterminals, as in the example of a simple recursive let below.

  t ::=
    | let rec f = t in t′    bind f in t
                             bind f in t′

More complex examples require one to collect together sets of variables. For example, the grammar below has structured patterns, with a let p = t in t′ production in which all the binders of the pattern p bind in the continuation t′.

  t ::=
    | x
    | (t1, t2)
    | let p = t in t′    bind binders(p) in t′

  p ::=
    | _                  binders = {}
    | x                  binders = x
    | (p1, p2)           binders = binders(p1) ∪ binders(p2)

This is expressed with the second form of annotation: user-defined auxiliary functions such as the binders above. This is an auxiliary function defined over the p grammar that identifies a set of variables to be used in the bind annotation on the let production.
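Read as ordinary code, such an auxiliary function is just structural recursion over pattern syntax; a hand-written OCaml sketch follows, where the AST type and constructor names are ours, not tool output:

```ocaml
(* Sketch of the binders auxiliary function from the pattern grammar
   above, as a recursive function over an illustrative pattern AST. *)
type pat =
  | P_Wild                 (* _         binders = {}                       *)
  | P_Var of string        (* x         binders = x                        *)
  | P_Pair of pat * pat    (* (p1,p2)   binders = union of the two halves  *)

let rec binders (p : pat) : string list =
  match p with
  | P_Wild -> []
  | P_Var x -> [x]
  | P_Pair (p1, p2) -> binders p1 @ binders p2
```

The bind binders(p) in t′ annotation then binds exactly the variables this function collects.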

The syntax of a precise fragment of the binding metalanguage is given in Fig. 4, where we have used ott to define part of the ott metalanguage. A simple type system (not shown) enforces sanity properties, e.g. that each auxiliary function is only applied to nonterminals that it is defined over, and that metavariable set expressions are well-sorted.

Further to that fragment, the tool supports binding for the list forms of §2.2. Metavariable set expressions can include lists of metavariables and auxiliary functions applied to lists of nonterminals, e.g. as in the record patterns below.

  p ::=
    | x                     b = x
    | {l1=p1, .., ln=pn}    b = b(p1 .. pn)

This suffices to express the binding structure of almost all the natural examples we have come across, including definitions of mutually recursive functions with multiple clauses for each, Join calculus definitions (Fournet et al. 1996), dependent record patterns, and many others.

Given a binding specification, the tool can generate substitution functions automatically. Fig. 1 contained the block:

  substitutions
    single t x :: tsubst

which causes ott to generate proof-assistant functions for single substitution of term variables x by terms t over all (non-subtype) types of the grammar (here just t), so a function named tsubst_t is generated. Multiple substitutions can also be generated, and there is similar machinery for free variable functions.
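For flavour, here is a hand-written sketch of the companion free-variable machinery over the Fig. 1 syntax, under the fully concrete reading described in §3.2; the AST type, names, and int representation of variables are illustrative, not generated:

```ocaml
(* Illustrative AST for the Fig. 1 grammar (termvars as ints, per the
   {{ ocaml int }} hom) and a free-variable function under the fully
   concrete reading: a Lam's bound variable is removed from the free
   variables of its body.  Results may contain duplicates. *)
type termvar = int
type t = T_Var of termvar | T_Lam of termvar * t | T_App of t * t

let rec fv_t (tm : t) : termvar list =
  match tm with
  | T_Var x -> [x]
  | T_Lam (x, b) -> List.filter (fun y -> y <> x) (fv_t b)
  | T_App (f, a) -> fv_t f @ fv_t a
```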

3.2 The ott binding metalanguage: semantics

We give meaning to these binding specifications in two ways.

The fully concrete representation  The first semantics (and the only one that is currently supported by the tool) is what we term a fully concrete representation. Perhaps surprisingly, a reasonably wide range of programming language definitions can be expressed satisfactorily without introducing alpha equivalence. In typical call-by-value or call-by-name languages, there is no reduction under term variable binders. The substitutions that arise therefore only substitute closed terms, so there is no danger of capture. The fully concrete representation uses abstract syntax terms with concrete variable names, as in the example Coq type t of §2.1 (Fig. 5 gives a general grammar of such concrete abstract syntax terms, casts). Substitution is defined so as to not substitute for bound variables within their scopes, but without using any renaming. Continuing the §2.1 example, ott generates Coq code essentially as below.

  Fixpoint tsubst_t (t2:t) (x1:termvar) (t0:t) {struct t0} : t :=
    match t0 with
    | (t_Var x) =>
        (if eq_termvar x x1 then t2 else (t_Var x))
    | (t_Lam x t1) =>
        t_Lam x (if eq_termvar x1 x then t1
                 else (tsubst_t t2 x1 t1))
    | (t_App t1 t') =>
        t_App (tsubst_t t2 x1 t1) (tsubst_t t2 x1 t')
    end.

Doing this in the general case highlights a subtlety: when substituting (e.g.) t's for x's, the only occurrences of x that are substitutable are those in instances of productions of the t grammar that comprise just a singleton x (e.g. here the Var production), as only there will the result be obviously type correct. Other occurrences, e.g. the x in the Lam production, or the x in the pattern grammars above, are not substitutable, and, correspondingly, should not appear in the results of free variable functions. In natural examples one might expect all such occurrences to be bound at some point in the grammar.

metavars metavarroot, mvr    nontermroot, ntr
         terminal, t         auxfn, f
         prodname, pn        variable, var
grammar
metavar, mv ::=
  | metavarroot suffix
nonterm, nt ::=
  | nontermroot suffix
element, e ::=
  | terminal
  | metavar
  | nonterm
metavar set expression, mse ::=
  | metavar
  | auxfn ( nonterm )
  | mse union mse'
  | {}
bindspec, bs ::=
  | bind mse in nonterm
  | auxfn = mse
prod, p ::=
  | | element1 .. elementm :: :: prodname (+ bs1 .. bsn +)
rule, r ::=
  | nontermroot :: '' ::= prod1 .. prodm
grammar rules, g ::=
  | grammar rule1 .. rulem

Figure 4. Mini-Ott in Ott: the binding specification metalanguage

concrete abstract syntax term, cast ::=
  | var : mvr
  | prodname ( cast1 , .. , castm )

Figure 5. Mini-Ott in Ott: concrete abstract syntax terms
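To make the fully concrete scheme concrete, the generated Coq substitution above can be transcribed as a hand-written OCaml sketch (the type t and function tsubst below are our own illustrative names, not ott output): bound occurrences are protected simply by stopping at a binder, with no renaming, so the function is adequate only when the substituted term is closed.

```ocaml
(* Fully concrete representation: terms carry concrete variable
   names; there is no alpha conversion. *)
type t =
  | Var of string
  | Lam of string * t
  | App of t * t

(* Substitute u for free occurrences of x in t0.  No
   capture-avoidance: adequate only for closed u. *)
let rec tsubst (u : t) (x : string) (t0 : t) : t =
  match t0 with
  | Var y -> if y = x then u else Var y
  | Lam (y, body) ->
      (* stop at a binder for x; otherwise descend *)
      Lam (y, if y = x then body else tsubst u x body)
  | App (t1, t2) -> App (tsubst u x t1, tsubst u x t2)
```

For example, substituting z for y in \x. x y yields \x. x z, while substituting for x in \x. x leaves the term unchanged.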

A precise definition of this fully concrete representation is available for the Mini-Ott of Fig. 4, including definitions of substitution and free variables over the general concrete abstract syntax terms of Fig. 5 (Sewell and Zappa Nardelli 2007), but for lack of space we do not include it here.

Alpha equivalence The fully concrete representation suffices for the case studies we describe here (notably including the OCaml fragment), but sometimes alpha equivalence really is needed—e.g. where there is substitution under binders, for dependent type environments¹, or for compositional reasoning about terms. We have therefore defined notions of alpha equivalence and capture-avoiding substitution over concrete abstract syntax terms, again for an arbitrary Mini-Ott object language and binding specification. These definitions are again precise, in Ott-Isabelle/HOL, and are available on the web (Sewell and Zappa Nardelli 2007). Here we explain just the key points by two examples.

¹ The POPLmark F<: example is nicely expressible in ott as far as LaTeX output goes, but its dependent type environments would require explicit alpha conversion in the rules to capture the intended semantics using the fully concrete representation.

First, consider the OCaml or-patterns² p1|p2, e.g. with a pattern grammar

p ::=
  | x             b = x
  | ( p1 , p2 )   b = b(p1) ∪ b(p2)
  | p1 | p2       b = b(p1) ∪ b(p2)
  | None          b = {}
  | Some p        b = b(p)

This would be subject to the conditions (captured in type rules) that for a pair pattern (p1,p2) the two subpatterns have disjoint domain, whereas for an or-pattern p1|p2 they have equal domain and types. One can then write example terms such as that below.

  let ((None, Some x) | (Some x, None)) = y in (x, x)

Here there is no simple notion of 'binding occurrence'. Instead, one should think of the two occurrences of x in the pattern, and the two occurrences of x in the continuation, as all liable to alpha-vary together. This can be captured by defining, inductively on concrete abstract syntax terms cast, partial equivalence relations e_cast over the occurrences of variables within them. In the example it would relate all four occurrences of x to each other, as below, but leave y unrelated.

  let ((None, Some x) | (Some x, None)) = y in (x, x)

Given this, one can define two terms to be alpha equivalent if their equivalence classes of occurrences can be freshly renamed to make them identical.
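The binding auxiliary b and the domain side-conditions can be sketched in hand-written OCaml (the names pat, b, and wf below are ours, for illustration; they are not part of ott): pair subpatterns must bind disjoint sets, or-pattern alternatives equal sets.

```ocaml
(* The pattern grammar from the text. *)
type pat =
  | PVar of string
  | PPair of pat * pat
  | POr of pat * pat
  | PNone
  | PSome of pat

module S = Set.Make (String)

(* b(p): the set of variables a pattern binds. *)
let rec b (p : pat) : S.t =
  match p with
  | PVar x -> S.singleton x
  | PPair (p1, p2) | POr (p1, p2) -> S.union (b p1) (b p2)
  | PNone -> S.empty
  | PSome p -> b p

(* The side-conditions the type rules impose: disjoint domains
   for pairs, equal domains for or-patterns. *)
let rec wf (p : pat) : bool =
  match p with
  | PVar _ | PNone -> true
  | PSome p -> wf p
  | PPair (p1, p2) ->
      wf p1 && wf p2 && S.is_empty (S.inter (b p1) (b p2))
  | POr (p1, p2) ->
      wf p1 && wf p2 && S.equal (b p1) (b p2)
```

The pattern ((None, Some x) | (Some x, None)) from the example is well-formed and binds exactly {x}, whereas the pair pattern (x, x) is rejected.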

For the second example, consider a system such as F<: with type environments Γ as below.

Γ ::=
  | ∅
  | Γ , X <: T
  | Γ , x : T

In setting up such a system, it is common to treat the terms and types up to alpha equivalence. There is then a technical choice about whether the judgements are also taken up to alpha equivalence: in typing judgements Γ ⊢ t : T, one can either treat Γ concretely or declare the domain of Γ to bind in t and in T. Suppose one takes the second approach, and further has each element of Γ (X<:T or x:T) binding (X or x) in the succeeding elements. (All these options can be expressed in the ott bindspec metalanguage.) For a complete judgement such as

  ∅, X<:Top, Y<:X→X, x:X, y:Y ⊢ y x : X

it is then easy to see what the binding structure is, and we can depict the e_cast as below.

  ∅, X<:Top, Y<:X→X, x:X, y:Y ⊢ y x : X

For that type environment in isolation, however, i.e.

  ∅, X<:Top, Y<:X→X, x:X, y:Y

while in some sense the X<:Top binds in the tail, it must not be alpha-varied—that only becomes possible when it is put in the complete context of a judgement. Our definitions capture this phenomenon by defining for each term cast not just an e_cast relation for 'closed' binding but also a similar o_cast partial equivalence relation for 'open' binding, relating occurrences which potentially may alpha-vary together if this term is placed in a larger, binding, context. The o_cast is not directly involved in the definition of alpha equivalence, but is (compositionally) used to calculate the e_cast. It is shown for this example below.

  ∅, X<:Top, Y<:X→X, x:X, y:Y

² Similar binding occurs in the Join calculus, where a join definition may mention the 'bound' names arbitrarily often on the left.

Nontrivial open binding also occurs in languages with dependent patterns, e.g. those with pattern matching for existential types.

We increase confidence in these definitions by proving a theorem that, under reasonable conditions, substitution of closed terms in the fully concrete representation coincides with capture-avoiding substitution for our notion of alpha equivalence for arbitrary binding specifications. The conditions involve the types of the desired substitution and the auxiliary functions present—to a first approximation, that the types of substitutions (e.g. t for x) are distinct from the domains and results of auxiliary functions (e.g. binders, collecting, from patterns p, variables x). In the absence of a widely accepted alternative class of binding specifications, there is no way to even formulate 'correctness' of that notion in general, but for specific examples one can show that it coincides with a standard representation. We did that (a routine exercise) for the untyped lambda calculus. At present both of these are hand proofs, though above mechanized definitions—we aim to mechanize them in due course.
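The content of that theorem can be illustrated on the untyped lambda calculus with a hand-written OCaml sketch (our own code, not ott output): fully concrete substitution and a renaming, capture-avoiding substitution agree when the substituted term is closed, and diverge, by capture, when it is open.

```ocaml
type t = Var of string | Lam of string * t | App of t * t

module S = Set.Make (String)

let rec fv = function
  | Var x -> S.singleton x
  | Lam (x, t) -> S.remove x (fv t)
  | App (t1, t2) -> S.union (fv t1) (fv t2)

(* Fully concrete substitution: no renaming. *)
let rec csubst u x = function
  | Var y -> if y = x then u else Var y
  | Lam (y, t) -> Lam (y, if y = x then t else csubst u x t)
  | App (t1, t2) -> App (csubst u x t1, csubst u x t2)

(* Naive capture-avoiding substitution, renaming with a gensym. *)
let fresh =
  let n = ref 0 in
  fun () -> incr n; "_v" ^ string_of_int !n

let rec asubst u x = function
  | Var y -> if y = x then u else Var y
  | App (t1, t2) -> App (asubst u x t1, asubst u x t2)
  | Lam (y, t) ->
      if y = x then Lam (y, t)
      else if S.mem y (fv u) then
        (* rename the binder away from fv(u) before descending *)
        let y' = fresh () in
        Lam (y', asubst u x (csubst (Var y') y t))
      else Lam (y, asubst u x t)
```

Substituting the closed term \z.z for x in \y. x y gives the same result either way; substituting the open term y for x in \y. x is captured by csubst but not by asubst.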

Generating proof assistant code that respects this notion of alpha equivalence, for arbitrary binding specifications, is a substantial question for future work. It could be addressed directly, in which case one has to understand how to generalise the existing proof assistant representations, and what kind of induction schemes to produce, or via a uniform translation into single binders—perhaps introducing proof-assistant binders at each bind mse point in the grammar. A more tractable (but still rather expressive) subclass of binding specifications can be obtained by simple static conditions that guarantee that there is no 'open' binding.

4. Compilation to Proof Assistant Code

Our compilation generates proof-assistant definitions: of types; of functions, for subgrammar predicates, for the binding auxiliaries of §3, for single and multiple substitution, and for free variables; and of relations, for the semantic judgements. We generate well-formed proof assistant code, without dangling proof obligations, and try also to make it idiomatic for each proof assistant, to provide a good basis for mechanized proof. All this is for Coq, HOL, and Isabelle/HOL, and work on compilation to Twelf is in progress.

4.1 Types

Each metavariable declaration gives rise simply to a proof assistant type abbreviation, for example types termvar = "string" in the Isabelle generated from Fig. 1. For each nonterminal root of the user's grammar, if (a) it is a maximal element of the subrule order, and (b) no type hom has been specified, then we generate a free type with a constructor for each non-meta production of the grammar (as in the simple §2.1 example of t). Nonterminal roots with type homs give rise to type abbreviations. Nonterminal roots that are not maximal, e.g. the v of Fig. 1, are represented using the type generated for the (unique) maximal element above them. For these we also generate and use subgrammar predicates that carve out the relevant part of that type, as discussed below.

In general there may be a complex pattern of mutual recursion among these types. Coq, HOL and Isabelle all support mutually recursive type definitions (with Inductive, Hol_datatype, and datatype respectively), but it is desirable to make each mutually recursive block as small as possible, to simplify the resulting induction principle. Accordingly, we topologically sort the rules according to a dependency order, generating mutually recursive blocks for each connected component and inserting any (singleton) type abbreviations where they fit.
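That sort is, in essence, a strongly connected components computation on the rule dependency graph. A hand-written OCaml sketch (the function and the example graph are illustrative, not ott internals), using Tarjan's algorithm, which emits the components with dependencies first:

```ocaml
(* Group mutually dependent rules into minimal mutually recursive
   blocks: the strongly connected components of the dependency
   graph, in an order where dependencies come first. *)
let scc (nodes : string list) (succs : string -> string list)
    : string list list =
  let index = Hashtbl.create 16 in
  let low = Hashtbl.create 16 in
  let on_stack = Hashtbl.create 16 in
  let stack = ref [] in
  let counter = ref 0 in
  let comps = ref [] in
  let rec visit v =
    Hashtbl.replace index v !counter;
    Hashtbl.replace low v !counter;
    incr counter;
    stack := v :: !stack;
    Hashtbl.replace on_stack v ();
    List.iter
      (fun w ->
         if not (Hashtbl.mem index w) then begin
           visit w;
           Hashtbl.replace low v
             (min (Hashtbl.find low v) (Hashtbl.find low w))
         end else if Hashtbl.mem on_stack w then
           Hashtbl.replace low v
             (min (Hashtbl.find low v) (Hashtbl.find index w)))
      (succs v);
    if Hashtbl.find low v = Hashtbl.find index v then begin
      (* pop one complete component off the stack *)
      let rec pop acc =
        match !stack with
        | w :: rest ->
            stack := rest;
            Hashtbl.remove on_stack w;
            if w = v then w :: acc else pop (w :: acc)
        | [] -> acc
      in
      comps := pop [] :: !comps
    end
  in
  List.iter (fun v -> if not (Hashtbl.mem index v) then visit v) nodes;
  List.rev !comps
```

For a hypothetical grammar where t and p mention each other and t also mentions l, this yields the block for l first and then the mutually recursive block {t, p}.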

We also have to choose a representation for productions involving list forms. For example, for a language with records one might write

metavar label, l ::=  {{ hol string }} {{ coq nat }}
indexvar index, n ::= {{ hol num }} {{ coq nat }}
grammar
t :: E_ ::=
  | { l1 = t1 , .. , ln = tn } :: :: record

In HOL and Isabelle we represent these simply with constructors whose argument types involve proof-assistant native list types, e.g.

val _ = Hol_datatype `
  t = E_record of (label # t) list`;

For Coq, however, we provide two alternatives: one can either use native lists or lists can be translated away, depending on taste. In the former case we generate an appropriate induction principle using nested fixpoints, as the default principle produced by Coq is too weak to be useful. In the latter case we synthesise an additional type for each type of lists-of-tuples that arises in the grammar. In the example, we need a type of lists of pairs of a label and a t:

Inductive list_label_t : Set :=
    Nil_list_label_t : list_label_t
  | Cons_list_label_t : label -> t -> list_label_t
      -> list_label_t
with t : Set :=
    E_record : list_label_t -> t.

These are included in the topological sort, and utility functions, e.g. to make and unmake lists, are synthesised. A similar translation will be needed for Twelf, as it has no polymorphic list type. We also generate, on request, default Coq proofs that there is a decidable equality on various types.

4.2 Functions

The generated functions are defined by pattern-matching and recursion. The patterns are generated by building canonical symbolic terms from the productions of the grammar. The recursion is essentially primitive recursion: for Coq we produce Fixpoints or Definitions as appropriate; for HOL we use an ottDefine variant of the Define package; and for Isabelle we produce primrecs. In general we have to deal both with the type dependency (the topologically sorted mutually recursive types described above) and with function dependency—for subgrammar predicates and binding auxiliaries we may have multiple mutually recursive functions over the same type.

Subgrammar Predicates We generate subgrammar predicates to carve out the subsets of each free proof assistant type (from the maximal elements of the subrule order) that represent the non-free rules of the grammar. The non-free grammar rules are the least subset of the rules that either (1) occur on the left of a subrule (<::) declaration, or (2) have a (non-meta) production that mentions a non-free rule. Note that these can include rules that are maximal elements of the subrule order, e.g. if an expression grammar included a production involving packaged values. The subgrammar predicate for a type is defined by pattern matching over constructors of the maximal type above it—for each non-meta production of the maximal type it calculates a disjunction over all the productions of the lower type that are subproductions of it, invoking other subrule predicates as appropriate.
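For the language of Fig. 1 the generated predicate is trivial, so to show the recursive structure the hand-written OCaml sketch below adds a hypothetical pair production (the Pair case and all names are ours, not ott output); the predicate for v <:: t is a disjunction over productions, recursing wherever the lower rule's production mentions v.

```ocaml
(* Terms t, with values v a non-free subgrammar:
     v ::= \x.t | (v, v)   (the pair production is hypothetical) *)
type t =
  | Var of string
  | Lam of string * t
  | App of t * t
  | Pair of t * t

(* The subgrammar predicate for v, over the maximal type t. *)
let rec is_v (t0 : t) : bool =
  match t0 with
  | Lam (_, _) -> true
  | Pair (t1, t2) -> is_v t1 && is_v t2
  | Var _ | App (_, _) -> false
```

A pair of lambdas is a value; a pair containing a variable, or any application, is not.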

Binding Auxiliaries These functions calculate the intuitive fully concrete interpretations of auxiliary functions defined in bindspecs, as in §3.2, giving proof assistant sets or lists, of metavariables or nonterminals, over each type for which the auxiliary is defined.

Substitutions and free variables The generated substitution functions also walk over the structure of the free proof assistant types. For each production, for each occurrence of a nonterminal nt within it, we first calculate the things (of whatever type is in question) binding in that nt, i.e. those that should be removed from the domain of any substitution pushed down into it. In simple cases these are just the interpretation of the mse' (of the right type) from any bind mse' in nt of the production. The substitution function clause for a production is then of one of two forms: either (1) the production comprises a single element, of the metavariable that we are substituting for, and this is within the rule of the nonterminal that it is being replaced by, or (2) all other cases. For (1) the element is compared with the domain of the substitution, and replaced by the corresponding value from the range if it is found. For (2) the substitution functions are mapped over the subelements, having first removed any bound things from the domain of the substitution. (Substitution does not descend through nonterminals with type homs, as they may be arbitrarily complex, so these should generally only be used at upper levels of a syntax, e.g. to use finite maps for type environments.) The fully concrete interpretation also lets us define substitution for nonterminals, e.g. to substitute for compound identifiers such as a dot-form M.x. This is all done similarly, but with differences in detail, for single and for multiple substitutions, and for the corresponding free variable functions. For all these we simplify the generated functions by using the dependency analysis of the syntax, only propagating recursive calls where needed.

Dealing with the proof assistants Each proof assistant introduced its own further difficulties. Leaving aside the purely syntactic idiosyncrasies (which are far from trivial, but not very interesting):

For Coq, when translating lists away, generation of functions over productions that involve list types must respect that translation. We therefore generate auxiliary functions that recurse over those list types. Coq also needs an exact dependency analysis.

For HOL, the standard Define package tries an automatic termination proof. This does not suffice for all cases of our generated functions involving list types, so we developed an ottDefine variant, with stronger support for proving termination of definitions involving list operators.

For Isabelle, we chose the primrec package, to avoid any danger of leaving dangling proof obligations, and because our functions are all intuitively primitive recursive. Unfortunately, in the released (Isabelle 2005) version, primrec does not support definitions involving several mutually recursive functions over the same type. For these we generate single functions calculating tuples of results, define the intended functions as projections of these, and generate lemmas (and simple proof scripts) characterising them in terms of the intended definitions. Further, it does not support pattern matching involving nested constructors. We therefore generate auxiliary functions for productions with embedded list types. Isabelle tuples are treated as iterated pairs, so we do the same for productions with tuples of size 3 or more. Isabelle also requires a function definition for each recursive type. In the case where there are multiple uses of the same type (e.g. several uses of t list in different productions) all the functions we wish to generate need identical auxiliaries, so identical copies must be generated. In retrospect, the choice to use primrec is debatable, and it has been suggested that future versions of Isabelle will have a more robust definition package for general functions, which should subsume some of the above.
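The tuple-of-results workaround can be seen in miniature in the hand-written OCaml sketch below (ours, for illustration, not ott output): two functions over the same type are computed by a single primitive recursion returning a pair, and the intended functions are recovered as projections.

```ocaml
type t = Var of string | Lam of string * t | App of t * t

(* One primitive recursion computing both results at once:
   (number of Var nodes, number of Lam nodes). *)
let rec count_pair = function
  | Var _ -> (1, 0)
  | Lam (_, body) ->
      let (v, l) = count_pair body in (v, l + 1)
  | App (t1, t2) ->
      let (v1, l1) = count_pair t1 in
      let (v2, l2) = count_pair t2 in
      (v1 + v2, l1 + l2)

(* The intended functions, as projections of the tuple. *)
let count_vars t = fst (count_pair t)
let count_lams t = snd (count_pair t)
```

In the Isabelle case, generated lemmas then characterise the projections directly in terms of the intended recursive definitions.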

4.3 Relations

symterm, st ::=
  | stnb
  | nonterm
symterm node body, stnb ::=
  | prodname ( ste1 , .. , stem )
symterm element, ste ::=
  | st
  | metavar
  | var : mvr

Figure 6. Mini-Ott in Ott: symbolic terms

The semantic relations are defined with the proof-assistant inductive relations packages, Inductive, Hol_reln, and inductive, respectively. Each defns block gives rise to a potentially mutually recursive definition of each defn inside it (it seems clearer not to do a topological sort here). Definition rules are expressed internally with symbolic terms. We give a simplified grammar thereof in Fig. 6, omitting the symbolic terms for list forms. A symbolic term st for a nonterminal root is either an explicit nonterminal or a node, the latter labelled with a production name and containing a list of symterm elements, which in turn are either symbolic terms, metavariables, or variables. Each definition rule gives rise to an implicational clause, essentially that the premises (ott symbolic terms of the formula grammar) imply the conclusion (an ott symbolic term of whichever judgement is being defined). Symbolic terms are compiled in several different ways:

• Nodes of non-meta productions are output as applications of the appropriate proof-assistant constructor (and, for a subrule, promoted to the corresponding constructor of a maximal rule).

• Nodes of meta productions are transformed with the user-specified homomorphism.

• Nodes of judgement forms are represented as applications of the defined relation in Coq and HOL, and as set-membership assertions in Isabelle.

Further, for each nonterminal of a non-free grammar rule, e.g. a usage of v' where v <:: t, an additional premise invoking the generated subrule predicate for the non-free rule is added, e.g. is_v v'. For Coq and HOL, explicit quantifiers are introduced for all variables mentioned in the rule.

Supporting list forms requires some additional analysis. For example, consider the record typing rule below.

Γ ⊢ t0 : T0   ..   Γ ⊢ tn-1 : Tn-1
----------------------------------------------------- Ty_Rcd
Γ ⊢ {l0=t0, .., ln-1=tn-1} : {l0:T0, .., ln-1:Tn-1}

We analyse the symbolic terms in the premises and conclusion to identify lists of nonterminals and metavariables with the same bounds—here t0..tn-1, T0..Tn-1, and l0..ln-1 all have bounds 0..n-1. To make the fact that they have the same length immediate in the generated code, we introduce a single proof assistant variable for each such collection, with appropriate projections and list maps/foralls at the usage points. For example, the HOL for the above is essentially as follows, with an l_t_Typ_list : (label # t # Typ) list.

(* Ty_Rcd *) !(l_t_Typ_list:(label#t#Typ) list) (G:G).
  (EVERY (\b. b)
     (MAP (\(l_,t_,Typ_). (Ty G t_ Typ_)) l_t_Typ_list))
  ==>
  (Ty
     G
     (E_record (MAP (\(l_,t_,Typ_). (l_,t_)) l_t_Typ_list))
     (T_Rec (MAP (\(l_,t_,Typ_). (l_,Typ_)) l_t_Typ_list)))

This seems to be a better idiom for later proof development than the alternative of three different list variables coupled with assertions that they have the same length. The HOL code for the Rec rules we saw in §2.2 is below—note the list-lifted usage of the is_v_of_t predicate, and the list appends (++) in the conclusion.

(* reduce_Rec *) !(l'_t'_list:(label#t) list)
    (l_v_list:(label#t) list) (l:label) (t:t) (t':t).
  ((EVERY (\(l_,v_). is_v_of_t v_) l_v_list) /\
   ((reduce t t')))
  ==>
  ((reduce (t_Rec (l_v_list ++ [(l,t)] ++ l'_t'_list))
           (t_Rec (l_v_list ++ [(l,t')] ++ l'_t'_list))))

For the Proj typing rule we need a specific projection (the HOL EL) to pick out the j'th element:

(* Ty_Proj *) !(l_Typ_list:(label#Typ) list)
    (j:index) (G:G) (t:t).
  (((Ty G t (T_Rec (l_Typ_list)))))
  ==>
  ((Ty
     G
     (t_Proj t ((\(l_,Typ_). l_) (EL j l_Typ_list)))
     ((\(l_,Typ_). Typ_) (EL j l_Typ_list))))

For Coq, when translating away lists, we have to introduce yet more list types for these proof assistant variables, in addition to the obvious translation of symbolic terms, and, more substantially, to introduce additional inductive relation definitions to induct over them.

As outlined here, the analysis and code generation performed by ott is reasonably complex (the tool is around 17000 lines of OCaml). It is therefore quite possible that the generated code is not what is intended, either because of soundness bugs in the tool (though none such are known at present) or through misunderstanding of its semantics, and one should not treat the tool as part of a trusted chain—it is necessary in principle to look over the generated definitions. In any proof effort, however, one will have to become intimately familiar with those definitions in any case, so we do not regard this as a problem.

5. Case Studies

Our primary goal is to provide effective tool support for the working semanticist. Assessing whether this has been achieved needs substantial case studies. Accordingly, we have specified various languages in ott, defining their type systems and operational semantics, as below.

System                        rules   LaTeX  Coq (defn mt)  HOL (defn mt)  Isabelle (defn mt)
untyped CBV lambda (Fig. 1)      3    ✓ ✓ ✓ ✓
simply typed CBV lambda          6    ✓ ✓✓ ✓✓ ✓✓
ML polymorphism                 22    ✓ ✓ ✓ ✓
TAPL full simple                63    ✓ ✓✓ ✓✓ ✓✓
POPLmark F<: with records       48    ✓
Leroy JFP96 module system       67    ✓ ✓
RG-Sep language                 22    ✓ ✓ ✓
Mini-Ott-in-Ott                 55    ✓ ✓ ✓(2)
LJ: Lightweight Java            34    ✓ ✓ (3)
LJAM: Java Module System       140    ✓ ✓
OCaml fragment                 306    ✓ ✓ ✓ ✓(1) ✓(1)

(1) see below.  (2) hand proofs.  (3) in progress.

These range in scale from toy calculi to a large fragment of OCaml. They also vary in kind: some are post-facto formalizations of existing systems, and some use ott as a tool in the service of other research goals. For most we use the tool to generate definitions in one or more of Coq, HOL, and Isabelle, indicated by the ticks in the 'defn' columns of the table, together with the typeset LaTeX. We have tested whether these definitions form a good basis for mechanized proof by machine-checked proofs of metatheoretic results (generally type preservation and progress), indicated by ticks in the 'mt' columns. The 'rules' column gives the number of semantic rules in each system, as a crude measure of its complexity. The sources, generated code, and proof scripts for most of these systems are available (Sewell and Zappa Nardelli 2007).

grammar
t :: Tm ::=                              {{ com terms: }}
  | let x = t in t' :: :: Let (+ bind x in t' +)
                                         {{ com let binding }}

defns
Jop :: '' ::=

 defn
 t --> t' :: :: red :: E {{ com Evaluation }} by

 ----------------------------------- :: LetV
 let x = v1 in t2 --> [x |-> v1] t2

 t1 --> t1'
 ---------------------------------------- :: Let
 let x = t1 in t2 --> let x = t1' in t2

defns
Jtype :: '' ::=

 defn
 G |- t : T :: :: typing :: T {{ com Typing }} by

 G |- t1 : T1
 G, x: T1 |- t2 : T2
 ----------------------------- :: Let
 G |- let x = t1 in t2 : T2

Figure 7. An ott source file for the let fragment of TAPL

TAPL full simple This covers most of the simple features, up to variants, from TAPL (Pierce 2002). It demonstrates the utility of a very simple form of modularity provided by ott, allowing clauses of grammars and semantic relations to be split between files. The original TAPL languages were produced using TinkerType (Levin and Pierce 2003) to compose features and check for conflicts. Here we build a system, similar to the TinkerType fullsimple system, from ott source files that correspond roughly to the various TinkerType components, each with syntax and semantic rules for a single feature. The ott source for let is shown in Fig. 7, to which we add: bool, bool typing, nat, nat typing, arrow typing, basety, unit, seq, ascribe, product, sum, fix, tuple, and variant, together with infrastructure common, common index, common labels, and common typing.

It also proved easy to largely reproduce the TAPL visual style and (though we did no proof) to add subtyping.

Leroy JFP96 module system This formalizes the path-based type system of Leroy (1996, §4), extended with a term language and an operational semantics.

RG-Sep language This is a concurrent while language used for ongoing work combining Rely-Guarantee reasoning with Separation Logic, defined and proved sound by Vafeiadis and Parkinson (2007).

Mini-Ott-in-Ott This precisely defines the ott binding specifications (without list forms) with their fully concrete representation and alpha equivalence. The metatheory here is a proof that for closed substitutions the two coincide. To date only a hand proof has been completed; we plan to mechanize it in due course.

LJ and LJAM LJ, by Strniša and Parkinson, is an imperative fragment of Java. LJAM extends that (again using ott modularity) with a formalization of the core part of JSR-277 and a proposal for JSR-294, which together form a proposal for a Java module system (Strniša et al. 2007).

OCaml fragment This covers a substantial core of OCaml—to a first approximation, all except subtyping, objects, and modules. Notable features that are handled are: ML-style polymorphism; pattern matching; mutable references; finiteness of the integer type; generative definitions of record and variant types; and generative exception definitions. It does not cover much of the standard library, mutable records, arrays, pattern matching guards, labels, polymorphic variants, objects, or modules.

We have tried to make our definition mirror the behaviour of the OCaml system rather closely. The OCaml manual (Leroy et al. 2005) defines the syntax with a BNF; our syntax is based on that. It describes the semantics in prose; our semantics is based on a combination of that and our experience with the language.

Experience Our experience with these examples has been very positive. The tool does make it easy to work with these definitions, allowing one to focus on the content rather than the proof assistant or LaTeX markup. We have not had to hand-edit the Ott output.

For our most substantial example, the OCaml fragment, we have proved type preservation and progress for the expression language, all machine-checked in HOL. The need for alpha-equivalence-aware reasoning arises only for type variables and type schemes. We use a de Bruijn encoding of type variables to support the formal proof effort. Since Ott does not currently support the automatic generation of such representations, we deal directly with the index shifting functions in the Ott source. This proof effort has taken only around 3 man-months, and the preceding definition effort was only another few months. Compared with our previous experiences this is remarkably lightweight: it has been possible to develop this as an example, rather than requiring a major research project in its own right. Apart from ott, the work has been aided by HOL's powerful first-order reasoning automation and its inductive definition package, and by the use of the concrete representation.
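The index shifting functions mentioned above can be sketched in hand-written OCaml; this is a standard de Bruijn shift over a toy type grammar (TVar, TArrow, and TForall are our illustrative names, not the actual OCaml-fragment syntax).

```ocaml
(* Types with de Bruijn-encoded type variables. *)
type typ =
  | TVar of int            (* de Bruijn index *)
  | TArrow of typ * typ
  | TForall of typ         (* binds index 0 in its body *)

(* shift d c ty: add d to every index >= the cutoff c;
   the cutoff increases under each binder. *)
let rec shift d c = function
  | TVar k -> if k >= c then TVar (k + d) else TVar k
  | TArrow (t1, t2) -> TArrow (shift d c t1, shift d c t2)
  | TForall body -> TForall (shift d (c + 1) body)
```

For instance, shifting under a quantifier leaves the bound index 0 alone while moving the free index 1.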

6. Related Work

As Strachey (1966) writes in the Proceedings of the first IFIP Working Conference, Formal Language Description Languages:

    A programming language is a rather large body of new
    and somewhat arbitrary mathematical notation introduced
    in the hope of making the problem of controlling computing
    machines somewhat simpler.

and the problem of dealing precisely with this notation, with the need for machine support in doing so, has spawned an extensive literature, of which we touch only on the most related points.

The proof assistants that we build on, Coq, HOL, Isabelle, and Twelf, are perhaps the most directly related work (Coq; HOL; Isabelle; Twelf). Ever since original LCF (Milner 1972), one of the main intended applications of these and related systems has been reasoning about programs and programming languages, and they have been vastly improved over the years to make this possible. Recently they have been used for a variety of substantial languages, including for example the verifying compiler work of Blazy et al. (2006) (Coq), a C expression semantics by Norrish (1999) (HOL), work on Java by Klein and Nipkow (2006) (Isabelle), and an internal language for SML by Lee et al. (2007) (Twelf). They are, however, all more-or-less general-purpose tools—by adding front-end support that is specific to the problem of defining programming language syntax and semantics, we believe ott can significantly ease the problems of working with large language definitions.

Several projects have aimed at automatically generating programming environments and/or compilers from language descriptions, including early work on the Synthesiser Generator (Reps and Teitelbaum 1984). Kahn's CENTAUR system (Borras et al. 1988) supported natural-semantics descriptions in the TYPOL language, compiling them to Prolog for execution, together with a rich user interface including an editor, and a language METAL to define abstract and concrete syntax (Terrasse (1995) also considered compilation of TYPOL to Coq). Related work by Klint (1993) and colleagues produced the ASF+SDF Meta-environment. Here SDF provides rich support for defining syntax, while ASF allows for definitions in an algebraic specification style. Again it is a programming environment, with a generic syntax-directed editor. The ERGO Support System (Lee et al. 1988) also had a strong user-interface component, but targeted (among others) ADT-OBJ and λProlog. Mosses's work on Action Semantics and Modular SOS (Mosses 2002) has been supported by various tools, but makes strong assumptions on the form of the semantic relations being defined. Moving closer in goals to ott, ClaReT (Boulton 1997) took a sophisticated description of syntax and pretty printing, and a denotational semantics, and generated HOL definitions.

In contrast to the programming environments above, ott is a more lightweight stand-alone tool for definitions, designed to fit in with existing editing, LaTeX and proof-assistant workflows and requiring less initial investment and commitment to use. (Its support for production parsing and pretty printing is less developed than several of the above, however.) Moreover, in contrast to CENTAUR and to research on automatic compiler generation, ott is not focussed on producing executable definitions—one can define arbitrary semantic relations which may or may not be algorithmic. The generality of these arbitrary inductive relation definitions means that ott should be well-suited to much present-day semantics work, for type systems and/or operational semantics.

PLT Redex (Matthews et al. 2004) is a domain-specific language for expressing reduction relation definitions and animating them. It is currently being used on a 'full-language' scale, for an R6RS Scheme definition (Findler and Matthews 2007), but is by design restricted to animation of reduction semantics. The Ruler system (Dijkstra and Swierstra 2006) provides a language for expressing type rules, generating LaTeX and implementations but not proof assistant definitions, used for a Haskell-like language.

Turning to direct support for binding, Twelf is suited to HOAS representations. FreshML (Shinwell et al. 2003), Alpha Prolog (Cheney and Urban 2004) and MLSOS (Lakin and Pitts 2007) use nominal logic and functional programming approaches, the latter two with a view to prototyping of semantics. Cαml (Pottier 2006) is the most substantial other work we are aware of that introduces a large and precisely defined class of binding specifications, from which it generates OCaml code for type definitions and substitutions. Types can be annotated with sets of atom sorts, with occurrences of atoms of those sorts treated as binding within them. inner and outer annotations let one specify that subterms are either inside or outside an enclosing binder. This seems to us less intuitive than the ott binding specifications. We conjecture that the two have mutually incomparable expressiveness.

Representing binding within proof assistants was a key aspect of the POPLmark challenge (Aydemir et al. 2005), and several comparisons have been produced, including those of Aydemir et al. (2007) and Berghofer and Urban (2006). Owens (1995) discusses pattern binding using locally nameless representations in Isabelle.

The work on concise concrete syntax by Tse and Zdancewic (2006) has similar lightweight syntax definition goals to ott, taking a concise description of a grammar but producing the conventional object-language parsing and pretty printing tools.

It is interesting to contrast our OCaml fragment example with attempts to verify aspects of the SML Definition. Early attempts, by Syme (1993), VanInwegen (1996), and Gunter and Maharaj (1995), faced severe difficulties, both from the mathematical style of the Definition and the limitations of HOL at the time, whereas, using ott and HOL4, we have found our example reasonably straightforward. Lee et al. (2007) take a rather different approach. They factor their (Twelf) definition into an internal language, and (yet to appear) a substantial elaboration from a source language to that. They thus deal with a much more sophisticated type theory (aimed at supporting source features that we do not cover, including modules), so the proof effort is hard to compare, but their semantic rules are further removed from source-language programs.

7. Conclusion

Summary We have introduced the ott metalanguage and tool for expressing semantics, incorporating metalanguage design to make definitions easy to read and edit, a novel and expressive metalanguage for expressing binding, and compilation to multiple proof assistants.

We hope that this work will enable a phase change: from the current state, in which working with fully rigorous definitions of real programming languages requires heroic effort, to a world in which that is routine.

The ott tool can be used in several different ways. Most simply, it can aid informal LaTeX mathematics, permitting definitions, and terms in proofs and exposition, to be written without syntactic noise. By parsing (and so sort-checking) this input it quickly catches a range of simple errors, e.g. inconsistent use of judgement forms. There is then a smooth path to fully rigorous proof assistant definitions: those ott definitions can be annotated with the additional information required to generate proof assistant code. In general one may also want to restructure the definitions to suit the formalization. Our experience so far suggests this is not a major issue, and hence that one can avoid early commitment to a particular proof assistant. The tool can be used at different scales: it aims to be sufficiently lightweight to be used for small calculi, but it is also designed and engineered with the pragmatics of working with full-scale programming languages in mind. Our case studies suggest that it achieves both goals. Furthermore, we hope it will make it easy to re-use definitions of calculi and languages, and also fragments thereof, across the community. Widely accepted de facto standard definitions would make it possible to discuss proposed changes to existing languages in terms of changes to those definitions, rather than solely in terms of toy calculi.

Future work There are many interesting directions for future work. First, while the fully concrete representation of binding is surprisingly widely applicable, it is far from expressing all one would like to do. We plan to explore proof assistant representations for arbitrary binding specifications, as outlined in §3.2. Another mathematical question is to consider in what sense the definitions ott generates for the different target proof assistants have the same meaning. This is intuitively plausible, but the targets are based on different logics, so it is far from trivial.
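To illustrate why the fully concrete representation is at once simple and limited, here is a sketch in OCaml (our own illustration, not ott's generated code): terms are raw trees over string names, with no quotienting by alpha-equivalence, and substitution makes no attempt at capture-avoidance. This is adequate, for example, for CBV evaluation, where only closed values are substituted.

```ocaml
(* Fully concrete terms: raw trees over string names,
   not quotiented by alpha-equivalence. *)
type t =
  | Var of string
  | Lam of string * t
  | App of t * t

(* subst v x t: replace free occurrences of x by v in t.
   No capture-avoidance, so it is sound only when v is closed. *)
let rec subst v x = function
  | Var y -> if y = x then v else Var y
  | Lam (y, body) ->
      if y = x then Lam (y, body)        (* x is shadowed: stop *)
      else Lam (y, subst v x body)
  | App (t1, t2) -> App (subst v x t1, subst v x t2)
```

Substituting an open term under a binder with this definition can capture variables, which is one concrete sense in which the representation is far from expressing all one would like to do.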

The Twelf code generation remains to be completed, and a number of other features would be useful: support for function definitions (not just inductive relations); support for contexts, with automatically generated context application and composition functions; support for generating multiple overlapping languages from a single source (e.g. sugared and non-sugared); and generation of production parsers.

With more experience using the tool, we aim also to polish the generated proof-assistant definitions and improve the available proof automation, for example to make proof scripts less dependent on the precise structure and ordering of the definitions.

Being able to easily generate definitions for multiple proof assistants also opens up new possibilities for (semi-)automatically testing conformance between semantic definitions and production implementations, on top of the various forms of proof assistant support for proof search, tactic-based symbolic evaluation, code extraction from proofs, and code generation from definitions.

Finally, we look forward to further experience and user feedback from the tool.

Acknowledgements We thank the other members of the POPLmark team, especially Benjamin Pierce, Stephanie Weirich and Steve Zdancewic, for interesting discussions on this work, James Leifer for comments on a draft, our early adopters for user feedback, and Keith Wansbrough, Matthew Fairbairn and Tom Wilkie for their work on various ott predecessors. We acknowledge the support of EPSRC grants GR/T11715 and EP/C510712, and a Royal Society University Research Fellowship (Sewell).

References

AFP. The archive of formal proofs. http://afp.sourceforge.net/.

B. Aydemir, A. Charguéraud, B. C. Pierce, R. Pollack, and S. Weirich. Engineering formal metatheory, 2007. http://www.chargueraud.org/arthur/research/2007/binders/.

B. E. Aydemir, A. Bohannon, M. Fairbairn, J. N. Foster, B. C. Pierce, P. Sewell, D. Vytiniotis, G. Washburn, S. Weirich, and S. Zdancewic. Mechanized metatheory for the masses: The POPLmark Challenge. In Proc. TPHOLs, LNCS 3603, 2005.

S. Berghofer and C. Urban. A head-to-head comparison of de Bruijn indices and names. In Proc. Int. Workshop on Logical Frameworks and Meta-Languages: Theory and Practice, pages 46–59, 2006.

S. Blazy, Z. Dargaye, and X. Leroy. Formal verification of a C compiler front-end. In Int. Symp. on Formal Methods, LNCS 4085, 2006.

P. Borras, D. Clement, T. Despeyroux, J. Incerpi, G. Kahn, B. Lang, and V. Pascual. Centaur: the system. In Proc. SDE3, pages 14–24, 1988.

R. J. Boulton. A tool to support formal reasoning about computer languages. In Proc. TACAS, LNCS 1217, pages 81–95, 1997.

A. Charguéraud. Annotated bibliography for formalization of lambda-calculus and type theory. http://fling-l.seas.upenn.edu/~plclub/cgi-bin/poplmark/index.php?title=Annotated_Bibliography, July 2006.

J. Cheney and C. Urban. Alpha-Prolog: A logic programming language with names, binding and alpha-equivalence. In Proc. ICLP, LNCS 3132, pages 269–283, 2004.

Coq. The Coq proof assistant, v. 8.0. http://coq.inria.fr/.

A. Dijkstra and S. D. Swierstra. Ruler: Programming type rules. In Proc. Functional and Logic Programming, LNCS 3945, 2006.

R. B. Findler and J. Matthews. Revised^5.92 report on the algorithmic language Scheme, Chapter 10, Formal Semantics, Jan. 2007.

C. Fournet, G. Gonthier, J.-J. Lévy, L. Maranget, and D. Rémy. A calculus of mobile agents. In Proc. CONCUR '96, LNCS 1119, 1996.

E. Gunter and S. Maharaj. Studying the ML module system in HOL. The Computer Journal: Special Issue on Theorem Proving in Higher Order Logics, 1995.

HOL. The HOL4 system, Kananaskis-3 release. http://hol.sourceforge.net/.

Isabelle. Isabelle 2005. http://isabelle.in.tum.de/.

M. Johnson. Memoization in top-down parsing. Comput. Linguist., 21(3):405–417, 1995.

S. Kahrs. Mistakes and ambiguities in the definition of Standard ML. Technical Report ECS-LFCS-93-257, University of Edinburgh, 1993.

G. Klein and T. Nipkow. A machine-checked model for a Java-like language, virtual machine, and compiler. TOPLAS, 28(4):619–695, 2006.

P. Klint. A meta-environment for generating programming environments. ACM Trans. on Soft. Eng. and Methodology, 2(2):176–201, April 1993.

M. R. Lakin and A. M. Pitts. A metalanguage for structural operational semantics. In Symposium on Trends in Functional Programming, 2007.

D. K. Lee, K. Crary, and R. Harper. Towards a mechanized metatheory of Standard ML. In Proc. POPL, January 2007.

P. Lee, F. Pfenning, G. Rollins, and W. Scherlis. The Ergo Support System: An integrated set of tools for prototyping integrated environments. In Proc. SDE3, 1988.

X. Leroy. A syntactic theory of type generativity and sharing. Journal of Functional Programming, 6(5):667–698, 1996.

X. Leroy et al. The Objective Caml system release 3.09, documentation and user's manual, Oct. 2005.

M. Y. Levin and B. C. Pierce. TinkerType: A language for playing with formal systems. Journal of Functional Programming, 13(2), Mar. 2003.

J. Matthews, R. B. Findler, M. Flatt, and M. Felleisen. A visual environment for developing context-sensitive term rewriting systems. In Proc. RTA, 2004.

R. Milner. Implementation and applications of Scott's logic for computable functions. In Proc. ACM conference on Proving assertions about programs, pages 1–6, 1972.

R. Milner, M. Tofte, and R. Harper. The Definition of Standard ML. MIT Press, 1990.

P. D. Mosses. Pragmatics of Modular SOS. In Proc. AMAST, LNCS 2442, pages 21–40, 2002.

M. Norrish. Deterministic expressions in C. In Proc. 8th ESOP (ETAPS), LNCS 1576, pages 147–161, 1999.

C. Owens. Coding binding and substitution explicitly in Isabelle. In Proceedings of the First Isabelle Users Workshop, pages 36–52, 1995.

S. Owens and M. Flatt. From structures and functors to modules and units. In Proc. ICFP, 2006.

S. Peyton Jones, editor. Haskell 98 Language and Libraries. The Revised Report. CUP, 2003.

B. C. Pierce. Types and Programming Languages. MIT Press, 2002.

F. Pottier. An overview of Cαml. In ACM Workshop on ML, ENTCS 148(2), pages 27–52, Mar. 2006.

T. Reps and T. Teitelbaum. The synthesizer generator. In Proc. SDE1, pages 42–48, 1984.

A. Rossberg. Defects in the revised definition of Standard ML. Technical report, Saarland University, 2001. Updated 2007/01/22.

P. Sewell and F. Zappa Nardelli. Ott, 2007. http://www.cl.cam.ac.uk/users/pes20/ott/.

P. Sewell, J. J. Leifer, K. Wansbrough, M. Allen-Williams, F. Zappa Nardelli, P. Habouzit, and V. Vafeiadis. Acute: High-level programming language design for distributed computation. Design rationale and language definition. Technical Report UCAM-CL-TR-605, University of Cambridge Computer Laboratory, Oct. 2004. See also the ICFP 2005 paper.

M. R. Shinwell, A. M. Pitts, and M. J. Gabbay. FreshML: Programming with binders made simple. In Proc. ICFP, 2003.

C. Strachey. Towards a formal semantics. In Formal Language Description Languages for Computer Programming. North Holland, 1966.

R. Strniša, P. Sewell, and M. Parkinson. The Java Module System: core design and semantic definition. In Proc. OOPSLA, 2007. To appear.

D. Syme. Reasoning with the formal definition of Standard ML in HOL. In TPHOLs, LNCS 780, pages 43–59, 1993.

D. Terrasse. Encoding Natural Semantics in Coq. In Proc. AMAST, LNCS 936, pages 230–244, 1995.

S. Tse and S. Zdancewic. Concise concrete syntax, 2006. Submitted. http://www.cis.upenn.edu/~stse/javac.

Twelf. Twelf 1.5. http://www.cs.cmu.edu/~twelf/.

V. Vafeiadis and M. Parkinson. A marriage of rely/guarantee and separation logic. In Proc. CONCUR, 2007.

M. VanInwegen. The Machine-Assisted Proof of Programming Language Properties. PhD thesis, Univ. of Pennsylvania, 1996. Computer and Information Science Tech Report MS-CIS-96-31.
