Subtitle: None

Author:

Classification number:

ISBN:9787506272582


Summary

  An early experiment that conceives the basic idea of Monte Carlo computation is known as "Buffon's needle," first stated by Georges-Louis Leclerc, Comte de Buffon, in 1777. In this well-known experiment, one throws a needle of length l onto a flat surface with a grid of parallel lines with spacing D (l <= D). It is easy to compute that, under ideal conditions, the chance that the needle will intersect one of the lines is 2l/(πD). Thus, if we let p_N be the proportion of "intersects" in N throws, we can have an estimate of π, namely π̂ = 2l/(p_N D), which will "converge" to π as N increases to infinity. (This book is the English edition.)
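  A minimal Python sketch of this experiment (the function and parameter names are our own; note that sampling the needle's angle already uses the value of π, so this is a demonstration of the estimator rather than a genuinely π-free computation):

    import math
    import random

    def buffon_pi(n_throws, needle_len=1.0, line_gap=2.0):
        """Estimate pi via Buffon's needle with needle_len <= line_gap.

        The crossing probability is 2*l/(pi*D); inverting the observed
        crossing frequency p_N gives pi_hat = 2*l / (p_N * D).
        Assumes at least one crossing occurs in n_throws.
        """
        hits = 0
        for _ in range(n_throws):
            # distance from the needle's midpoint to the nearest line
            x = random.uniform(0.0, line_gap / 2.0)
            # acute angle between the needle and the grid lines
            theta = random.uniform(0.0, math.pi / 2.0)
            # the needle crosses a line iff the midpoint lies within
            # the needle's half-projection perpendicular to the lines
            if x <= (needle_len / 2.0) * math.sin(theta):
                hits += 1
        p_n = hits / n_throws
        return 2.0 * needle_len / (p_n * line_gap)

    print(buffon_pi(1_000_000))  # approaches pi as n_throws grows

  The Monte Carlo error of such an estimate shrinks at the rate O(1/sqrt(N)), which is why the convergence above is slow but dimension-independent.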

Contents

preface

1 introduction and examples

1.1 the need of monte carlo techniques

1.2 scope and outline of the book

1.3 computations in statistical physics

1.4 molecular structure simulation

1.5 bioinformatics: finding weak repetitive patterns

1.6 nonlinear dynamic system: target tracking

1.7 hypothesis testing for astronomical observations

1.8 bayesian inference of multilevel models

1.9 monte carlo and missing data problems

2 basic principles: rejection, weighting, and others

2.1 generating simple random variables

2.2 the rejection method

2.3 variance reduction methods

2.4 exact methods for chain-structured models

2.4.1 dynamic programming

2.4.2 exact simulation

2.5 importance sampling and weighted sample

2.5.1 an example

2.5.2 the basic idea

2.5.3 the "rule of thumb" for importance sampling

2.5.4 concept of the weighted sample

2.5.5 marginalization in importance sampling

2.5.6 example: solving a linear system

2.5.7 example: a bayesian missing data problem

2.6 advanced importance sampling techniques

2.6.1 adaptive importance sampling

2.6.2 rejection and weighting

2.6.3 sequential importance sampling

2.6.4 rejection control in sequential importance sampling

2.7 application of sis in population genetics

2.8 problems

3 theory of sequential monte carlo

3.1 early developments: growing a polymer

3.1.1 a simple model of polymer: self-avoiding walk

3.1.2 growing a polymer on the square lattice

3.1.3 limitations of the growth method

3.2 sequential imputation for statistical missing data problems

3.2.1 likelihood computation

3.2.2 bayesian computation

3.3 nonlinear filtering

3.4 a general framework

3.4.1 the choice of the sampling distribution

3.4.2 normalizing constant

3.4.3 pruning, enrichment, and resampling

3.4.4 more about resampling

3.4.5 partial rejection control

3.4.6 marginalization, look-ahead, and delayed estimate

3.5 problems

4 sequential monte carlo in action

4.1 some biological problems

4.1.1 molecular simulation

4.1.2 inference in population genetics

4.1.3 finding motif patterns in dna sequences

4.2 approximating permanents

4.3 counting 0-1 tables with fixed margins

4.4 bayesian missing data problems

4.4.1 murray's data

4.4.2 nonparametric bayes analysis of binomial data

4.5 problems in signal processing

4.5.1 target tracking in clutter and mixture kalman filter

4.5.2 digital signal extraction in fading channels

4.6 problems

5 metropolis algorithm and beyond

5.1 the metropolis algorithm

5.2 mathematical formulation and hastings's generalization

5.3 why does the metropolis algorithm work?

5.4 some special algorithms

5.4.1 random-walk metropolis

5.4.2 metropolized independence sampler

5.4.3 configurational bias monte carlo

5.5 multipoint metropolis methods

5.5.1 multiple independent proposals

5.5.2 correlated multipoint proposals

5.6 reversible jumping rule

5.7 dynamic weighting

5.8 output analysis and algorithm efficiency

5.9 problems

6 the gibbs sampler

6.1 gibbs sampling algorithms

6.2 illustrative examples

6.3 some special samplers

6.3.1 slice sampler

6.3.2 metropolized gibbs sampler

6.3.3 hit-and-run algorithm

6.4 data augmentation algorithm

6.4.1 bayesian missing data problem

6.4.2 the original da algorithm

6.4.3 connection with the gibbs sampler

6.4.4 an example: hierarchical bayes model

6.5 finding repetitive motifs in biological sequences

6.5.1 a gibbs sampler for detecting subtle motifs

6.5.2 alignment and classification

6.6 covariance structures of the gibbs sampler

6.6.1 data augmentation

6.6.2 autocovariances for the random-scan gibbs sampler

6.6.3 more efficient use of monte carlo samples

6.7 collapsing and grouping in a gibbs sampler

6.8 problems

7 cluster algorithms for the ising model

7.1 ising and potts model revisited

7.2 the swendsen-wang algorithm as data augmentation

7.3 convergence analysis and generalization

7.4 the modification by wolff

7.5 further generalization

7.6 discussion

7.7 problems

8 general conditional sampling

8.1 partial resampling

8.2 case studies for partial resampling

8.2.1 gaussian random field model

8.2.2 texture synthesis

8.2.3 inference with multivariate t-distribution

8.3 transformation group and generalized gibbs

8.4 application: parameter expansion for data augmentation

8.5 some examples in bayesian inference

8.5.1 probit regression

8.5.2 monte carlo bridging for stochastic differential equation

8.6 problems

9 molecular dynamics and hybrid monte carlo

9.1 basics of newtonian mechanics

9.2 molecular dynamics simulation

9.3 hybrid monte carlo

9.4 algorithms related to hmc

9.4.1 langevin-euler moves

9.4.2 generalized hybrid monte carlo

9.4.3 surrogate transition method

9.5 multipoint strategies for hybrid monte carlo

9.5.1 neal's window method

9.5.2 multipoint method

9.6 application of hmc in statistics

9.6.1 indirect observation model

9.6.2 estimation in the stochastic volatility model

10 multilevel sampling and optimization methods

10.1 umbrella sampling

10.2 simulated annealing

10.3 simulated tempering

10.4 parallel tempering

10.5 generalized ensemble simulation

10.5.1 multicanonical sampling

10.5.2 the 1/k-ensemble method

10.5.3 comparison of algorithms

10.6 tempering with dynamic weighting

10.6.1 ising model simulation at sub-critical temperature

10.6.2 neural network training

11 population-based monte carlo methods

11.1 adaptive direction sampling: snooker algorithm

11.2 conjugate gradient monte carlo

11.3 evolutionary monte carlo

11.3.1 evolutionary movements in binary-coded space

11.3.2 evolutionary movements in continuous space

11.4 some further thoughts

11.5 numerical examples

11.5.1 simulating from a bimodal distribution

11.5.2 comparing algorithms for a multimodal example

11.5.3 variable selection with binary-coded emc

11.5.4 bayesian neural network training

11.6 problems

12 markov chains and their convergence

12.1 basic properties of a markov chain

12.1.1 chapman-kolmogorov equation

12.1.2 convergence to stationarity

12.2 coupling method for card shuffling

12.2.1 random-to-top shuffling

12.2.2 riffle shuffling

12.3 convergence theorem for finite-state markov chains

12.4 coupling method for general markov chain

12.5 geometric inequalities

12.5.1 basic setup

12.5.2 poincaré inequality

12.5.3 example: simple random walk on a graph

12.5.4 cheeger's inequality

12.6 functional analysis for markov chains

12.6.1 forward and backward operators

12.6.2 convergence rate of markov chains

12.6.3 maximal correlation

12.7 behavior of the averages

13 selected theoretical topics

13.1 mcmc convergence and convergence diagnostics

13.2 iterative conditional sampling

13.2.1 data augmentation

13.2.2 random-scan gibbs sampler

13.3 comparison of metropolis-type algorithms

13.3.1 peskun's ordering

13.3.2 comparing schemes using peskun's ordering

13.4 eigenvalue analysis for the independence sampler

13.5 perfect simulation

13.6 a theory for dynamic weighting

13.6.1 definitions

13.6.2 weight behavior under different scenarios

13.6.3 estimation with weighted samples

13.6.4 a simulation study

a basics in probability and statistics

a.1 basic probability theory

a.1.1 experiments, events, and probability

a.1.2 univariate random variables and their properties

a.1.3 multivariate random variable

a.1.4 convergence of random variables

a.2 statistical modeling and inference

a.2.1 parametric statistical modeling

a.2.2 frequentist approach to statistical inference

a.2.3 bayesian methodology

a.3 bayes procedure and missing data formalism

a.3.1 the joint and posterior distributions

a.3.2 the missing data problem

a.4 the expectation-maximization algorithm

references

author index

subject index

