Table Of Contents

1 Methodology

1.1 What is Performance Evaluation
1.1.1 Load
1.1.2 Metric
1.1.3 The Different Goals of Performance Evaluation
1.2 Factors
1.2.1 The Hidden Factor Paradox
1.2.2 Simpson's Paradox
1.3 Evaluation Methods
1.4 The Scientific Method
1.5 Performance Patterns
1.5.1 Bottlenecks
1.5.2 Congestion Collapse
1.5.3 Competition Side Effect
1.5.4 Latent Congestion Collapse
1.6 Review
1.6.1 Check-List
1.6.2 Review Questions

2 Summarizing Performance Data, Confidence Intervals

2.1 Summarized Performance Data
2.1.1 Histogram and Empirical CDF
2.1.2 Mean, Median and Quantiles
2.1.3 Coefficient of Variation and Lorenz Curve Gap
2.1.4 Fairness Indices
2.2 Confidence Intervals
2.2.1 What is a Confidence Interval
2.2.2 Confidence Interval for Median and Other Quantiles
2.2.3 Confidence Interval for the Mean
2.2.4 Confidence Intervals for Fairness Indices and The Bootstrap
2.2.5 Confidence Interval for Success Probability
2.3 The Independence Assumption
2.3.1 What Does IID Mean
2.3.2 How Do I Know in Practice if the IID Assumption Is Valid
2.3.3 What Happens if the IID Assumption Does Not Hold
2.4 Prediction Interval
2.4.1 Prediction for an IID Sample Based on Order Statistic
2.4.2 Prediction for a Normal IID Sample
2.4.3 The Normal Assumption
2.5 Which Summarization To Use
2.5.1 Robustness
2.5.2 Compactness
2.6 Other Aspects of Confidence/Prediction Intervals
2.6.1 Intersection of Confidence/Prediction Intervals
2.6.2 The Meaning of Confidence
2.7 Proofs
2.8 Review
2.8.1 Summary
2.8.2 Review Questions

3 Model Fitting

3.1 Model Fitting Criteria
3.1.1 What is Model Fitting
3.1.2 Least Squares Correspond to Gaussian, Same Variance
3.1.3 L1 Norm Minimization Corresponds to Laplace Noise
3.2 Linear Regression
3.3 Linear Regression with L1 Norm Minimization
3.4 Choosing a Distribution
3.4.1 Shape
3.4.2 Skewness and Kurtosis
3.4.3 Power Laws, Pareto Distribution and Zipf's Law
3.4.4 Hazard Rate
3.4.5 Fitting A Distribution
3.4.6 Censored Data
3.4.7 Combinations of Distributions
3.5 Heavy Tail
3.5.1 Definition
3.5.2 Heavy Tail and Stable Distributions
3.5.3 Heavy Tail in Practice
3.5.4 Testing For Heavy Tail
3.5.5 Application Example: The Workload Generator SURGE
3.6 Proofs
3.7 Review
3.7.1 Review Questions
3.7.2 Useful Matlab Commands

4 Tests

4.1 The Neyman-Pearson Framework
4.1.1 The Null Hypothesis and The Alternative
4.1.2 Critical Region, Size and Power
4.1.3 p-value of a Test
4.1.4 Tests Are Just Tests
4.2 Likelihood Ratio Tests
4.2.1 Definition of Likelihood Ratio Test
4.2.2 Student Test for Single Sample (or Paired Data)
4.2.3 The Simple Goodness of Fit Test
4.3 ANOVA
4.3.1 Analysis of Variance (ANOVA) and $F$-tests
4.3.2 Testing for a Common Variance
4.4 Asymptotic Results
4.4.1 Likelihood Ratio Statistic
4.4.2 Pearson Chi-squared Statistic and Goodness of Fit
4.4.3 Test of Independence
4.5 Other Tests
4.5.1 Goodness of Fit Tests based on Ad-Hoc Pivots
4.5.2 Robust Tests
4.6 Proofs
4.7 Review
4.7.1 Tests Are Just Tests
4.7.2 Review Questions

5 Forecasting

5.1 What is Forecasting
5.2 Linear Regression
5.3 The Overfitting Problem
5.3.1 Use of Test Data
5.3.2 Information Criterion
5.4 Differencing the Data
5.4.1 Differencing and De-seasonalizing Filters
5.4.2 Computing Point Prediction
5.4.3 Computing Prediction Intervals
5.5 Fitting Differenced Data to an ARMA Model
5.5.1 Stationary but Non-IID Differenced Data
5.5.2 ARMA and ARIMA Processes
5.5.3 Fitting an ARMA Model
5.5.4 Forecasting
5.6 Sparse ARMA and ARIMA Models
5.6.1 Constrained ARMA Models
5.6.2 Holt-Winters Models
5.7 Proofs
5.8 Review Questions

6 Discrete Event Simulation

6.1 What is a Simulation
6.1.1 Simulated Time and Real Time
6.1.2 Simulation Types
6.2 Simulation Techniques
6.2.1 Discrete Event Simulation
6.2.2 Stochastic Recurrence
6.3 Computing the Accuracy of Stochastic Simulations
6.3.1 Independent Replications
6.3.2 Computing Confidence Intervals
6.3.3 Non-Terminating Simulations
6.4 Monte Carlo Simulation
6.5 Random Number Generators
6.6 How to Sample from a Distribution
6.6.1 CDF Inversion
6.6.2 Rejection Sampling
6.6.3 Ad-Hoc Methods
6.7 Importance Sampling
6.7.1 Motivation
6.7.2 The Importance Sampling Framework
6.7.3 Selecting An Importance Sampling Distribution
6.8 Proofs
6.9 Review

7 Palm Calculus, or the Importance of the Viewpoint

7.1 An Informal Introduction
7.1.1 Event versus Time Averages
7.1.2 The Large Time Heuristic
7.1.3 Two Event Clocks
7.1.4 Arbitrary Sampling Methods
7.2 Palm Calculus
7.2.1 Hypotheses
7.2.2 Definitions
7.2.3 Interpretation as Time and Event Averages
7.2.4 The Inversion and Intensity Formulas
7.3 Other Useful Palm Calculus Results
7.3.1 Residual Time and Feller's Paradox
7.3.2 The Rate Conservation Law and Little's Formula
7.3.3 Two Event Clocks
7.4 Simulation Defined as Stochastic Recurrence
7.4.1 Stochastic Recurrence, Modulated Process
7.4.2 Freezing Simulations
7.4.3 Perfect Simulation of Stochastic Recurrence
7.5 Application to Markov Chain Models and the PASTA Property
7.5.1 Embedded Sub-Chain
7.5.2 PASTA
7.6 Appendix: Quick Review of Markov Chains
7.6.1 Markov Chain in Discrete Time
7.6.2 Markov Chain in Continuous Time
7.6.3 Poisson and Bernoulli
7.7 Proofs
7.8 Review Questions

8 Queuing Theory for Those Who Cannot Wait

8.1 Deterministic Analysis
8.1.1 Description of a Queuing System with Cumulative Functions
8.1.2 Reich's Formula
8.2 Operational Laws For Queuing Systems
8.2.1 Departures and Arrivals See Same Averages (DASSA)
8.2.2 Little's Law and Applications
8.2.3 Networks and Forced Flows
8.2.4 Bottleneck Analysis
8.3 Classical Results for a Single Queue
8.3.1 Kendall's Notation
8.3.2 The Single Server Queue
8.3.3 The Processor Sharing Queue, M/GI/1/PS
8.3.4 Single Queue with $B$ Servers
8.4 Definitions for Queuing Networks
8.4.1 Classes, Chains and Markov Routing
8.4.2 Catalog of Service Stations
8.4.3 The Station Function
8.5 The Product-Form Theorem
8.5.1 Product Form
8.5.2 Stability Conditions
8.6 Computational Aspects
8.6.1 Convolution
8.6.2 Throughput
8.6.3 Equivalent Service Rate
8.6.4 Suppression of Open Chains
8.6.5 Arrival Theorem and MVA Version 1
8.6.6 Network Decomposition
8.6.7 MVA Version 2
8.7 What This Tells Us
8.7.1 Insensitivity
8.7.2 The Importance of Modelling Closed Populations
8.8 Mathematical Details About Product-Form Queuing Networks
8.8.1 Phase Type Distributions
8.8.2 Micro and Macro States
8.8.3 Micro to Macro: Aggregation Condition
8.8.4 Local Balance In Isolation
8.8.5 The Product Form Theorem
8.8.6 Networks with Blocking
8.9 Case Study
8.9.1 Deterministic Analysis
8.9.2 Single Queue Analysis
8.9.3 Operational Analysis
8.9.4 Queuing Network Analysis
8.9.5 Conclusions
8.10 Proofs
8.11 Review
8.11.1 Review Questions
8.11.2 Summary of Notation

A Tables

B Parametric Estimation, Large Sample Theory

B.1 Parametric Estimation Theory
B.1.1 The Parametric Estimation Framework
B.1.2 Maximum Likelihood Estimator (MLE)
B.1.3 Efficiency and Fisher Information
B.2 Asymptotic Confidence Intervals
B.3 Confidence Interval in Presence of Nuisance Parameters

C Gaussian Random Vectors in $\mathbb{R}^d$

C.1 Notation and a Few Results of Linear Algebra
C.1.1 Notation
C.1.2 Linear Algebra
C.2 Covariance Matrix of a Random Vector
C.2.1 Definitions
C.2.2 Properties of Covariance Matrix
C.2.3 Cholesky's Factorization
C.2.4 Degrees of Freedom
C.3 Gaussian Random Vector
C.3.1 Definition and Main Properties
C.3.2 Diagonal Form
C.4 Foundations of ANOVA
C.4.1 Homoscedastic Gaussian Vector
C.4.2 Maximum Likelihood Estimation for Homoscedastic Gaussian Vectors
C.5 Conditional Gaussian Distribution
C.5.1 Schur Complement
C.5.2 Distribution of X1 Given X2
C.5.3 Partial Correlation
C.6 Proofs

D Digital Filters

D.1 Calculus of Digital Filters
D.1.1 Backshift Operator
D.1.2 Filters
D.1.3 Impulse Response and Dirac Sequence
D.1.4 Composition of Filters, Commutativity
D.1.5 Inverse of Filter
D.1.6 AR($\infty$) Representation
D.1.7 Calculus of Filters
D.1.8 $z$-Transform
D.2 Stability
D.3 Filters with Rational Transfer Function
D.3.1 Definition
D.3.2 Poles and Zeroes
D.4 Predictions
D.4.1 Conditional Distribution Lemma
D.4.2 Predictions
D.5 Log Likelihood of Innovation
D.6 Matlab Commands
D.7 Proofs