Fast Convergence of Regularized Learning in Games
Fast Convergence of Regularized Learning in Games
Vasilis Syrgkanis (Microsoft Research NYC), Alekh Agarwal (Microsoft Research NYC), Haipeng Luo (Princeton University), Robert Schapire (Microsoft Research NYC)
Strategic Interactions in Computer Systems
Game-theoretic scenarios arise throughout modern computer systems, e.g. internet routing and advertising auctions.
How Do Players Behave?
Classical game theory: players play according to a Nash equilibrium.
Caveats: How do players converge to equilibrium? Nash equilibrium is computationally hard.
Most scenarios are repeated strategic interactions, where simple adaptive game playing is more natural: players learn to play well over time from past experience (e.g. dynamic bid-optimization tools in online ad auctions).
No-Regret Learning in Games
Each player uses a no-regret learning algorithm with good regret against adaptive adversaries: the regret of each player against any fixed strategy converges to zero.
Many simple algorithms achieve no regret: multiplicative weights (MWU), Regret Matching, Follow the Regularized/Perturbed Leader, Mirror Descent [Freund and Schapire 1995; Foster and Vohra 1997; Hart and Mas-Colell 2000; Cesa-Bianchi and Lugosi 2006].
Known Convergence Results
The empirical distribution of play converges to a generalization of Nash equilibrium, the coarse correlated equilibrium (CCE) [Young 2004].
The sum of player utilities (welfare) converges to approximate optimality, under structural conditions on the game [Roughgarden 2010].
The convergence rate is inherited from the adversarial analysis: typically O(1/√T), and O(1/√T) is impossible to improve against adversaries.
Main Question
When all players invoke no-regret learning, is convergence faster than the adversarial rate?
High-Level Main Results
Yes! We prove that if each player's algorithm (1) makes stable predictions and (2) has regret bounded by the stability of the environment, then convergence is faster than 1/√T. Both properties can be achieved through regularization and recency bias.
Model and Technical Results
Repeated Game Model
n players play a normal-form game for T time steps.
Each player i has a strategy space S_i (the set of available actions, e.g. bids in an auction or paths in a network) and a utility function U_i : S_1 × ... × S_n → [0,1] (the reward, e.g. the utility from winning an item at some price, or the latency of the chosen path).
At each step t = 1, ..., T, each player i:
- Picks a distribution over strategies p_i^t ∈ Δ(S_i) and draws s_i^t ~ p_i^t.
- Receives expected utility E[U_i(s_1^t, ..., s_n^t)].
- Observes the expected-utility vector u_i^t, where coordinate u_i^t[s_i] is the expected utility had strategy s_i been played: u_i^t[s_i] = E[U_i(s_1^t, ..., s_i, ..., s_n^t)].
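The protocol above can be sketched in a few lines of Python. The 2×2 example game, the uniform mixed strategies, and the horizon are all hypothetical choices for illustration; only the observed-vector computation mirrors the model.

```python
# Minimal sketch of the repeated-game protocol for two players.
# U[i][s1][s2] is player i's utility in [0, 1] at strategy profile (s1, s2).
import random

random.seed(0)

U = [[[1.0, 0.0], [0.0, 1.0]],   # player 1 is rewarded for matching
     [[0.0, 1.0], [1.0, 0.0]]]   # player 2 is rewarded for mismatching

T = 100
p1, p2 = [0.5, 0.5], [0.5, 0.5]  # mixed strategies p_i^t (held fixed here)
utility_vectors_1 = []           # the vectors u_1^t observed by player 1

for t in range(T):
    # Each player draws s_i^t ~ p_i^t (the draws give realized payoffs;
    # the observed vectors below are expectations over the opponent's mix).
    s1 = random.choices([0, 1], weights=p1)[0]
    s2 = random.choices([0, 1], weights=p2)[0]
    # u_1^t[a] = E_{s2 ~ p2}[ U_1(a, s2) ] for every strategy a.
    u1 = [sum(p2[s] * U[0][a][s] for s in range(2)) for a in range(2)]
    utility_vectors_1.append(u1)

print(utility_vectors_1[0])  # [0.5, 0.5] against the uniform opponent
```

Against a fixed uniform opponent every strategy looks equally good, which is why the observed vector is constant here; learning dynamics enter once the opponents update their mixes.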
Regret of Learning Algorithm
Regret: the gain from switching to the best fixed strategy,
Regret_i = max_{s_i ∈ S_i} E[ Σ_{t=1}^T U_i(s_1^t, ..., s_i, ..., s_n^t) ] − E[ Σ_{t=1}^T U_i(s_1^t, ..., s_i^t, ..., s_n^t) ],
i.e. the expected utility if s_i had been played all the time, minus the expected utility of the algorithm.
Many algorithms achieve regret O(√T).
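The definition translates directly into code once the observed utility vectors u_i^t and the mixed strategies p_i^t are recorded; the toy sequences at the bottom are hypothetical.

```python
# Regret against the best fixed strategy, computed from a player's
# observed expected-utility vectors and the mixed strategies it played.
def regret(utility_vectors, mixed_strategies):
    """utility_vectors[t][s] = u_i^t[s]; mixed_strategies[t][s] = p_i^t(s)."""
    n_actions = len(utility_vectors[0])
    # Expected utility actually collected by the algorithm.
    algo_utility = sum(
        sum(p[s] * u[s] for s in range(n_actions))
        for u, p in zip(utility_vectors, mixed_strategies))
    # Best cumulative utility of any single fixed strategy in hindsight.
    best_fixed = max(
        sum(u[s] for u in utility_vectors) for s in range(n_actions))
    return best_fixed - algo_utility

# Toy check: always playing action 0 while action 1 pays 1 every round.
us = [[0.0, 1.0]] * 10
ps = [[1.0, 0.0]] * 10
print(regret(us, ps))  # 10.0: switching to fixed action 1 gains 1 per step
```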
Main Results
We give general sufficient conditions on the learning algorithms (and concrete example families of algorithms) such that:
Thm 1. Each player's regret is O(T^{1/4}), so play converges to a CCE at rate O(T^{−3/4}).
Thm 2. The sum of player regrets is O(1). Corollary: convergence to approximately optimal welfare at rate O(T^{−1}) if the game is smooth. (Smoothness, defined by [Roughgarden 2010], is the main tool for proving welfare bounds (price of anarchy), applicable to e.g. congestion games and ad auctions.)
Thm 3. We give a generic way to modify an algorithm to achieve small regret against "nice" algorithms and O(√T) regret against any adversarial sequence.
Related Work
[Daskalakis, Deckelbaum, Kim 2011]: two-player zero-sum games; marginal empirical distributions converge to Nash at rate O(1/T), but via unnatural coordinated dynamics.
[Rakhlin, Sridharan 2013]: two-player zero-sum games; Optimistic Mirror Descent and Optimistic Hedge; sum of regrets O(1), hence marginal empirical distributions converge to Nash at O(1/T).
This work: general multi-player games vs. two-player zero-sum; convergence to CCE and welfare vs. Nash; sufficient conditions vs. specific algorithms; new example algorithms (e.g. general recency bias, Optimistic FTRL).
Why Expect Faster Rates?
If your opponent does not change their mixed strategy much, i.e. ||p_2^t − p_2^{t−1}|| is small, then your expected utility from each strategy is approximately the same between iterations: ||u_1^t − u_1^{t−1}|| is small, so last iteration's utility is a good proxy for next iteration's utility.
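This stability claim is easy to see numerically: with payoffs in [0, 1], u_1[a] = Σ_s U_1(a, s) p_2(s) is 1-Lipschitz in p_2 with respect to the L1 norm. The payoff matrix and distributions below are hypothetical.

```python
# Small opponent movement implies small change in the utility vector:
# max_a |u_1^t[a] - u_1^{t-1}[a]|  <=  ||p_2^t - p_2^{t-1}||_1
# whenever all payoffs lie in [0, 1].
U1 = [[1.0, 0.2], [0.4, 0.9]]    # hypothetical payoff matrix for player 1

def util_vec(p2):
    return [sum(U1[a][s] * p2[s] for s in range(2)) for a in range(2)]

p2_old = [0.50, 0.50]
p2_new = [0.52, 0.48]            # opponent moved only slightly
du = max(abs(x - y) for x, y in zip(util_vec(p2_new), util_vec(p2_old)))
dp = sum(abs(x - y) for x, y in zip(p2_new, p2_old))
print(du, dp)                    # du is bounded by dp
```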
Simplified Sufficient Conditions for Fast Convergence
1. Stability of mixed strategies: ||p_i^t − p_i^{t−1}|| ≤ η·γ.
2. Regret bounded by stability of the utility sequence: Regret_i ≤ α/η + η·β·Σ_{t=1}^T ||u_i^t − u_i^{t−1}||^2.
Then, for η = O(T^{−1/4}), each player's regret is O(T^{1/4}).
Sufficient Conditions for Fast Convergence (full version)
1. Stability of mixed strategies: ||p_i^t − p_i^{t−1}|| ≤ A.
2. Regret bounded by stability of the utility sequence: Regret_i ≤ B + C·Σ_{t=1}^T ||u_i^t − u_i^{t−1}||^2 − D·Σ_{t=1}^T ||p_i^t − p_i^{t−1}||^2,
plus conditions on the constants A, B, C, D.
Example Algorithm
Hedge [Littlestone-Warmuth 94, Freund-Schapire 97]:
p_i^{t+1}(s_i) ∝ exp(η · Σ_{τ=1}^t u_i^τ[s_i])   (the past cumulative performance of the action)
Optimistic Hedge [Rakhlin-Sridharan 13]:
p_i^{t+1}(s_i) ∝ exp(η · (Σ_{τ=1}^t u_i^τ[s_i] + u_i^t[s_i]))   (past performance, double-counting the last iteration)
Lemma. Optimistic Hedge satisfies both sufficient conditions.
Intuition. (1) It uses the last iteration as a predictor for the next iteration. (2) It is equivalent to Follow the Regularized Leader, and regularization gives stability.
Corollary. If all players in a game use Optimistic Hedge with step size η = O(T^{−1/4}), then each player's regret is O(T^{1/4}).
The proof extends to Optimistic Follow the Regularized Leader algorithms:
p_i^{t+1} = argmax_{p ∈ Δ(S_i)} ⟨p, Σ_{τ=1}^t u_i^τ + u_i^t⟩ + R(p)/η.
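The two update rules are a few lines each; this is a minimal sketch, with the step size and the utility values at the bottom chosen purely for illustration.

```python
# Hedge and Optimistic Hedge updates over cumulative utilities.
import math

def hedge_weights(cum_u, eta):
    """p(s) proportional to exp(eta * cumulative utility of s)."""
    m = max(cum_u)                      # subtract max for numerical stability
    w = [math.exp(eta * (c - m)) for c in cum_u]
    z = sum(w)
    return [x / z for x in w]

def optimistic_hedge_weights(cum_u, last_u, eta):
    """p(s) proportional to exp(eta * (cumulative + last utility)), i.e.
    the last utility vector is counted twice, as a prediction of the next."""
    return hedge_weights([c + l for c, l in zip(cum_u, last_u)], eta)

cum = [3.0, 1.0]    # cumulative utilities after some rounds (hypothetical)
last = [0.0, 1.0]   # last round's utility vector (hypothetical)
p_hedge = hedge_weights(cum, eta=1.0)
p_opt = optimistic_hedge_weights(cum, last, eta=1.0)
print(p_hedge, p_opt)
```

Because the last round favored action 1, the optimistic update shifts more mass onto action 1 than plain Hedge does, which is exactly the recency bias the analysis exploits.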
Simulations
Simulation Example: Auction Game
4 bidders bid on 4 items. At each iteration, each bidder picks one item and submits a bid; the highest bidder on each item wins and pays their bid.
[Figure: maximum player regret (cumulative) over time, for Hedge and Optimistic Hedge. Hedge grows like O(√T); Optimistic Hedge looks like O(1)?]
[Figure: expected bid on an item over time. Hedge is oscillatory; Optimistic Hedge is stable.]
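A stripped-down version of this simulation can be sketched as follows, under simplifying assumptions that are not the talk's exact setup: 2 bidders, 2 items, a fixed value of 1 per item, and a small bid grid (the 4-bidder, 4-item game is analogous). Plain Hedge is shown; the optimistic variant would add the last utility vector before normalizing, as on the previous slides.

```python
# Sketch of the first-price auction game with Hedge dynamics.
# An action is (item, bid); a bidder wins an item it bids on alone,
# wins a tie with probability 1/2, and pays its bid on winning.
import math

BIDS = [round(0.1 * b, 1) for b in range(1, 10)]
ACTIONS = [(item, bid) for item in range(2) for bid in BIDS]

def utility(my, other):
    (it1, b1), (it2, b2) = my, other
    if it1 != it2:
        return 1.0 - b1              # alone on the item: win, pay the bid
    if b1 > b2:
        return 1.0 - b1
    if b1 == b2:
        return 0.5 * (1.0 - b1)      # uniform tie-breaking
    return 0.0

def hedge(cum, eta):
    m = max(cum)
    w = [math.exp(eta * (c - m)) for c in cum]
    z = sum(w)
    return [x / z for x in w]

T, eta = 200, 0.5                    # illustrative horizon and step size
cums = [[0.0] * len(ACTIONS) for _ in range(2)]
for t in range(T):
    ps = [hedge(c, eta) for c in cums]
    for i in range(2):
        q = ps[1 - i]                # opponent's current mixed strategy
        u = [sum(q[j] * utility(a, ACTIONS[j]) for j in range(len(ACTIONS)))
             for a in ACTIONS]       # expected-utility vector u_i^t
        for k in range(len(ACTIONS)):
            cums[i][k] += u[k]

final = hedge(cums[0], eta)
print(max(final))                    # final probability of the favorite action
```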
Main Take-Away Points
Learning algorithms can enjoy faster regret rates in game-theoretic environments.
We extend previous work to general multi-player games, provide generic sufficient conditions, and identify recency bias and regularization as the key components.
Thank you!
Appendix Slides
Black-Box Robustness to Any Sequence
What if opponents do not use a stable algorithm? Solution: use an adaptive step size that tracks the change of the environment.
Keep an upper estimate B on the path length I_t = Σ_{τ=1}^t ||u_i^τ − u_i^{τ−1}||^2 and set the step size η = min(a/√B, T^{−1/4}).
Once the path length exceeds the estimate, double B and restart the algorithm.
Theorem. This preserves the fast regret guarantees (up to log factors) when opponents are stable, and gives O(√T) regret when opponents are arbitrary.
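The doubling scheme can be sketched as follows. The learner's internals are abstracted down to a cumulative-utility vector that gets reset, and the constant `a` and the test sequence are illustrative, not the paper's.

```python
# Doubling/restart scheme: track the realized path length of the
# utility vectors, and when it exceeds the current estimate B,
# double B, shrink the step size, and restart the learner.
def run_with_doubling(utility_vectors, T, a=1.0):
    B = 1.0
    eta = min(a / B ** 0.5, T ** -0.25)
    path_length = 0.0
    restarts = 0
    cum = None                         # the learner's cumulative utilities
    prev = None
    for u in utility_vectors:
        if cum is None:
            cum = [0.0] * len(u)
        if prev is not None:
            path_length += sum((x - y) ** 2 for x, y in zip(u, prev))
        while path_length > B:         # estimate violated: double and restart
            B *= 2.0
            eta = min(a / B ** 0.5, T ** -0.25)
            cum = [0.0] * len(u)
            restarts += 1
        cum = [c + x for c, x in zip(cum, u)]
        prev = u
    return restarts, eta

# A wildly fluctuating sequence (an unstable "adversary") forces restarts.
seq = [[0.0, 1.0] if t % 2 else [1.0, 0.0] for t in range(64)]
restarts, eta = run_with_doubling(seq, T=64)
print(restarts, eta)
```

Against a stable opponent the path length stays small, so no restarts occur and the aggressive step size is retained; the adversarial sequence above triggers repeated doublings and a correspondingly conservative step size.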
Simulation Example: Zero-Sum Game
A matching-pennies-style 2x2 game with actions H and T.
[Figure: trajectories of each player's probability of playing H, under Hedge dynamics and under Optimistic Hedge dynamics, plotted together with the Nash equilibrium.]
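The trajectories in that figure can be reproduced in spirit with standard matching pennies (payoffs rescaled to [0, 1], a stand-in for the slide's variant, whose exact payoff matrix is not recoverable from the transcript); player 1 is rewarded for matching, player 2 for mismatching. Slightly asymmetric initial scores break the symmetry so the dynamics are nontrivial; one would expect the optimistic trajectory to settle near the mixed Nash equilibrium (1/2, 1/2) while plain Hedge keeps oscillating.

```python
# Hedge vs. Optimistic Hedge dynamics in (rescaled) matching pennies.
import math

def hedge_p(cum, eta):
    m = max(cum)
    w = [math.exp(eta * (c - m)) for c in cum]
    z = sum(w)
    return [x / z for x in w]

def run(T, eta, optimistic):
    cums = [[0.5, 0.0], [0.0, 0.0]]   # slight asymmetry to break symmetry
    last = [[0.0, 0.0], [0.0, 0.0]]   # last round's utility vectors
    traj = []                          # player 1's probability of playing H
    for t in range(T):
        ps = [hedge_p([c + (l if optimistic else 0.0)
                       for c, l in zip(cums[i], last[i])], eta)
              for i in range(2)]
        traj.append(ps[0][0])
        u1 = [ps[1][0], ps[1][1]]     # player 1: rewarded for matching
        u2 = [ps[0][1], ps[0][0]]     # player 2: rewarded for mismatching
        for k in range(2):
            cums[0][k] += u1[k]
            cums[1][k] += u2[k]
        last = [u1, u2]
    return traj

traj_opt = run(T=500, eta=0.5, optimistic=True)
traj_plain = run(T=500, eta=0.5, optimistic=False)
print(traj_opt[-1], traj_plain[-1])
```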
More informationLecture 9: (Semi-)bandits and experts with linear costs (part I)
CMSC 858G: Bandits, Experts and Games 11/03/16 Lecture 9: (Semi-)bandits and experts with linear costs (part I) Instructor: Alex Slivkins Scribed by: Amr Sharaf In this lecture, we will study bandit problems
More informationNaïve Bayes for text classification
Road Map Basic concepts Decision tree induction Evaluation of classifiers Rule induction Classification using association rules Naïve Bayesian classification Naïve Bayes for text classification Support
More informationBottleneck Routing Games on Grids. Costas Busch Rajgopal Kannan Alfred Samman Department of Computer Science Louisiana State University
Bottleneck Routing Games on Grids ostas Busch Rajgopal Kannan Alfred Samman Department of omputer Science Louisiana State University 1 Talk Outline Introduction Basic Game hannel Game xtensions 2 2-d Grid:
More informationVerified Learning Without Regret
Verified Learning Without Regret From Algorithmic Game Theory to Distributed Systems with Mechanized Complexity Guarantees Samuel Merten (B), Alexander Bagnall, and Gordon Stewart Ohio University, Athens,
More informationA Rant on Queues. Van Jacobson. July 26, MIT Lincoln Labs Lexington, MA
A Rant on Queues Van Jacobson July 26, 2006 MIT Lincoln Labs Lexington, MA Unlike the phone system, the Internet supports communication over paths with diverse, time varying, bandwidth. This means we often
More informationScaling Up Game Theory: Representation and Reasoning with Action Graph Games
Scaling Up Game Theory: Representation and Reasoning with Action Graph Games Kevin Leyton-Brown Computer Science University of British Columbia This talk is primarily based on papers with: Albert Xin Jiang
More informationCSEP 573: Artificial Intelligence
CSEP 573: Artificial Intelligence Machine Learning: Perceptron Ali Farhadi Many slides over the course adapted from Luke Zettlemoyer and Dan Klein. 1 Generative vs. Discriminative Generative classifiers:
More informationMonte Carlo Sampling for Regret Minimization in Extensive Games
Monte Carlo Sampling for Regret Minimization in Extensive Games Marc Lanctot Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 lanctot@ualberta.ca Martin Zinkevich
More informationOn the Structure of Weakly Acyclic Games
1 On the Structure of Weakly Acyclic Games Alex Fabrikant (Princeton CS Google Research) Aaron D. Jaggard (DIMACS, Rutgers Colgate University) Michael Schapira (Yale CS & Berkeley CS Princeton CS) 2 Best-
More informationNew Results on Simple Stochastic Games
New Results on Simple Stochastic Games Decheng Dai 1 and Rong Ge 2 1 Tsinghua University, ddc02@mails.tsinghua.edu.cn 2 Princeton University, rongge@cs.princeton.edu Abstract. We study the problem of solving
More informationA Game-Theoretic Framework for Congestion Control in General Topology Networks
A Game-Theoretic Framework for Congestion Control in General Topology SYS793 Presentation! By:! Computer Science Department! University of Virginia 1 Outline 2 1 Problem and Motivation! Congestion Control
More informationOnline Convex Optimization in the Bandit Setting: Gradient Descent Without a Gradient
Online Convex Optimization in the Bandit Setting: Gradient Descent Without a Gradient Abraham D. Flaxman, CMU Math Adam Tauman Kalai, TTI-Chicago H. Brendan McMahan, CMU CS May 25, 2006 Outline Online
More informationThe Price of Selfishness in Network Coding
The Price of Selfishness in Network Coding Jason R. Marden and Michelle Effros Abstract We introduce a game theoretic framework for studying a restricted form of network coding in a general wireless network.
More informationRegret Transfer and Parameter Optimization
Proceedings of the wenty-eighth AAAI Conference on Artificial Intelligence Regret ransfer and Parameter Optimization Noam Brown Robotics Institute Carnegie Mellon University noamb@cs.cmu.edu uomas Sandholm
More informationNetwork Improvement for Equilibrium Routing
Network Improvement for Equilibrium Routing UMANG BHASKAR University of Waterloo and KATRINA LIGETT California Institute of Technology Routing games are frequently used to model the behavior of traffic
More informationDo incentives build robustness in BitTorrent?
Do incentives build robustness in BitTorrent? ronghui.gu@yale.edu Agenda 2 Introduction BitTorrent Overview Modeling Altruism in BitTorrent Building BitTyrant Evaluation Conclusion Introduction 3 MAIN
More informationLecture Notes on Congestion Games
Lecture Notes on Department of Computer Science RWTH Aachen SS 2005 Definition and Classification Description of n agents share a set of resources E strategy space of player i is S i 2 E latency for resource
More informationEC422 Mathematical Economics 2
EC422 Mathematical Economics 2 Chaiyuth Punyasavatsut Chaiyuth Punyasavatust 1 Course materials and evaluation Texts: Dixit, A.K ; Sydsaeter et al. Grading: 40,30,30. OK or not. Resources: ftp://econ.tu.ac.th/class/archan/c
More informationGame-theoretic Robustness of Many-to-one Networks
Game-theoretic Robustness of Many-to-one Networks Aron Laszka, Dávid Szeszlér 2, and Levente Buttyán Budapest University of Technology and Economics, Department of Telecommunications, Laboratory of Cryptography
More informationProject: A survey and Critique on Algorithmic Mechanism Design
Project: A survey and Critique on Algorithmic Mechanism Design 1 Motivation One can think of many systems where agents act according to their self interest. Therefore, when we are in the process of designing
More informationSolving Large Imperfect Information Games Using CFR +
Solving Large Imperfect Information Games Using CFR + Oskari Tammelin ot@iki.fi July 16, 2014 Abstract Counterfactual Regret Minimization and variants (ie. Public Chance Sampling CFR and Pure CFR) have
More informationHow Hard Is Inference for Structured Prediction?
How Hard Is Inference for Structured Prediction? Tim Roughgarden (Stanford University) joint work with Amir Globerson (Tel Aviv), David Sontag (NYU), and Cafer Yildirum (NYU) 1 Structured Prediction structured
More informationUsing Reinforcement Learning to Optimize Storage Decisions Ravi Khadiwala Cleversafe
Using Reinforcement Learning to Optimize Storage Decisions Ravi Khadiwala Cleversafe Topics What is Reinforcement Learning? Exploration vs. Exploitation The Multi-armed Bandit Optimizing read locations
More informationModeling and Performance Analysis of BitTorrent-Like Peer-to-Peer Networks
Modeling and Performance Analysis of BitTorrent-Like Peer-to-Peer Networks Dongyu Qiu and R. Srikant Coordinated Science Laboratory University of Illinois at Urbana-Champaign CSL, UIUC p.1/22 Introduction
More informationCS 4700: Artificial Intelligence
CS 4700: Foundations of Artificial Intelligence Fall 2017 Instructor: Prof. Haym Hirsh Lecture 8 Today Informed Search (R&N Ch 3,4) Adversarial search (R&N Ch 5) Adversarial Search (R&N Ch 5) Homework
More informationClustering: Classic Methods and Modern Views
Clustering: Classic Methods and Modern Views Marina Meilă University of Washington mmp@stat.washington.edu June 22, 2015 Lorentz Center Workshop on Clusters, Games and Axioms Outline Paradigms for clustering
More informationConvergence Issues in Competitive Games
Convergence Issues in Competitive Games Vahab S Mirrokni 1, Adrian Vetta 2 and A Sidoropoulous 3 1 Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT mail: mirrokni@theorylcsmitedu 2
More informationAn Optimal Dynamic Pricing Framework for Autonomous Mobile Ad Hoc Networks
An Optimal Dynamic Pricing Framework for Autonomous Mobile Ad Hoc Networks Z. James Ji, Wei (Anthony) Yu, and K. J. Ray Liu Electrical and Computer Engineering Department and Institute for Systems Research
More informationGeometric Registration for Deformable Shapes 3.3 Advanced Global Matching
Geometric Registration for Deformable Shapes 3.3 Advanced Global Matching Correlated Correspondences [ASP*04] A Complete Registration System [HAW*08] In this session Advanced Global Matching Some practical
More informationApprenticeship Learning for Reinforcement Learning. with application to RC helicopter flight Ritwik Anand, Nick Haliday, Audrey Huang
Apprenticeship Learning for Reinforcement Learning with application to RC helicopter flight Ritwik Anand, Nick Haliday, Audrey Huang Table of Contents Introduction Theory Autonomous helicopter control
More informationLearning Socially Optimal Information Systems from Egoistic Users
Learning Socially Optimal Information Systems from Egoistic Users Karthik Raman Thorsten Joachims Department of Computer Science Cornell University, Ithaca NY karthik@cs.cornell.edu www.cs.cornell.edu/
More informationModelling competition in demand-based optimization models
Modelling competition in demand-based optimization models Stefano Bortolomiol Virginie Lurkin Michel Bierlaire Transport and Mobility Laboratory (TRANSP-OR) École Polytechnique Fédérale de Lausanne Workshop
More informationA PAC Approach to ApplicationSpecific Algorithm Selection. Rishi Gupta Tim Roughgarden Simons, Nov 16, 2016
A PAC Approach to ApplicationSpecific Algorithm Selection Rishi Gupta Tim Roughgarden Simons, Nov 16, 2016 Motivation and Set-up Problem Algorithms Alg 1: Greedy algorithm Alg 2: Linear programming Alg
More information450 Jon M. Huntsman Hall 3730 Walnut Street, Mobile:
Karthik Sridharan Contact Information Research Interests 450 Jon M. Huntsman Hall 3730 Walnut Street, Mobile: +1-716-550-2561 Philadelphia, PA 19104 E-mail: karthik.sridharan@gmail.com USA http://ttic.uchicago.edu/
More informationOn a Network Generalization of the Minmax Theorem
On a Network Generalization of the Minmax Theorem Constantinos Daskalakis Christos H. Papadimitriou {costis, christos}@cs.berkeley.edu February 10, 2009 Abstract We consider graphical games in which edges
More informationModeling and Simulating Social Systems with MATLAB
Modeling and Simulating Social Systems with MATLAB Lecture 7 Game Theory / Agent-Based Modeling Stefano Balietti, Olivia Woolley, Lloyd Sanders, Dirk Helbing Computational Social Science ETH Zürich 02-11-2015
More informationFinding optimal configurations Adversarial search
CS 171 Introduction to AI Lecture 10 Finding optimal configurations Adversarial search Milos Hauskrecht milos@cs.pitt.edu 39 Sennott Square Announcements Homework assignment is out Due on Thursday next
More informationDynamics, Non-Cooperation, and Other Algorithmic Challenges in Peer-to-Peer Computing
Dynamics, Non-Cooperation, and Other Algorithmic Challenges in Peer-to-Peer Computing Stefan Schmid Distributed Computing Group Oberseminar TU München, Germany December 2007 Networks Neuron Networks DISTRIBUTED
More informationAnarchy is free in network creation
Anarchy is free in network creation Ronald Graham Linus Hamilton Ariel Levavi Po-Shen Loh Abstract The Internet has emerged as perhaps the most important network in modern computing, but rather miraculously,
More informationThe Fly & Anti-Fly Missile
The Fly & Anti-Fly Missile Rick Tilley Florida State University (USA) rt05c@my.fsu.edu Abstract Linear Regression with Gradient Descent are used in many machine learning applications. The algorithms are
More informationA C 2 Four-Point Subdivision Scheme with Fourth Order Accuracy and its Extensions
A C 2 Four-Point Subdivision Scheme with Fourth Order Accuracy and its Extensions Nira Dyn School of Mathematical Sciences Tel Aviv University Michael S. Floater Department of Informatics University of
More informationAnarchy is Free in Network Creation
Anarchy is Free in Network Creation Ronald Graham 1, Linus Hamilton 2, Ariel Levavi 1, and Po-Shen Loh 2 1 Department of Computer Science and Engineering, University of California, San Diego, La Jolla,
More informationA Network Coloring Game
A Network Coloring Game Kamalika Chaudhuri, Fan Chung 2, and Mohammad Shoaib Jamall 2 Information Theory and Applications Center, UC San Diego kamalika@soe.ucsd.edu 2 Department of Mathematics, UC San
More informationBottleneck Routing Games on Grids
Bottleneck Routing Games on Grids Costas Busch, Rajgopal Kannan, and Alfred Samman Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA {busch,rkannan,samman}@csc.lsu.edu
More informationPricing and Forwarding Games for Inter-domain Routing
Pricing and Forwarding Games for Inter-domain Routing Full version Ameya Hate, Elliot Anshelevich, Koushik Kar, and Michael Usher Department of Computer Science Department of Electrical & Computer Engineering
More informationA Distributed Auction Algorithm for the Assignment Problem
A Distributed Auction Algorithm for the Assignment Problem Michael M. Zavlanos, Leonid Spesivtsev and George J. Pappas Abstract The assignment problem constitutes one of the fundamental problems in the
More informationOn Bounded Rationality in Cyber-Physical Systems Security: Game-Theoretic Analysis with Application to Smart Grid Protection
On Bounded Rationality in Cyber-Physical Systems Security: Game-Theoretic Analysis with Application to Smart Grid Protection CPSR-SG 2016 CPS Week 2016 April 12, 2016 Vienna, Austria Outline CPS Security
More informationA Graph-Theoretic Network Security Game
A Graph-Theoretic Network Security Game Marios Mavronicolas Vicky Papadopoulou Anna Philippou Paul Spirakis May 16, 2008 A preliminary version of this work appeared in the Proceedings of the 1st International
More informationPascal De Beck-Courcelle. Master in Applied Science. Electrical and Computer Engineering
Study of Multiple Multiagent Reinforcement Learning Algorithms in Grid Games by Pascal De Beck-Courcelle A thesis submitted to the Faculty of Graduate and Postdoctoral Affairs in partial fulfillment of
More informationTraveling Salesman Problem (TSP) Input: undirected graph G=(V,E), c: E R + Goal: find a tour (Hamiltonian cycle) of minimum cost
Traveling Salesman Problem (TSP) Input: undirected graph G=(V,E), c: E R + Goal: find a tour (Hamiltonian cycle) of minimum cost Traveling Salesman Problem (TSP) Input: undirected graph G=(V,E), c: E R
More informationLearning and inference in the presence of corrupted inputs
JMLR: Workshop and Conference Proceedings vol 40:1 21, 2015 Learning and inference in the presence of corrupted inputs Uriel Feige Weizmann Institute of Science Yishay Mansour Microsoft Research, Hertzelia
More informationMODEL SELECTION AND REGULARIZATION PARAMETER CHOICE
MODEL SELECTION AND REGULARIZATION PARAMETER CHOICE REGULARIZATION METHODS FOR HIGH DIMENSIONAL LEARNING Francesca Odone and Lorenzo Rosasco odone@disi.unige.it - lrosasco@mit.edu June 6, 2011 ABOUT THIS
More informationOnline Learning. Lorenzo Rosasco MIT, L. Rosasco Online Learning
Online Learning Lorenzo Rosasco MIT, 9.520 About this class Goal To introduce theory and algorithms for online learning. Plan Different views on online learning From batch to online least squares Other
More informationApproximation Techniques for Utilitarian Mechanism Design
Approximation Techniques for Utilitarian Mechanism Design Department of Computer Science RWTH Aachen Germany joint work with Patrick Briest and Piotr Krysta 05/16/2006 1 Introduction to Utilitarian Mechanism
More informationBargaining and Coalition Formation
1 These slides are based largely on Chapter 18, Appendix A of Microeconomic Theory by Mas-Colell, Whinston, and Green. Bargaining and Coalition Formation Dr James Tremewan (james.tremewan@univie.ac.at)
More informationEnsemble Learning. Another approach is to leverage the algorithms we have via ensemble methods
Ensemble Learning Ensemble Learning So far we have seen learning algorithms that take a training set and output a classifier What if we want more accuracy than current algorithms afford? Develop new learning
More informationMarkov Decision Processes. (Slides from Mausam)
Markov Decision Processes (Slides from Mausam) Machine Learning Operations Research Graph Theory Control Theory Markov Decision Process Economics Robotics Artificial Intelligence Neuroscience /Psychology
More informationComputer Sciences Department
Computer Sciences Department RouteBazaar: An Economic Framework for Flexible Routing Holly Esquivel Chitra Muthukrishnan Feng Niu Shuchi Chawla Aditya Akella Technical Report #654 April 29 RouteBazaar:
More informationTracking Computer Vision Spring 2018, Lecture 24
Tracking http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 24 Course announcements Homework 6 has been posted and is due on April 20 th. - Any questions about the homework? - How
More informationFailure Localization in the Internet
Failure Localization in the Internet Boaz Barak, Sharon Goldberg, David Xiao Princeton University Excerpts of talks presented at Stanford, U Maryland, NYU. Why use Internet path-quality monitoring? Internet:
More information(67686) Mathematical Foundations of AI July 30, Lecture 11
(67686) Mathematical Foundations of AI July 30, 2008 Lecturer: Ariel D. Procaccia Lecture 11 Scribe: Michael Zuckerman and Na ama Zohary 1 Cooperative Games N = {1,...,n} is the set of players (agents).
More information