Coordination-Free Computations. Christopher Meiklejohn


1 Coordination-Free Computations Christopher Meiklejohn

2 LASP DISTRIBUTED, EVENTUALLY CONSISTENT COMPUTATIONS CHRISTOPHER MEIKLEJOHN (BASHO TECHNOLOGIES, INC.) PETER VAN ROY (UNIVERSITÉ CATHOLIQUE DE LOUVAIN) 2

3 LASP MOTIVATION 3

4 SYNCHRONIZATION IS EXPENSIVE / IMPRACTICAL 4

5 MOBILE GAMES: SHARED STATE BETWEEN CLIENTS CLIENTS GO OFFLINE 5

6 INTERNET OF THINGS: DISJOINT STATE AGGREGATED UPSTREAM CLIENTS GO OFFLINE Gubbi, Jayavardhana, et al. "Internet of Things (IoT): A vision, architectural elements, and future directions." Future Generation Computer Systems 29.7 (2013):

7 NO TOTAL ORDER: REPLICATED SHARED STATE WITH OFFLINE CLIENTS CLIENTS NEED TO MAKE PROGRESS Gilbert, Seth, and Nancy Lynch. "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services." ACM SIGACT News 33.2 (2002):

8 WALL CLOCKS: UNRELIABLE AT BEST NON-DETERMINISTIC IF USED IN COMPUTATIONS Corbett, James C., et al. "Spanner: Google's globally distributed database." ACM Transactions on Computer Systems (TOCS) 31.3 (2013): 8. 8

9 CONCURRENCY RECONCILED BY USER 9

10 [Diagram, slides 10-13: replica R A applies set(1) and then set(2); replica R B concurrently applies set(3); with no total order over the writes it is ambiguous whether the replicas should converge to 2 or 3, so the conflict must be reconciled]

14 CRDTs 14

15 CRDTs PROVIDE DETERMINISTIC RESOLUTION 15

16 CRDTs: MAPS, SETS, COUNTERS, REGISTERS, GRAPHS DETERMINISTIC RESOLUTION 16

17 CRDTs REALIZE MONOTONIC STRONG EVENTUAL CONSISTENCY CORRECT REPLICAS THAT HAVE DELIVERED THE SAME UPDATES HAVE EQUIVALENT STATE Shapiro, Marc, et al. "Conflict-free replicated data types." Stabilization, Safety, and Security of Distributed Systems. Springer Berlin Heidelberg,

18 CRDTs EXAMPLE MAX REGISTER 18

19 [Diagram: R A applies set(1) then set(2); R B applies set(3); on exchange each replica merges with max(2,3), so both converge deterministically to 3]
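A minimal sketch of the max-register idea on this slide: the state is a single integer, updates and merge both take the maximum, so concurrent writes converge deterministically. The module name and API below are illustrative, not part of Lasp or riak_dt.

-module(max_register).
-export([new/0, set/2, value/1, merge/2]).

%% Initial state.
new() -> 0.

%% Updates only ever move the value upward (monotonic).
set(X, State) -> max(X, State).

value(State) -> State.

%% Merge is commutative, associative, and idempotent.
merge(S1, S2) -> max(S1, S2).

For example, max_register:merge(max_register:set(2, max_register:new()), max_register:set(3, max_register:new())) returns 3, matching the converged value in the diagram.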

20 CRDTs EXAMPLE OR-SET 20

21 [Diagram, slides 21-24: R A adds 1 with unique token a, giving state (1, {a}, {}); R C adds 1 with token b and then removes it, giving (1, {b}, {b}); after merging, every replica holds (1, {a, b}, {b}), so the observed value is {1}, because token a was never removed]
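A minimal observed-remove set sketch mirroring the (Element, AddTokens, RemoveTokens) triples on these slides; riak_dt_orset is the real implementation used by Lasp, and the module below is only illustrative.

-module(orset_sketch).
-export([new/0, add/3, remove/2, value/1, merge/2]).

%% State: a map of Element => {AddTokens, RemoveTokens}.
new() -> #{}.

%% Adding tags the element with a fresh, replica-unique token.
add(Elem, Token, State) ->
    {Adds, Removes} = maps:get(Elem, State, {sets:new(), sets:new()}),
    State#{Elem => {sets:add_element(Token, Adds), Removes}}.

%% Removing cancels only the add-tokens observed so far; concurrent
%% adds with unseen tokens survive the remove.
remove(Elem, State) ->
    case maps:find(Elem, State) of
        {ok, {Adds, Removes}} ->
            State#{Elem => {Adds, sets:union(Adds, Removes)}};
        error ->
            State
    end.

%% An element is present if some add-token has not been removed.
value(State) ->
    [Elem || {Elem, {Adds, Removes}} <- maps:to_list(State),
             sets:size(sets:subtract(Adds, Removes)) > 0].

%% Merge unions both token sets per element; commutative and idempotent.
merge(S1, S2) ->
    maps:fold(fun(Elem, {A2, R2}, Acc) ->
                      {A1, R1} = maps:get(Elem, Acc, {sets:new(), sets:new()}),
                      Acc#{Elem => {sets:union(A1, A2), sets:union(R1, R2)}}
              end, S1, S2).

Replaying the slides with this sketch: R A adds 1 with token a, R C adds 1 with token b and then removes it; merging the two replicas still observes {1}, because token a was never removed.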

25 FRAGMENTS & PROGRAMS 25

26 [Diagram, slides 26-34: two input sets {1,2,3} and {3,4,5} feed an Intersection node producing {3}, which feeds a Map node producing {9}; the later slides show the same dataflow fragment drawn twice, side by side, as fragments to be composed into larger programs]
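A hedged sketch of the dataflow pictured above, written against the Lasp API shown later in the talk (declare, update, map, and the set-theoretic operations); lasp:intersection/3 is assumed to take the same (Left, Right, Output) shape as lasp:union/3, and exact signatures may differ across Lasp versions.

%% Two input sets.
{ok, Left}  = lasp:declare(type),
{ok, Right} = lasp:declare(type),
{ok, _} = lasp:update(Left,  {add_all, [1,2,3]}, a),
{ok, _} = lasp:update(Right, {add_all, [3,4,5]}, a),

%% Intersection node: converges to {3}.
{ok, Inter} = lasp:declare(type),
ok = lasp:intersection(Left, Right, Inter),

%% Map node: squares each element, converging to {9}.
{ok, Squares} = lasp:declare(type),
ok = lasp:map(Inter, fun(X) -> X * X end, Squares),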

35 FUNCTION APPLICATION AND DATA COMPOSITION IS NONTRIVIAL 35

36 [Diagram, slides 36-40: the OR-set scenario from before, with fun(x) -> 2 end applied to the observed values of R A and R C; the outputs F(R A) and F(R C) track each replica's value ({2}, {}, {2}, ...), but they carry no add/remove metadata, so the mapped results cannot be merged]
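The failure mode above can be stated in one line. Using the hypothetical orset_sketch from earlier, mapping over the observed value throws away the add/remove tokens, so each replica's result is a plain list that can no longer be merged as a CRDT:

%% Illustrative only: the output is not CRDT state, so merge is undefined.
naive_map(F, State) ->
    [F(Elem) || Elem <- orset_sketch:value(State)].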

41 COMPOSITION: USER OBSERVABLE VALUE VS. STATE METADATA MAPPING IS NONTRIVIAL WITHOUT MAPPING METADATA; UNMERGEABLE Brown, Russell, et al. "Riak DT map: A composable, convergent replicated dictionary." Proceedings of the First Workshop on Principles and Practice of Eventual Consistency. ACM, Conway, Neil, et al. "Logic and lattices for distributed programming." Proceedings of the Third ACM Symposium on Cloud Computing. ACM, Meiklejohn, Christopher. "On the composability of the Riak DT map: expanding from embedded to multi-key structures." Proceedings of the First Workshop on Principles and Practice of Eventual Consistency. ACM,

42 LASP LANGUAGE 42

43 LASP: DISTRIBUTED RUNTIME IMPLEMENTED AS ERLANG LIBRARY USES RIAK-CORE 43

44 LASP: CRDTS AS STREAMS OF STATE CHANGES CRDTS CONNECTED BY MONOTONIC PROCESSES MANY TO ONE MAPPING OF CRDTS 44

45 PRIMITIVE OPERATIONS: MONOTONIC READ, UPDATE FUNCTIONAL: MAP, FILTER, FOLD SET-THEORETIC: PRODUCT, UNION, INTERSECTION LIFTED TO OPERATE OVER METADATA 45
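A small hedged illustration of the first two primitives, in the style of the ad-counter code later in the talk; the counter type riak_dt_gcounter and the actor name are assumptions (the later example hides the type behind a ?COUNTER macro).

%% Declare a grow-only counter and update it from one actor.
{ok, C} = lasp:declare(riak_dt_gcounter),
{ok, _} = lasp:update(C, increment, actor_a),

%% Monotonic (threshold) read: blocks until the counter reaches 1,
%% just as lasp:read(Counter, 5) does in the ad-counter server below.
{ok, _} = lasp:read(C, 1),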

46 EXAMPLE MAP 46

47 [Diagram, slides 47-50: the same map, lifted over the metadata; each replica state (1, Adds, Removes) maps to (2, Adds, Removes), e.g. (1, {a}, {}) becomes (2, {a}, {}) and (1, {b}, {b}) becomes (2, {b}, {b}), so the mapped outputs F(R A), F(R B), F(R C) all converge to (2, {a, b}, {b}) with observed value {2}]
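The key difference from the naive version: Lasp's map is lifted over the metadata, applying the function to each element while carrying its add/remove tokens through, so the output is still a mergeable OR-set. A sketch over the same hypothetical orset_sketch state shape (elements mapping to the same value have their token sets unioned, as in the (2, {a, b}, {b}) states above):

lifted_map(F, State) ->
    maps:fold(fun(Elem, {Adds, Removes}, Acc) ->
                      NewElem = F(Elem),
                      {A0, R0} = maps:get(NewElem, Acc, {sets:new(), sets:new()}),
                      Acc#{NewElem => {sets:union(Adds, A0),
                                       sets:union(Removes, R0)}}
              end, #{}, State).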

51 ARCHITECTURE 51

52 LASP STORE SHARED VARIABLE STORE 52

53 LASP BACKENDS LEVELDB, BITCASK, ETS 53

54 LASP CRDTs PROVIDED BY RIAK DT 54

55 LASP ARCHITECTURE CENTRALIZED SEMANTICS 55

56 STORE: SHARED VARIABLE STORE PROCESSES SYNCHRONIZE ON VARIABLES 56

57 LASP ARCHITECTURE DISTRIBUTED SEMANTICS 57

58 STORE: REPLICATED, SHARDED VARIABLE STORE PROCESSES SYNCHRONIZE ON VARIABLES 58

59 REPLICATED: DISTRIBUTED WITH RIAK CORE QUORUM REQUESTS; ANTI-ENTROPY PROTOCOL 59

60 HYBRID: DISTRIBUTE PROGRAMS; R/W WITH LOCAL STORE CENTRALIZED EXECUTION 60

61 EXAMPLES 61

62 %% Create initial set.
{ok, S1} = lasp:declare(type),

%% Add elements to initial set and update.
{ok, _} = lasp:update(S1, {add_all, [1,2,3]}, a),

%% Create second set.
{ok, S2} = lasp:declare(type),

%% Apply map.
ok = lasp:map(S1, fun(X) -> X * 2 end, S2),

63 %% Create initial set.
{ok, S1} = lasp_core:declare(type, Store),

%% Add elements to initial set and update.
{ok, _} = lasp_core:update(S1, {add_all, [1,2,3]}, a, Store),

%% Create second set.
{ok, S2} = lasp_core:declare(type, Store),

%% Apply map.
ok = lasp_core:map(S1, fun(X) -> X * 2 end, S2, Store),

64 AD COUNTER 64

65 AD COUNTER: TRACKS AD IMPRESSIONS PUSHES ADVERTISEMENTS TO THE CLIENT DISABLES AD AT 50,000+ IMPRESSIONS CLIENTS DISPLAY ADS WHEN OFFLINE 65

66 AD COUNTER INFORMATION FLOW 66

67 [Diagram, slides 67-75: the ad counter information flow. User-maintained CRDTs (the Rovio and Riot ad sets, their per-ad counters, and the Contracts set) feed Lasp operations: a Union of the Rovio and Riot ad sets into Ads, a Product of Ads and Contracts, and a Filter producing Ads With Contracts. Client processes Increment the per-ad counters; server processes Read each counter with a threshold of 50,000 and then Remove that ad from its set. Slides 68-75 step through the same graph, highlighting each stage and the legend (Lasp Operation, User-Maintained CRDT, Lasp-Maintained CRDT)]

76 INFORMATION FLOW: MONOTONIC METADATA TO PREVENT DUPLICATE PROPAGATION 76

77 AD COUNTER EXAMPLE CODE 77

78 %% Client process; standard recursive looping server.
client(Id, AdsWithContracts, PreviousValue) ->
    receive
        view_ad ->
            %% Get current ad list.
            {ok, {_, _, AdList0}} = lasp:read(AdsWithContracts, PreviousValue),
            AdList = riak_dt_orset:value(AdList0),

            case length(AdList) of
                0 ->
                    %% No advertisements left to display; ignore
                    %% message.
                    client(Id, AdsWithContracts, AdList0);
                _ ->
                    %% Select a random advertisement from the list of
                    %% active advertisements.
                    {#ad{counter=Ad}, _} = lists:nth(
                        random:uniform(length(AdList)), AdList),

                    %% Increment it.
                    {ok, _} = lasp:update(Ad, increment, Id),
                    lager:info("Incremented ad counter: ~p", [Ad]),

                    client(Id, AdsWithContracts, AdList0)
            end
    end.

79 %% Server functions for the advertisement counter. After 5 views,
%% disable the advertisement.
server({#ad{counter=Counter}=Ad, _}, Ads) ->
    %% Blocking threshold read for 5 advertisement impressions.
    {ok, _} = lasp:read(Counter, 5),

    %% Remove the advertisement.
    {ok, _} = lasp:update(Ads, {remove, Ad}, Ad),
    lager:info("Removing ad: ~p", [Ad]).

80 %% Generate a series of unique identifiers.
RovioAdIds = lists:map(fun(_) -> druuid:v4() end, lists:seq(1, 10)),
lager:info("Rovio Ad Identifiers are: ~p", [RovioAdIds]),

TriforkAdIds = lists:map(fun(_) -> druuid:v4() end, lists:seq(1, 10)),
lager:info("Trifork Ad Identifiers are: ~p", [TriforkAdIds]),

Ids = RovioAdIds ++ TriforkAdIds,
lager:info("Ad Identifiers are: ~p", [Ids]),

%% Generate Rovio's advertisements.
{ok, RovioAds} = lasp:declare(?SET),
lists:map(fun(Id) ->
                  %% Generate a G-Counter.
                  {ok, CounterId} = lasp:declare(?COUNTER),

                  %% Add it to the advertisement set.
                  {ok, _} = lasp:update(RovioAds,
                                        {add, #ad{id=Id, counter=CounterId}},
                                        undefined)
          end, RovioAdIds),

%% Generate Trifork's advertisements.
{ok, TriforkAds} = lasp:declare(?SET),
lists:map(fun(Id) ->
                  %% Generate a G-Counter.
                  {ok, CounterId} = lasp:declare(?COUNTER),

                  %% Add it to the advertisement set.
                  {ok, _} = lasp:update(TriforkAds,
                                        {add, #ad{id=Id, counter=CounterId}},
                                        undefined)
          end, TriforkAdIds),

%% Union ads.
{ok, Ads} = lasp:declare(?SET),
ok = lasp:union(RovioAds, TriforkAds, Ads),

%% For each identifier, generate a contract.
{ok, Contracts} = lasp:declare(?SET),
lists:map(fun(Id) ->
                  {ok, _} = lasp:update(Contracts,
                                        {add, #contract{id=Id}},
                                        undefined)
          end, Ids),

%% Compute the Cartesian product of both ads and contracts.
{ok, AdsContracts} = lasp:declare(?SET),
ok = lasp:product(Ads, Contracts, AdsContracts),

%% Filter items by join on item id.
{ok, AdsWithContracts} = lasp:declare(?SET),
FilterFun = fun({#ad{id=Id1}, #contract{id=Id2}}) ->
                    Id1 =:= Id2
            end,
ok = lasp:filter(AdsContracts, FilterFun, AdsWithContracts),

%% Launch a series of client processes, each of which is responsible
%% for displaying a particular advertisement.

%% Generate an OR-set for tracking clients.
{ok, Clients} = lasp:declare(?SET),

%% Each client takes the full list of ads when it starts, and reads
%% from the variable store.
lists:map(fun(Id) ->
                  ClientPid = spawn_link(?MODULE, client,
                                         [Id, AdsWithContracts, undefined]),
                  {ok, _} = lasp:update(Clients, {add, ClientPid}, undefined)
          end, lists:seq(1, 5)),

%% Launch a server process for each advertisement, which will block
%% until the advertisement should be disabled.

%% Create an OR-set for the server list.
{ok, Servers} = lasp:declare(?SET),

%% Get the current advertisement list.
{ok, {_, _, AdList0}} = lasp:read(AdsWithContracts),
AdList = riak_dt_orset:value(AdList0),

%% For each advertisement, launch one server for tracking its
%% impressions and wait to disable.
lists:map(fun(Ad) ->
                  ServerPid = spawn_link(?MODULE, server, [Ad, Ads]),
                  {ok, _} = lasp:update(Servers, {add, ServerPid}, undefined)
          end, AdList),

85 AD COUNTER DISTRIBUTION 85

86 [Slides 86-89: the ad counter dataflow graph repeated, with different distribution boundaries drawn around subsets of the CRDTs and Lasp operations, showing which parts of the graph run on clients and which on servers]

90 DISTRIBUTION BOUNDARIES: ARBITRARY ALLOWS COMPOSITION OF ENTIRE SYSTEM 90

91 MATERIALIZED VIEWS 91

92 MATERIALIZED VIEWS: INCREMENTALLY UPDATING, DELTA PROPAGATION SCHEMA BASED OR DYNAMIC; 2I IN RIAK 92

93 DATABASE AS STREAM OF UPDATES 93

94 [Diagram, slides 94-97: Riak holds objects K 1, K 2, K 3 replicated across nodes N 1, N 2, N 3; each replica set RS 1, RS 2, RS 3 is treated as a stream of updates to its keys]

98 COMPOSE PROGRAMS 98

99 [Diagram, slides 99-103: the same Riak replica sets feed a Lasp Stream Processor Template, from which programs are composed: a Lasp All Objects Program, a Lasp Over 65 Filter, a Lasp Over 80 Filter, and a Lasp Named Chris Filter (L)]
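A hedged sketch of how the filter programs named above might be composed with the Lasp API from the ad-counter example; the object shape (maps with age and name fields), the chaining of the filters, and the binary name are all illustrative assumptions.

%% A set of all objects streamed out of Riak.
{ok, AllObjects} = lasp:declare(type),

%% Over-65 view derived from all objects.
{ok, Over65} = lasp:declare(type),
ok = lasp:filter(AllObjects, fun(#{age := Age}) -> Age > 65 end, Over65),

%% Views compose: the over-80 view reads from the over-65 view.
{ok, Over80} = lasp:declare(type),
ok = lasp:filter(Over65, fun(#{age := Age}) -> Age > 80 end, Over80),

%% The "Named Chris" filter (L) that is distributed in the next slides.
{ok, NamedChris} = lasp:declare(type),
ok = lasp:filter(Over80, fun(#{name := Name}) -> Name =:= <<"Chris">> end, NamedChris),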

104 DISTRIBUTE APPLICATIONS 104

105 [Diagram, slides 105-108: the Named Chris Filter (L) is distributed to each replica set RS 1, RS 2, RS 3 and executed next to the data on nodes N 1, N 2, N 3; the per-replica results are merged (strong eventual consistency) and summed to produce the final result]

109 CACHE: CACHE RESULTS OF MERGE SPECULATIVELY EXECUTE BASED ON DIVERGENCE 109

110 INTERNET OF THINGS 110

111 [Slides 111-113: the Riak execution diagram repeated, then the same model applied to an IoT network: a Lasp "Temperature Sensor > 90 F" program is pushed to sensors S 1, S 2, S 3 and executed at the edge over device data U 1, U 2, U 3]

114 IOT: EXECUTE AT THE EDGE WRITE CODE THINKING ABOUT ALL DATA 114

115 RELATED WORK 115

116 RELATED WORK DISTRIBUTED OZ 116

117 RELATED WORK DERFLOW L 117

118 RELATED WORK BLOOM L 118

119 RELATED WORK LVARS 119

120 RELATED WORK D-STREAMS 120

121 RELATED WORK SUMMINGBIRD 121

122 FUTURE WORK 122

123 FUTURE WORK INVARIANT PRESERVATION 123

124 FUTURE WORK CAUSAL+ CONSISTENCY 124

125 FUTURE WORK ORSWOT OPTIMIZATION 125

126 FUTURE WORK DELTA STATE-CRDTs 126

127 FUTURE WORK OPERATION-BASED CRDTs 127

128 FUTURE WORK DEFORESTATION 128

129 SOURCE GITHUB.COM/CMEIKLEJOHN/LASP 129

130 ERLANG WORKSHOP 2014 DERFLOW DISTRIBUTED DETERMINISTIC DATAFLOW PROGRAMMING FOR ERLANG 130

131 PAPOC / EUROSYS 2015 LASP A LANGUAGE FOR DISTRIBUTED, EVENTUALLY CONSISTENT COMPUTATIONS WITH CRDTs 131

132 RICON 2015 NOVEMBER 4-6, 2015 SAN FRANCISCO, CA 132

133 SYNCFREE SYNCFREE IS A EUROPEAN RESEARCH PROJECT TAKING PLACE FOR 3 YEARS, STARTING OCTOBER 2013, AND IS FUNDED BY THE EUROPEAN UNION, GRANT AGREEMENT N

134 Questions? Please remember to evaluate via the GOTO Guide App


DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN. Chapter 7 Consistency And Replication

DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN. Chapter 7 Consistency And Replication DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN Chapter 7 Consistency And Replication Reasons for Replication Data are replicated to increase the reliability

More information

TAG: A TINY AGGREGATION SERVICE FOR AD-HOC SENSOR NETWORKS

TAG: A TINY AGGREGATION SERVICE FOR AD-HOC SENSOR NETWORKS TAG: A TINY AGGREGATION SERVICE FOR AD-HOC SENSOR NETWORKS SAMUEL MADDEN, MICHAEL J. FRANKLIN, JOSEPH HELLERSTEIN, AND WEI HONG Proceedings of the Fifth Symposium on Operating Systems Design and implementation

More information

Distributed Systems. replication Johan Montelius ID2201. Distributed Systems ID2201

Distributed Systems. replication Johan Montelius ID2201. Distributed Systems ID2201 Distributed Systems ID2201 replication Johan Montelius 1 The problem The problem we have: servers might be unavailable The solution: keep duplicates at different servers 2 Building a fault-tolerant service

More information

Improving Logical Clocks in Riak with Dotted Version Vectors: A Case Study

Improving Logical Clocks in Riak with Dotted Version Vectors: A Case Study Improving Logical Clocks in Riak with Dotted Version Vectors: A Case Study Ricardo Gonçalves Universidade do Minho, Braga, Portugal, tome@di.uminho.pt Abstract. Major web applications need the partition-tolerance

More information

Basic vs. Reliable Multicast

Basic vs. Reliable Multicast Basic vs. Reliable Multicast Basic multicast does not consider process crashes. Reliable multicast does. So far, we considered the basic versions of ordered multicasts. What about the reliable versions?

More information

Megastore: Providing Scalable, Highly Available Storage for Interactive Services & Spanner: Google s Globally- Distributed Database.

Megastore: Providing Scalable, Highly Available Storage for Interactive Services & Spanner: Google s Globally- Distributed Database. Megastore: Providing Scalable, Highly Available Storage for Interactive Services & Spanner: Google s Globally- Distributed Database. Presented by Kewei Li The Problem db nosql complex legacy tuning expensive

More information

Vector Clocks in Coq

Vector Clocks in Coq Vector Clocks in Coq An Experience Report Christopher Meiklejohn Basho Technologies, Inc. Cambridge, MA 02139 cmeiklejohn@basho.com March 6, 2014 Outline of the talk Introduction Background Implementation

More information

CSE 5306 Distributed Systems

CSE 5306 Distributed Systems CSE 5306 Distributed Systems Consistency and Replication Jia Rao http://ranger.uta.edu/~jrao/ 1 Reasons for Replication Data is replicated for the reliability of the system Servers are replicated for performance

More information

Availability versus consistency. Eventual Consistency: Bayou. Eventual consistency. Bayou: A Weakly Connected Replicated Storage System

Availability versus consistency. Eventual Consistency: Bayou. Eventual consistency. Bayou: A Weakly Connected Replicated Storage System Eventual Consistency: Bayou Availability versus consistency Totally-Ordered Multicast kept replicas consistent but had single points of failure Not available under failures COS 418: Distributed Systems

More information

Consistency and Replication

Consistency and Replication Consistency and Replication 1 D R. Y I N G W U Z H U Reasons for Replication Data are replicated to increase the reliability of a system. Replication for performance Scaling in numbers Scaling in geographical

More information

Thesis Defense: Developing Real-Time Collaborative Editing Using Formal Methods

Thesis Defense: Developing Real-Time Collaborative Editing Using Formal Methods Thesis Defense: Developing Real-Time Collaborative Editing Using Formal Methods Lars Tveito September 9th, 2016 Department of Informatics, University of Oslo Outline Introduction Formal Semantics of Editing

More information

Extending Eventually Consistent Cloud Databases for Enforcing Numeric Invariants

Extending Eventually Consistent Cloud Databases for Enforcing Numeric Invariants Extending Eventually Consistent Cloud Databases for Enforcing Numeric Invariants Valter Balegas, Diogo Serra, Sérgio Duarte Carla Ferreira, Rodrigo Rodrigues, Nuno Preguiça NOVA LINCS/FCT/Universidade

More information

Architekturen für die Cloud

Architekturen für die Cloud Architekturen für die Cloud Eberhard Wolff Architecture & Technology Manager adesso AG 08.06.11 What is Cloud? National Institute for Standards and Technology (NIST) Definition On-demand self-service >

More information

Putting together the platform: Riak, Redis, Solr and Spark. Bryan Hunt

Putting together the platform: Riak, Redis, Solr and Spark. Bryan Hunt Putting together the platform: Riak, Redis, Solr and Spark Bryan Hunt 1 $ whoami Bryan Hunt Client Services Engineer @binarytemple 2 Minimum viable product - the ideologically correct doctrine 1. Start

More information