ignite - Persistence and Split-Brain Scenarios -


1. How does Ignite handle split-brain scenarios in clustered mode?

2. In the case of putAll, does Ignite hit the persistent store for each entry, or write the whole batch to the store at once?

3. How does putAll work with regard to the persistent store if a batch size is set?

4. In the case of a partitioned cache with backups, in what order does the data move? primary -> backup -> persistence, or primary -> backup with persistence happening asynchronously in the meantime?

5. If an update is done directly in the persistence store, is it reflected in the cache without reloading? (How are backend updates handled?)

6. When an update is done in the backend, the change is not reflected in the cache after reloading with loadCache, and a plain get() does not see it either. The update shows up only after clearing the cache entry once and then calling loadCache or the get API. What is the right way to reload the cache?

    Person p1 = new Person(1, "benakaraj", "ks", 11, 26, 1000);
    Person p2 = new Person(2, "ashwin", "konale", 13, 26, 10000);

    Connection con = null;
    Statement stmt = null;

    con = ds.getConnection();
    stmt = con.createStatement();
    String sql =
        "create table Person(per_id int, name varchar(20), last_name varchar(20), org_id int, age int, salary real, primary key(per_id))";
    stmt.executeUpdate(sql);

    RocCacheConfiguration<Integer, Person> pesonConfig = new RocCacheConfiguration<>();
    pesonConfig.setName("bkendupdtCache");
    pesonConfig.setCacheMode(CacheMode.PARTITIONED);

    JdbcType jdbcType = new JdbcType();
    jdbcType.setCacheName("bkendupdtCache");
    jdbcType.setDatabaseSchema("ROC4TEST");
    jdbcType.setDatabaseTable("Person");
    jdbcType.setKeyType(Integer.class);
    jdbcType.setValueType(Person.class);

    // Key fields for Person.
    Collection<JdbcTypeField> keys = new ArrayList<>();
    keys.add(new JdbcTypeField(Types.INTEGER, "per_id", int.class, "perId"));
    jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));

    // Value fields for Person.
    Collection<JdbcTypeField> vals = new ArrayList<>();
    vals.add(new JdbcTypeField(Types.INTEGER, "per_id", int.class, "perId"));
    vals.add(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
    vals.add(new JdbcTypeField(Types.VARCHAR, "last_name", String.class, "lastName"));
    vals.add(new JdbcTypeField(Types.INTEGER, "org_id", int.class, "orgId"));
    vals.add(new JdbcTypeField(Types.INTEGER, "age", int.class, "age"));
    vals.add(new JdbcTypeField(Types.FLOAT, "salary", float.class, "salary"));
    jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));

    Collection<JdbcType> jdbcTypes = new ArrayList<>();
    jdbcTypes.add(jdbcType);

    CacheJdbcPojoStoreFactory<Integer, Person> cacheJdbcPojoStoreFactory4 =
        context.getBean(CacheJdbcPojoStoreFactory.class);
    cacheJdbcPojoStoreFactory4.setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));

    pesonConfig.setCacheStoreFactory((Factory<? extends CacheStore<Integer, Person>>) cacheJdbcPojoStoreFactory4);
    pesonConfig.setReadThrough(true);
    pesonConfig.setWriteThrough(true);

    RocCache<Integer, Person> personCache2 = rocCacheManager.createCache(pesonConfig);
    personCache2.put(1, p1);
    personCache2.put(2, p2);
    assertEquals(personCache2.get(2).getName(), "ashwin");

    // Update the row directly in the backend.
    sql = "update Person set name='abhi' where per_id=2";
    stmt.execute(sql);

    // Fails - the assertion sees the stale value.
    personCache2.loadCache(null);
    assertEquals(personCache2.get(2).getName(), "abhi");

    // Works fine.
    personCache2.clear(2);
    assertEquals(personCache2.get(2).getName(), "abhi");

    // Works fine.
    personCache2.clear();
    personCache2.loadCache(null);
    assertEquals(personCache2.get(2).getName(), "abhi");

    sql = "drop table Person";
    stmt.executeUpdate(sql);
    stmt.close();
    con.close();
    rocCacheManager.destroyCache("bkendupdtCache");

  1. By default, two independent clusters will never join each other again (otherwise data inconsistency would be possible). You have to manually stop one of the clusters and restart it after the network is restored. However, automatic resolution can be implemented as a plugin. For example, GridGain provides this functionality out of the box: https://gridgain.readme.io/docs/network-segmentation

  2. Ignite tries to minimize persistence store invocations as much as possible. If your storage supports batch reads and writes, it's a good idea to take advantage of that when implementing the loadAll, writeAll and removeAll methods.
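To illustrate the batching idea, here is a plain-Java sketch (not Ignite's actual implementation; the class and method names are hypothetical) of how a putAll payload can be split into fixed-size chunks so the store is invoked once per chunk rather than once per entry:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchWriteSketch {
    // Split a putAll payload into fixed-size batches; a writeAll-style
    // store method would then issue one JDBC batch per chunk.
    static <K, V> List<Map<K, V>> toBatches(Map<K, V> entries, int batchSize) {
        List<Map<K, V>> batches = new ArrayList<>();
        Map<K, V> current = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : entries.entrySet()) {
            current.put(e.getKey(), e.getValue());
            if (current.size() == batchSize) {
                batches.add(current);
                current = new LinkedHashMap<>();
            }
        }
        if (!current.isEmpty())
            batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> data = new LinkedHashMap<>();
        for (int i = 1; i <= 7; i++)
            data.put(i, "v" + i);

        // 7 entries with batch size 3 -> 3 store invocations (3 + 3 + 1).
        List<Map<Integer, String>> batches = toBatches(data, 3);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 1
    }
}
```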

  3. A batch update operation is split into parts based on node mappings. Each part of the batch is persisted at once on the corresponding primary node.

  4. The store is updated atomically with the primary node (if the write to the store fails, the cache is not updated, and vice versa). Backups are updated asynchronously in the background by default.
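The backup behavior described above is controlled by the cache's write synchronization mode. A minimal configuration sketch (the cache name and backup count are arbitrary examples):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("personCache");
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setBackups(1);

// Default: put() returns once the primary node is updated;
// backups are updated asynchronously in the background.
cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);

// Alternative: wait for both primary and backup nodes before put() returns.
// cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
```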

  5. If possible, you should avoid this and treat Ignite as the primary data storage with an optional store at the backend (i.e., always access the data through the Ignite API). There is no easy way to propagate DB updates to Ignite.

  6. You can invalidate entries using the clear/clearAll methods, or reload them using the loadAll method. Another option is to use expirations: https://apacheignite.readme.io/docs/expiry-policies
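With read-through enabled, an expiry policy makes stale entries age out so the next get() goes back to the store. A configuration sketch (the 5-minute TTL and cache name are arbitrary examples):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("personCache");
cfg.setReadThrough(true);

// Entries expire 5 minutes after creation; a get() after expiry goes
// through the store again and therefore picks up backend changes.
cfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));
```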

