A Variable-Grain Consistency Maintenance Scheme for Shared Data on Emergency and Rescue Applications
Institute of Computer & Communication
In this thesis, we propose the Seagull system, which transparently maintains the consistency of the contents of multiple replicas of rescue data in emergency and rescue environments. Seagull adopts optimistic replication between sites to provide a high degree of availability, and pessimistic replication within a site to provide stricter consistency. In addition, Seagull dynamically adjusts the consistency granularity to improve performance, since doing so achieves a higher degree of parallelism under false sharing. Finally, Seagull manages data consistency transparently, so users can run their existing programs on Seagull without modifying them.
In the scenario of emergency and rescue operations, information sharing is the most important factor affecting the success or failure of the entire operation. However, efficient information sharing is difficult to achieve in such a scenario because no communication infrastructure exists at the disaster sites.

Generally speaking, the network condition is relatively reliable in the intra-site environment and relatively unreliable in the inter-site environment. Moreover, network partitioning may occur between two sites. Therefore, replication techniques from data grids should be used in emergency and rescue applications to improve the efficiency of information sharing. However, replication introduces the problem of keeping the replicas consistent.

In this context, we propose a middleware called Seagull that transparently manages data consistency for emergency and rescue applications. Seagull adopts the optimistic replication technique in the inter-site environment to provide high availability, and the pessimistic replication technique in the intra-site environment to provide strong consistency. Moreover, it adopts an adaptive consistency granularity strategy that improves the performance of consistency management, because this strategy provides higher parallelism when false sharing happens. Lastly, Seagull manages data consistency transparently, so users do not need to modify their source code to run on Seagull.
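The adaptive consistency granularity strategy can be illustrated with a minimal sketch. This is not Seagull's actual implementation: the `Grain` and `AdaptiveFile` classes, the halving strategy, and the `MIN_GRAIN` threshold are all assumptions made for illustration. The idea shown is that a whole-file consistency unit is split into smaller units when two clients write disjoint regions of the same file (false sharing), so both writers can proceed in parallel; when writes truly overlap (true sharing), one writer must still wait.

```python
class Grain:
    """A consistency unit covering the byte range [start, end)."""
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.owner = None   # client currently holding the write lock, if any
        self.dirty = None   # sub-range the owner has actually written

    def overlaps(self, start, end):
        return start < self.end and end > self.start


class AdaptiveFile:
    """Starts with one whole-file unit and halves it on false sharing."""
    MIN_GRAIN = 4096  # stop splitting below this size (assumed threshold)

    def __init__(self, size):
        self.grains = [Grain(0, size)]

    def acquire(self, client, start, end):
        """Try to write-lock [start, end); returns True on success."""
        for g in list(self.grains):
            if not g.overlaps(start, end):
                continue
            if g.owner is None or g.owner == client:
                lo, hi = max(g.start, start), min(g.end, end)
                if g.owner == client and g.dirty:       # extend existing range
                    lo, hi = min(lo, g.dirty[0]), max(hi, g.dirty[1])
                g.owner, g.dirty = client, (lo, hi)
            elif g.end - g.start > self.MIN_GRAIN:
                self._split(g)          # false sharing: refine the grain, retry
                return self.acquire(client, start, end)
            else:
                return False            # true sharing: this writer must wait
        return True

    def _split(self, g):
        mid = (g.start + g.end) // 2
        halves = [Grain(g.start, mid), Grain(mid, g.end)]
        for h in halves:                # the owner keeps only the half it
            if g.dirty and h.overlaps(*g.dirty):   # actually wrote to
                h.owner, h.dirty = g.owner, g.dirty
        i = self.grains.index(g)
        self.grains[i:i + 1] = halves
```

With this sketch, a client writing near the start of a file and another writing near the end both obtain locks after a few splits, whereas two writers to the same region conflict once the grain reaches the minimum size.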
Chapter 1 Introduction
1.1. Emergency and rescue applications
1.2. Motivation
1.3. Goals
Chapter 2 Related Work
2.1. Replication techniques
2.2. Consistency granularity
2.3. Concurrency transparency
Chapter 3 System Overview and Design
3.1. Hybrid replication techniques
3.1.1. Optimistic technique
3.1.2. Pessimistic technique
3.2. Adaptive consistency granularity
3.3. Transparency
3.3.1. Application transparency
3.3.2. File location transparency
Chapter 4 Implementation
4.1. System architecture
4.2. Collector module
4.3. Lock server module
4.4. Aggregator module
4.5. Manager module
Chapter 5 Performance Evaluation
5.1. Experimental environment
5.2. The performance of the intra-site environment
5.2.1. Adaptive consistency granularity
5.2.2. Grain size threshold
5.2.3. Revoke lock time
5.3. The performance of the inter-site environment
5.3.1. Write threshold
5.3.2. Time threshold
Chapter 6 Conclusions and Future Work