Consistency model

In computer science, consistency models are used in distributed systems such as distributed shared memory systems or distributed data stores (for example, filesystems, databases, optimistic replication systems, or Web caches). The system is said to support a given model if operations on memory follow specific rules. The data consistency model specifies a contract between the programmer and the system, wherein the system guarantees that if the programmer follows the rules, memory will be consistent and the results of memory operations will be predictable.

High-level languages, such as C++ and Java, partially maintain the contract by translating memory operations into low-level operations in a way that preserves memory semantics. To hold to the contract, compilers may reorder some memory instructions, and library calls such as pthread_mutex_lock() encapsulate the required synchronization.[1]

Verifying sequential consistency is undecidable in general, even for finite-state cache-coherence protocols.[2]

Consistency models define rules for the apparent order and visibility of updates, and they form a continuum with tradeoffs.[3]

Example

Assume that the following case occurs:[3]

  • Row X is replicated on nodes M and N
  • Client A writes row X to node N
  • After a period of time t, client B reads row X from node M

The consistency model has to determine whether client B sees the write from client A or not.
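
Whether client B sees the write depends on whether node N has propagated the update to node M before B's read. The following hypothetical Python sketch (the node and function names are invented for illustration) shows both outcomes:

    # Hypothetical sketch of the scenario above: row X is replicated on
    # nodes M and N, client A writes to N, and client B later reads from M.
    # Whether B sees the new value depends on whether the update has been
    # propagated from N to M by the time of the read.

    node_M = {"X": "old"}   # replica of row X on node M
    node_N = {"X": "old"}   # replica of row X on node N

    def write_to_N(value):
        """Client A writes row X to node N only."""
        node_N["X"] = value

    def propagate_N_to_M():
        """Replication from N to M; under weak models this may lag."""
        node_M["X"] = node_N["X"]

    def read_from_M():
        """Client B reads row X from node M."""
        return node_M["X"]

    write_to_N("new")
    print(read_from_M())   # "old": the update has not reached M yet
    propagate_N_to_M()
    print(read_from_M())   # "new": after propagation, B sees A's write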

Types

There are two methods to define and categorize consistency models: issue and view.

Issue: The issue method describes the restrictions that define how a process can issue operations.

View: The view method defines the order of operations visible to processes.

For example, a consistency model can define that a process is not allowed to issue an operation until all previously issued operations have completed.

Different consistency models enforce different conditions. One consistency model can be considered stronger than another if it requires all conditions of that model and more. In other words, a model with fewer constraints is a weaker consistency model.

Strict consistency

Strict consistency is the strongest consistency model. It requires that if a process reads any memory location, the value returned is the value written by the most recent write to that location. This model makes perfect sense for a uniprocessor system, but it is almost impossible to implement in a distributed shared memory system. Consider two processors, A and B. Processor A writes a value at a particular instant and processor B reads that value at a later time. Consider a light cone originating at processor A. If processors A and B are placed adjacent to each other on a timeline, the point where a ray of light from this cone can touch processor B's timeline determines the earliest instant at which processor B can see the new value written by processor A. If processor B tries to read the data before this time, it will read the previous value, even though processor A has already written the new one.

Sequential consistency

The sequential consistency model, as defined by Lamport (1979),[4] is a weaker memory model than strict consistency.

Linearizability (also known as atomic consistency) can be defined as sequential consistency with a real-time constraint.

Causal consistency

Causal consistency can be considered a weakening of sequential consistency that categorizes events into those that are causally related and those that are not. It requires only that write operations that are causally related be seen in the same order by all processes.

Processor consistency

The processor consistency model[5] is similar to the PRAM consistency model, with the stronger condition that all writes to the same memory location must be seen in the same sequential order by all other processes. Processor consistency is weaker than sequential consistency but stronger than the PRAM consistency model.

The Stanford DASH multiprocessor system implements a variation of processor consistency which is incomparable to (neither weaker nor stronger than) Goodman's definition.[6]

PRAM consistency (also known as FIFO consistency)

PRAM consistency (pipelined RAM) was presented by Lipton and Sandberg in 1988[7] as one of the first described consistency models. Due to its informal definition, there are in fact at least two subtly different interpretations,[6] one by Ahamad et al. and one by Mosberger.

In PRAM consistency, all processes view the operations of a single process in the same order in which they were issued by that process, while operations issued by different processes can be viewed in different orders by different processes. PRAM consistency is weaker than processor consistency.
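
As an illustrative sketch (not taken from the cited sources), the check below encodes the PRAM condition for a small execution: every reader must see each single writer's operations in issue order, but two readers may interleave different writers' operations differently, which sequential consistency would forbid:

    # Hypothetical illustration: P1 writes a=1 then a=2; P2 writes b=1 then b=2.
    # Under PRAM (FIFO) consistency, every process must see each single
    # writer's operations in issue order, but different readers may interleave
    # the two writers differently.

    issued = {
        "P1": ["a=1", "a=2"],
        "P2": ["b=1", "b=2"],
    }

    # Two readers observing the four writes in different global orders.
    observed = {
        "R1": ["a=1", "a=2", "b=1", "b=2"],
        "R2": ["b=1", "b=2", "a=1", "a=2"],
    }

    def is_subsequence(needle, haystack):
        it = iter(haystack)
        return all(op in it for op in needle)

    def pram_consistent(issued, observed):
        """Each observer must see every writer's ops in their issue order."""
        return all(
            is_subsequence(ops, view)
            for view in observed.values()
            for ops in issued.values()
        )

    print(pram_consistent(issued, observed))  # True: allowed under PRAM
    # Sequential consistency would additionally require R1 and R2 to agree
    # on a single interleaving, which they do not here.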

Cache consistency

Cache consistency[5][8] requires that all write operations to the same memory location be performed in some sequential order. Cache consistency is weaker than processor consistency and incomparable with PRAM consistency.

Slow consistency

In slow consistency,[8] if a process reads a value previously written to a memory location, it cannot subsequently read any earlier value from that location. Writes performed by a process are immediately visible to that process. Slow consistency is a weaker model than PRAM and cache consistency.

Example: The first process writes 1 to memory location X and then writes 1 to memory location Y. The second process reads 1 from Y and then reads 0 from X, even though X was written before Y.

Hutto and Ahamad (1990)[9] illustrate that, with appropriate programming, slow memory (consistency) can be expressive and efficient. They mention that slow memory has two valuable properties: locality and support for reduction from atomic memory. They propose two algorithms to demonstrate the expressiveness of slow memory.

Release consistency

In release consistency, synchronization accesses are divided into acquire and release operations. Ordinary shared accesses issued after an acquire must wait until the acquire completes, and a release may complete only after all ordinary shared accesses issued before it have completed.

Entry consistency

Entry consistency is similar to release consistency, but it requires every shared data item to be associated with a synchronization variable; only the data guarded by an acquired synchronization variable is made consistent at that acquire.

General consistency

In general consistency,[10] all the copies of a memory location are eventually identical after all processes' writes are completed.

Local consistency

In local consistency,[8] each process performs its own operations in the order defined by its program. There is no constraint on the ordering in which the write operations of other processes appear to be performed. Local consistency is the weakest consistency model in shared memory systems.

Several further consistency models are obtained by relaxing the requirements above; these are described in the following sections.

Relaxed Memory Consistency Models

Some consistency models are defined by relaxing one or more requirements of sequential consistency; these are called relaxed consistency models.[13] These models do not provide memory consistency at the hardware level; instead, programmers are responsible for enforcing memory consistency by applying synchronization techniques.

Relaxed consistency models can be characterized along four dimensions:

  • Relaxation: One way to categorize relaxed consistency models is by which sequential consistency requirements are relaxed. Less strict models can be obtained by relaxing either the program order or the write atomicity requirement defined by Adve and Gharachorloo, 1996.[14] Program order guarantees that each process issues its memory requests in the order specified by its program, and write atomicity requires that memory requests be serviced in the order of a single FIFO queue. In relaxing program order, any or all of the orderings between operation pairs (write-after-write, read-after-write, or read/write-after-read) can be relaxed (see the litmus test after this list). In a relaxed write-atomicity model, a process can view its own writes before any other processor does.
  • Synchronizing vs. non-synchronizing: A synchronizing model divides memory accesses into two groups and assigns different consistency restrictions to each group, so that one group can use a weak consistency model while the other needs a more restrictive one. In contrast, a non-synchronizing model assigns the same consistency model to all memory access types.
  • Issue vs. view-based:[8] The issue method provides a simulation of sequential consistency by defining restrictions on how processes may issue memory operations, whereas the view method describes visibility restrictions on the order of events as seen by processes.
  • Relative model strength: Some consistency models are more restrictive than others; that is, stricter models enforce more constraints as consistency requirements. The strength of a model can be characterized by the program-order or atomicity relaxations it allows, and models can be compared accordingly. Two models are directly comparable if one applies the same relaxations as the other, or more. Models that relax different requirements are not directly comparable.
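
For example, relaxing the ordering from a write to a following read permits the classic store-buffer outcome below, which sequential consistency forbids. The sketch simply enumerates the sequentially consistent interleavings of the two threads to show that the outcome r1 = r2 = 0 never occurs under sequential consistency (the thread programs and names are illustrative only):

    from itertools import permutations  # (not needed; interleavings built directly)

    # Classic "store buffer" litmus test (illustrative sketch).
    # Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x
    # Under sequential consistency, at least one of r1, r2 must be 1.
    # Relaxing the write -> read program order (e.g. with store buffers)
    # additionally allows r1 == r2 == 0.

    T1 = [("write", "x"), ("read", "y", "r1")]
    T2 = [("write", "y"), ("read", "x", "r2")]

    def interleavings(a, b):
        """All interleavings of a and b that preserve each thread's order."""
        if not a:
            yield list(b); return
        if not b:
            yield list(a); return
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

    outcomes = set()
    for schedule in interleavings(T1, T2):
        mem, regs = {"x": 0, "y": 0}, {}
        for op in schedule:
            if op[0] == "write":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        outcomes.add((regs["r1"], regs["r2"]))

    print(sorted(outcomes))  # (0, 0) is absent: sequential consistency forbids it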

Relaxation Models

The following are some relaxed consistency models:

Weak Ordering

There are two categories of memory operations in weak ordering:[13] data operations and synchronization operations. Synchronization operations act as fences: a synchronization operation may not be issued until all previously issued data operations have completed, and no later data operation may be issued until the synchronization operation has completed.

Release Consistency

Release consistency refines weak ordering by dividing synchronization operations into acquire and release operations, as described in the release consistency section above.

Processor Consistency

The processor consistency model,[5] described above, is similar to the PRAM consistency model with the stronger condition that all writes to the same memory location must be seen in the same sequential order by all other processes. Viewed as a relaxation of sequential consistency, it relaxes the ordering from a write to a following read.

Transactional Memory Models

The transactional memory model[13] combines cache coherency and memory consistency models as a communication model for shared memory systems supported by software or hardware; a transactional memory model provides both memory consistency and cache coherency. A transaction is a sequence of operations executed by a process that transforms data from one consistent state to another. A transaction either commits when there is no conflict or aborts. On commit, all changes become visible to all other processes when the transaction completes; on abort, all changes are discarded. Compared to relaxed consistency models, a transactional model is easier to use and can provide higher performance than a sequential consistency model.
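
A minimal sketch of the commit/abort behavior described above, using optimistic version checks, is shown below; it is an assumed illustration, not the API of any real transactional memory system:

    import threading

    # Minimal optimistic-transaction sketch (hypothetical, not a real STM
    # library): a transaction records the versions it read, and at commit
    # time it either publishes all of its writes atomically (if nothing it
    # read has changed) or aborts and discards them.

    class VersionedStore:
        def __init__(self):
            self._data = {}        # key -> (version, value)
            self._lock = threading.Lock()

        def read(self, key):
            with self._lock:
                return self._data.get(key, (0, None))

        def commit(self, read_set, write_set):
            """Atomically apply write_set if read_set versions are unchanged."""
            with self._lock:
                for key, seen_version in read_set.items():
                    current_version, _ = self._data.get(key, (0, None))
                    if current_version != seen_version:
                        return False                      # conflict: abort
                for key, value in write_set.items():
                    version, _ = self._data.get(key, (0, None))
                    self._data[key] = (version + 1, value)
                return True                               # commit

    store = VersionedStore()

    def transfer(src, dst, amount):
        """Retry the transaction until it commits without conflict."""
        while True:
            v_src, src_val = store.read(src)
            v_dst, dst_val = store.read(dst)
            read_set = {src: v_src, dst: v_dst}
            write_set = {src: (src_val or 0) - amount, dst: (dst_val or 0) + amount}
            if store.commit(read_set, write_set):
                return

    transfer("a", "b", 10)
    print(store.read("a"), store.read("b"))  # (1, -10) (1, 10)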

Consistency and replication

Tanenbaum et al., 2007[15] define two main reasons for replication: reliability and performance. Reliability can be achieved in a replicated file system by switching to another replica if the current replica fails; replication also protects data from corruption by providing multiple copies on different replicas. Replication further improves performance by dividing the work. While replication can improve performance and reliability, it can cause consistency problems among the multiple copies of the data. The copies are consistent if a read operation returns the same value from all copies and a write operation, as a single atomic operation (transaction), updates all copies before any other operation takes place. Tanenbaum and Van Steen, 2007[15] refer to this type of consistency as tight consistency, provided by synchronous replication. However, applying global synchronization to keep all copies consistent is costly. One way to decrease the cost of global synchronization and improve performance is to weaken the consistency restrictions.

Data-centric consistency models

Tanenbaum et al., 2007[15] define the consistency model as a contract between the software (processes) and the memory implementation (data store). This model guarantees that if the software follows certain rules, the memory works correctly. Since defining the last write operation is difficult in a system without a global clock, some restrictions can be applied to the values that a read operation may return.

Consistent ordering of operations

Some consistency models, such as the sequential and causal consistency models, deal with the order of operations on shared replicated data in order to provide consistency. In these models, all replicas must agree on a consistent global ordering of updates.

Sequential consistency

The goal of data-centric consistency models is to provide a consistent view of a data store on which processes may carry out concurrent updates. One important data-centric consistency model is sequential consistency, defined by Lamport (1979).[4] Tanenbaum et al., 2007[15] define sequential consistency by the following condition:

"The result of any execution is the same as if the (read and write) operations by all processes on the data store were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program."[15]

Adve and Gharachorloo, 1996[14] define two requirements to implement sequential consistency: program order and write atomicity.

  • Program order: Program order guarantees that each process issues its memory requests in the order specified by its program.
  • Write atomicity: Write atomicity requires that memory requests be serviced in the order of a single FIFO queue.

In sequential consistency, there is no notion of time or of a most recent write operation. There is some interleaving of operations that is the same for all processes. A process can see the write operations of all processes, but only its own read operations.
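
For small executions, Lamport's condition can be checked directly by searching for a single interleaving that respects every process's program order and in which each read returns the value most recently written. The brute-force checker below is only an illustrative sketch (the search is exponential, and, as noted above, verifying sequential consistency is undecidable in general):

    # Brute-force check of Lamport's condition for a small execution
    # (illustrative sketch only; the search is exponential in general).
    # Each process is a list of operations: ("w", var, value) or ("r", var, value),
    # where a read records the value it returned.  Memory locations start at 0.

    def sequentially_consistent(processes):
        def search(positions, memory):
            if all(positions[p] == len(ops) for p, ops in processes.items()):
                return True
            for p, ops in processes.items():
                i = positions[p]
                if i == len(ops):
                    continue
                kind, var, val = ops[i]
                if kind == "r" and memory.get(var, 0) != val:
                    continue                      # this read cannot go next
                new_mem = dict(memory)
                if kind == "w":
                    new_mem[var] = val
                if search({**positions, p: i + 1}, new_mem):
                    return True
            return False

        return search({p: 0 for p in processes}, {})

    # P1 writes x=1; P2 reads x=1 and then x=0.  No interleaving explains this,
    # so the execution is not sequentially consistent.
    processes = {
        "P1": [("w", "x", 1)],
        "P2": [("r", "x", 1), ("r", "x", 0)],
    }
    print(sequentially_consistent(processes))  # False

    # If P2 had read 0 and then 1, a valid interleaving exists.
    processes["P2"] = [("r", "x", 0), ("r", "x", 1)]
    print(sequentially_consistent(processes))  # True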

Linearizability[16] (atomic memory)[13] can be defined as sequential consistency with a real-time constraint, obtained by considering a begin time and an end time for each operation. An execution is linearizable if each operation can be assigned a point between its begin time and its end time such that the resulting order of operations satisfies sequential consistency.

Causal consistency

Causal consistency,[15] defined by Hutto and Ahamad, 1990,[9] is a weaker consistency model than sequential consistency: it distinguishes between operations that are causally related and those that are not. For example, if an event b takes effect from an earlier event a, causal consistency guarantees that all processes see event b after event a.

Tanenbaum et al., 2007[15] define the condition for a data store to be considered causally consistent as follows:

"Writes that are potentially causally related must be seen by all processes in the same order. Concurrent writes may be seen in a different order on different machines."[15]

Grouping operations[15]

In grouping operations, accesses to synchronization variables are sequentially consistent. A process is allowed to access a synchronization variable only after all previous writes have been completed. In other words, access to a synchronization variable is not permitted until all previous operations on synchronization variables have been completely performed.

Continuous consistency

Continuous consistency is described below, in the consistency protocols section.

Client-centric consistency models[15]

In distributed systems, maintaining sequential consistency in order to control concurrent operations is essential. In some special data stores without simultaneous updates, client-centric consistency models can deal with inconsistencies in a less costly way. The following are some client-centric consistency models:

Eventual consistency

Eventual consistency[15] is a weak consistency model for systems that lack simultaneous updates. It states that if no updates take place for a long time, all replicas eventually become consistent.

Monotonic read consistency

Tanenbaum et al., 2007[15] defines monotonic read consistency as follows:

"If a process reads the value of a data item x, any successive read operation on x by that process will always return that same value or a more recent value."[15]

Monotonic read consistency guarantees that after a process reads a value of data item x at time t, it will never see an older value of that data item.
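
One way to provide this guarantee is for each client session to remember the highest version it has read and to refuse (or retry elsewhere) any read that would return an older version. The sketch below is a hypothetical client-side illustration; the same bookkeeping, applied to a session's own writes, yields the remaining client-centric guarantees described next:

    # Hypothetical client-side sketch of monotonic reads: the session tracks
    # the highest version of each item it has seen, and refuses to go
    # backwards when a replica returns older data.

    class Session:
        def __init__(self):
            self.last_seen = {}            # data item -> highest version read

        def read(self, replica, item):
            version, value = replica[item]          # replica stores (version, value)
            if version < self.last_seen.get(item, 0):
                raise RuntimeError("stale replica: retry elsewhere")
            self.last_seen[item] = version
            return value

    fresh_replica = {"x": (2, "new")}
    stale_replica = {"x": (1, "old")}

    session = Session()
    print(session.read(fresh_replica, "x"))   # "new", remembers version 2
    try:
        session.read(stale_replica, "x")      # version 1 < 2: violates monotonic reads
    except RuntimeError as e:
        print(e)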

Monotonic write consistency

The monotonic write consistency condition is defined by Tanenbaum et al., 2007[15] as follows:

"A write operation by a process on a data item X is completed before any successive write operation on X by the same process."[15]

Read-your-writes consistency

A value written by a process to a data item X will always be available to a subsequent read operation performed by the same process on data item X.[15]

Writes-follows-reads consistency

In writes-follow-reads consistency, updates are propagated after the previous read operations have been performed. Tanenbaum et al., 2007[15] define the following condition for writes-follow-reads consistency:

"A write operation by a process on a data item x following a previous read operation on x by the same process is guaranteed to take place on the same or a more recent value of x that was read."[15]

Consistency protocols

The implementation of a consistency model is defined by a consistency protocol. Tanenbaum et al., 2007[15] illustrate several consistency protocols for data-centric models.

Continuous consistency

Continuous consistency was introduced by Yu and Vahdat (2000).[17] In this model, the consistency semantics of an application are described using conits. Since consistency requirements can differ depending on application semantics, Yu and Vahdat (2000)[17] hold that a predefined uniform consistency model may not be an appropriate approach; the application should specify the consistency requirements that satisfy its semantics. In this model, an application specifies each consistency requirement as a conit (abbreviation of consistency unit). A conit can be a physical or logical unit of consistency and is used to measure consistency. Tanenbaum et al., 2007[15] describe the notion of a conit by giving an example. There are three kinds of inconsistency that can be tolerated by applications:

  • Deviation in numerical values:[17] Numerical deviation bounds the difference between the value of a conit and the relative value of the last update. A weight can be assigned to each write, defining the importance of that write in the specific application; the total weight of the unseen writes for a conit is then its numerical deviation. There are two types of numerical deviation: absolute and relative numerical deviation.
  • Deviation in ordering:[17] Ordering deviation is the discrepancy between the local order of writes in a replica and their relative ordering in the eventual final image.
  • Deviation in staleness between replicas:[17] Staleness deviation bounds the validity of the oldest write by bounding the difference between the current time and the time of the oldest write on a conit not seen locally. Each server has a local queue of uncertain writes that require an actual order to be determined before they are applied to the conit. The maximal length of the uncertain-writes queue is the bound on the ordering deviation. When the number of such writes exceeds the limit, instead of accepting newly submitted writes, the server attempts to commit the uncertain writes by communicating with other servers, based on the order in which the writes should be executed.

If all three deviation bounds are set to zero, the continuous consistency model reduces to strong consistency.
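
The three bounds can be pictured with a small bookkeeping sketch for a single conit (hypothetical code; a real continuous-consistency implementation negotiates these bounds between replicas): the replica tracks the writes it has not yet seen and triggers synchronization once any bound is exceeded:

    import time

    # Hypothetical bookkeeping for a single conit: a replica records the
    # writes it has not yet seen/applied and checks the three deviation
    # bounds described above.  This only shows the idea, not the protocol.

    class Conit:
        def __init__(self, numerical_bound, order_bound, staleness_bound):
            self.numerical_bound = numerical_bound    # max total weight of unseen writes
            self.order_bound = order_bound            # max length of the uncertain-writes queue
            self.staleness_bound = staleness_bound    # max age (seconds) of the oldest unseen write
            self.unseen_writes = []                   # list of (timestamp, weight)

        def record_remote_write(self, weight):
            self.unseen_writes.append((time.time(), weight))

        def must_synchronize(self):
            if not self.unseen_writes:
                return False
            numerical = sum(w for _, w in self.unseen_writes)
            ordering = len(self.unseen_writes)
            staleness = time.time() - min(t for t, _ in self.unseen_writes)
            return (numerical > self.numerical_bound
                    or ordering > self.order_bound
                    or staleness > self.staleness_bound)

    conit = Conit(numerical_bound=5, order_bound=3, staleness_bound=60)
    conit.record_remote_write(weight=2)
    print(conit.must_synchronize())   # False: all deviations within bounds
    conit.record_remote_write(weight=4)
    print(conit.must_synchronize())   # True: numerical deviation 6 exceeds 5
    # Setting all three bounds to zero forces synchronization on every write,
    # i.e. strong consistency.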

Primary-based protocols

[Figure: Primary-backup protocol]
[Figure: Primary-backup protocol (local-write)]

Primary-based protocols[15] can be considered a class of consistency protocols that are simpler to implement. For instance, sequential ordering is a popular consistency model when consistent ordering of operations is considered, and sequential ordering can be achieved with a primary-based protocol. In these protocols, each data item in the data store has an associated primary that coordinates write operations on that item.

Remote-write protocols

In the simplest primary-based protocol that supports replication, also known as the primary-backup protocol, write operations are forwarded to a single server and read operations can be performed locally.

Example: Tanenbaum et al., 2007[15] give an example of a primary-backup protocol, illustrated in the primary-backup figure above. When a client requests a write, the request is forwarded to the primary server. The primary sends the update to the backups, receives acknowledgements from all of them, and then sends an acknowledgement of write completion to the client. Any client can read the last available update locally. The trade-off of this protocol is that a client issuing an update request may have to wait a long time for the acknowledgement before it can continue. This problem can be solved by performing the update locally and then asking the backups to perform their updates. The non-blocking primary-backup protocol does not guarantee that the update has been applied on all backup servers, but it improves performance. In the primary-backup protocol, all processes see the same order of write operations, since this protocol orders all incoming writes based on a globally unique time. Blocking protocols guarantee that processes view the result of the last write operation.
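
A blocking remote-write protocol can be sketched as follows (hypothetical code, with invented class names): the primary applies the write, pushes it to every backup, and acknowledges the client only after all backups have acknowledged, after which any replica can serve reads locally:

    # Hypothetical sketch of a blocking primary-backup (remote-write) protocol:
    # writes go to the primary, which updates every backup before acknowledging
    # the client; reads can then be served locally from any replica.

    class Replica:
        def __init__(self):
            self.data = {}

        def apply(self, item, value):
            self.data[item] = value
            return "ack"

        def read(self, item):
            return self.data.get(item)

    class Primary(Replica):
        def __init__(self, backups):
            super().__init__()
            self.backups = backups

        def write(self, item, value):
            self.apply(item, value)
            # Blocking variant: wait for every backup to acknowledge before
            # acknowledging the client.  A non-blocking variant would return
            # here and propagate the update in the background.
            acks = [b.apply(item, value) for b in self.backups]
            assert all(a == "ack" for a in acks)
            return "write complete"

    backups = [Replica(), Replica()]
    primary = Primary(backups)
    print(primary.write("x", 42))      # client blocks until all backups ack
    print(backups[0].read("x"))        # 42: any replica can serve the read locally
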
Local-write protocols

In primary-based local-write protocols,[15] the primary copy moves between processes that wish to perform an update. To update a data item, a process first moves it to its own location. As a result, successive write operations can be performed locally, while each process can read its local copy of the data item. After the primary finishes its update, the update is forwarded to the other replicas, and all of them perform the update locally. This non-blocking approach can lead to improved performance. The local-write figure above depicts this approach: a process requests a write operation on a data item x, and the current server becomes the new primary for x. The write operation is performed, and when the request is finished, the primary sends an update request to the other backup servers. Each backup sends an acknowledgement to the primary after finishing the update.

Replicated-write protocols

In replicated-write protocols,[15] unlike primary-based protocols, all updates are carried out at all replicas.

Active replication

In active replication,[15] there is a process associated with each replica to perform the write operation. In other words, updates are sent to each replica in the form of an operation to be executed, and all updates need to be performed in the same order at all replicas. As a result, a totally ordered multicast mechanism is required. There is a scalability issue in implementing such a multicast mechanism in large distributed systems. An alternative approach is to send each operation to a central coordinator (sequencer), which first assigns a sequence number to each operation and then forwards the operation to all replicas. This second approach, however, does not solve the scalability problem either.
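
The sequencer variant can be sketched as follows (hypothetical code): the coordinator assigns consecutive sequence numbers to incoming operations, and each replica applies operations strictly in sequence-number order, so all replicas execute the same totally ordered stream:

    # Hypothetical sketch of active replication with a central sequencer:
    # the coordinator assigns a global sequence number to each operation and
    # every replica applies operations in that order, yielding a total order.

    class Replica:
        def __init__(self):
            self.state = {}
            self.expected_seq = 0
            self.pending = {}                     # out-of-order operations

        def deliver(self, seq, operation):
            self.pending[seq] = operation
            # Apply operations strictly in sequence-number order.
            while self.expected_seq in self.pending:
                item, value = self.pending.pop(self.expected_seq)
                self.state[item] = value
                self.expected_seq += 1

    class Sequencer:
        def __init__(self, replicas):
            self.replicas = replicas
            self.next_seq = 0

        def submit(self, operation):
            seq = self.next_seq
            self.next_seq += 1
            for replica in self.replicas:         # forward to all replicas
                replica.deliver(seq, operation)

    replicas = [Replica(), Replica(), Replica()]
    sequencer = Sequencer(replicas)
    sequencer.submit(("x", 1))
    sequencer.submit(("x", 2))
    print([r.state["x"] for r in replicas])       # [2, 2, 2]: same order everywhere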

Quorum-based protocols[15]

Voting is another approach to replicated-write protocols. In this approach, a client requests and receives permission from multiple servers in order to read or write replicated data. As an example, suppose that in a distributed file system a file is replicated on N servers. To update a file, a client must send a request to at least N/2 + 1 servers and obtain their agreement to perform the update. After the agreement, the changes are applied to the file and a new version number is assigned to the updated file. Similarly, to read a replicated file, a client sends a request to N/2 + 1 servers in order to receive the associated version numbers from those servers. The read operation is completed if all received version numbers are the most recent version.
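
A version-number quorum scheme along these lines can be sketched as follows (hypothetical code, with N = 5 and quorums of size N/2 + 1 = 3): a write installs a new version on a write quorum, and because every read quorum overlaps every write quorum, a reader that keeps the highest version it receives obtains the latest update:

    import random

    # Hypothetical sketch of quorum-based replication with N = 5 servers and
    # read/write quorums of size N // 2 + 1 = 3.  A write installs a new
    # version number on a write quorum; because any read quorum overlaps any
    # write quorum, a reader that keeps the highest version it sees obtains
    # the latest committed update.

    N = 5
    QUORUM = N // 2 + 1
    servers = [{"version": 0, "value": None} for _ in range(N)]

    def write(value):
        quorum = random.sample(range(N), QUORUM)
        new_version = max(servers[i]["version"] for i in quorum) + 1
        for i in quorum:
            servers[i] = {"version": new_version, "value": value}
        return new_version

    def read():
        quorum = random.sample(range(N), QUORUM)
        replies = [servers[i] for i in quorum]
        return max(replies, key=lambda r: r["version"])   # highest version wins

    write("v1")
    write("v2")
    print(read())   # {'version': 2, 'value': 'v2'}, whichever quorum replies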

Cache-coherence protocols

In a replicated file system, a cache-coherence protocol[15] provides cache consistency, where caches are generally controlled by clients. In many approaches, cache consistency is provided by the underlying hardware; other approaches, in middleware-based distributed systems, apply software-based solutions. Cache consistency models can differ in their coherence detection strategies, which define when inconsistencies are detected. There are two approaches to detecting inconsistency: static and dynamic. In the static solution, a compiler determines which variables can cause cache inconsistency and inserts instructions to avoid the problem. In the dynamic solution, the server checks at run time for cached data that has changed since it was cached.

The coherence enforcement strategy is another aspect of a cache-coherence protocol: it defines how caches are kept consistent with the copies located on the server. One way to keep the data consistent is never to cache shared data: the server keeps the data and applies a consistency protocol, such as a primary-based protocol, to ensure the consistency of shared data, and clients cache only private data. If shared data are cached, there are two approaches to enforcing cache coherence. In the first approach, when shared data are updated, the server forwards an invalidation to all caches. In the second approach, the update itself is propagated. Most caching systems apply one of these two approaches or dynamically choose between them.
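
The two enforcement approaches can be sketched as follows (hypothetical code): on each write, the server either invalidates the clients' cached copies, forcing a later re-fetch, or pushes the updated value to them:

    # Hypothetical sketch of the two coherence-enforcement approaches: on a
    # write, the server either invalidates client caches (forcing a re-fetch)
    # or propagates the updated value to them.

    class Server:
        def __init__(self, mode):
            self.mode = mode                 # "invalidate" or "update"
            self.data = {}
            self.client_caches = []

        def register(self, cache):
            self.client_caches.append(cache)

        def write(self, item, value):
            self.data[item] = value
            for cache in self.client_caches:
                if self.mode == "invalidate":
                    cache.pop(item, None)    # client must re-fetch later
                else:
                    cache[item] = value      # push the new value

        def read(self, cache, item):
            if item not in cache:            # cache miss: fetch from server
                cache[item] = self.data[item]
            return cache[item]

    for mode in ("invalidate", "update"):
        server = Server(mode)
        cache = {}
        server.register(cache)
        server.write("x", 1)
        print(mode, server.read(cache, "x"))   # both return 1; they differ in
                                               # when the coherence traffic occurs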

References

  1.
  2.
  3.
  4. Lamport, Leslie (1979). "How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs". IEEE Transactions on Computers C-28 (9): 690–691.
  5.
  6.
  7. Lipton, Richard J.; Sandberg, Jonathan S. (1988). "PRAM: A Scalable Shared Memory". Technical report, Princeton University, Department of Computer Science.
  8.
  9. Hutto, Phillip W.; Ahamad, Mustaque (1990). "Slow memory: weakening consistency to enhance concurrency in distributed shared memories". Proceedings of the 10th International Conference on Distributed Computing Systems (ICDCS).
  10.
  11.
  12.
  13.
  14. Adve, Sarita V.; Gharachorloo, Kourosh (1996). "Shared Memory Consistency Models: A Tutorial". IEEE Computer 29 (12): 66–76.
  15. Tanenbaum, Andrew S.; Van Steen, Maarten (2007). Distributed Systems: Principles and Paradigms (2nd ed.). Prentice Hall.
  16.
  17. Yu, Haifeng; Vahdat, Amin (2000). "Design and Evaluation of a Continuous Consistency Model for Replicated Services". Proceedings of the 4th Symposium on Operating Systems Design and Implementation (OSDI).
