Learning Resource

Data Security, Privacy, Availability, and Integrity in Cloud Computing: Issues and Current Solutions

By Sultan Aldossary and William Allen

Abstract

Cloud computing has changed the world around us. Since data is getting bigger and needs to be accessible from many devices, moving digital information to the cloud has become the norm. However, there are many issues to manage related to cloud storage, including those that stem from the use of virtual machines, which are the means of sharing resources in a cloud environment. In this paper, we present the issues that are preventing people from adopting the cloud and survey solutions that have been proposed to minimize the associated risks. For example, data stored in the cloud needs to remain confidential, available, and intact. Moreover, sharing data stored in the cloud among many users is still an issue, since the cloud service provider cannot be trusted to manage authentication and authorization. In this paper, we discuss these issues related to data stored in the cloud and present solutions, with coverage that differs from other surveys.

Key Terms: data security, data confidentiality, data privacy, cloud computing, cloud security

Introduction

Cloud computing is everywhere. In many cases, users access the cloud without knowing they are using it. According to Subashini and Kavitha [1], small and medium organizations will move to cloud computing because it supports fast access to their applications and reduces the cost of infrastructure. Cloud computing is not only a technical solution but also a business model demonstrating that computing power can be sold and rented. Cloud computing is focused on delivering services, and organizational data is increasingly hosted in the cloud. Ownership of data is decreasing, while agility and responsiveness are increasing. Organizations now try to avoid focusing on IT infrastructure, instead planning their business processes to increase profitability. Therefore, the importance of cloud computing is increasing: it is becoming a huge market and receiving much attention from the academic and industrial communities.

Cloud computing was defined by the US National Institute of Standards and Technology (NIST) [2] as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." The schematic definition of cloud computing can be simple: it comprises a data center, resources shared using virtualization, elasticity, and on-demand and instant service, billed as a utility. This cloud model is composed of five essential characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service), three service models (IaaS, PaaS, and SaaS), and four deployment models (public, private, community, and hybrid). With this technology, users outsource their data to a server outside their premises, which is run by a cloud provider, according to Zhou and Huang [4]. In addition, Kumar and Lu have found that memory, processors, bandwidth, and storage are virtualized and can be accessed by a client using the Internet [5]. Cloud computing is composed of many technologies, including service-oriented architecture, virtualization, and Web 2.0. There are many security issues with cloud computing. However, the cloud is essential for organizations because it satisfies the need for abundant resources during periods of high demand. In addition, cloud computing offers highly efficient data retrieval and availability, and cloud providers take responsibility for resource optimization.

Characteristics of Cloud Computing

There are five characteristics of cloud computing. The first is on-demand self-service, where a consumer of services is provided the needed resources by interacting with a cloud provider, without human intervention. The second characteristic is broad network access, which means resources can be accessed from anywhere through a standard mechanism by thin or thick client platforms, such as mobile phones, laptops, and desktop computers. Another characteristic is resource pooling, which means resources are pooled so that multiple tenants can share them. In the multitenant model, resources are assigned dynamically to a consumer and can be reassigned after the first consumer is finished with them, to respond to high resource demand. Even though consumers are assigned resources on demand, they do not know the exact location of those resources, sometimes knowing only the country, state, or data center. Storage, processing, memory, and network are the kinds of resources that are assigned. Rapid elasticity, the fourth characteristic, means that resources are dynamically increased when needed and decreased when there is no need. Measured service is the final characteristic of cloud computing, used to monitor how much is consumed. The cloud provider needs measured service to know how much the consumer has used for billing purposes.

Service Models

According to Mell and Grance [2], there are three models—software-based, platform-based, and infrastructure-based—which differ in the capabilities they offer consumers. Each service-oriented cloud computing architecture model differs from the traditional model by having more service components managed by a cloud service provider, rather than by the user.

Software as a Service (SaaS)

In this service, the cloud service provider provides software and the cloud infrastructure to clients so they can use the software on the cloud infrastructure for their applications. Since clients can only run and use the software, they do not have control over the underlying infrastructure and physical settings of the cloud, such as the network, operating system, and storage. The cloud service provider is responsible for these service components and is the only one in charge of controlling the underlying physical settings, without client intervention. Clients access this software as a thin client through a web browser.

Platform as a Service (PaaS)

This service is similar to SaaS in that the infrastructure is controlled by the cloud service provider, but differs in that users can deploy their own software. In this model, clients can install and deploy their customized applications using the tools offered by the cloud service provider. Physical settings are controlled and restricted by the cloud service provider, while application settings are given to each user to control.

Infrastructure as a Service (IaaS)

In this service, computing resources such as processing, storage, and networks can be provisioned. Clients of IaaS can install any arbitrary operating system and can install and deploy their applications on that operating system. Cloud services such as Amazon EC2 adopt this model and charge their clients according to the resources being used. A hypothetical provisioning call is sketched below.
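As an illustration of the on-demand provisioning that IaaS enables, the sketch below uses the AWS boto3 SDK; valid credentials are assumed, and the machine image ID is a placeholder, not a real AMI.

```python
# pip install boto3 -- AWS SDK for Python (assumed for this illustration)
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision one virtual machine; the image ID below is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image
    InstanceType="t2.micro",           # billed according to resources used
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```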

Deployment Models

Cloud deployment models have been discussed in the literature [8]–[15]. There are four deployment models mentioned in Mell and Grance [2] as follows:

Private Cloud

In this model, the cloud provider provides cloud infrastructure to a single organization that has many consumers. This infrastructure is to be used exclusively for that organization's needs. The owner, manager, and operator of this cloud could be the organization itself, a third party, or the organization and a third party together. This private cloud can be on premises or off premises.

Community Cloud

In this model, the cloud provider provides cloud infrastructure to many organizations that form a community sharing a mission, security requirements, compliance considerations, or policy. This infrastructure is to be used exclusively for their needs. The owner, manager, and operator of this cloud could be one of the organizations, a third party, or an organization and a third party together. This community cloud can be on premises or off premises.

Public Cloud

This model differs from the previous models in that it is open to the public; it is not private and not exclusive to a community. In this model, a public cloud can be provisioned for public use. The owner, manager, and operator of this cloud could be a government, a private organization, or a business or academic organization, and sometimes several of them share a single cloud and get service from the same provider.

Hybrid Cloud

This model comprises two or more deployment models (private, community, or public). The cloud infrastructure can be a combination of those models. Data centers within an organization, private clouds, and public clouds can be combined to get services and data from both, thereby creating a well-managed and unified computing environment. A cloud can be considered a hybrid if the data moves from a data center to a private cloud or public cloud, or vice versa.

Cloud Security Issues

Even with the many benefits of cloud computing, users are reluctant to adopt this technology and move from conventional computing to cloud computing, according to Zhou and Huang [4]. In cloud computing, security is a broad topic: it comprises a mix of technologies, controls to safeguard data, policies to protect data, services, and infrastructure. This combination is a possible target of attacks. Therefore, there are security requirements necessary in the cloud that are absent in traditional environments. Traditional security architecture is broken because the customer no longer owns the infrastructure. Also, Lori [16] finds that protection in a cloud-based security system is only equal to the security of its weakest entity. By outsourcing, users lose physical control over data stored on a remote server and delegate that control to an untrusted cloud provider or party [17], [18]. Despite the use of servers that are powerful and reliable compared to client machines, there are many threats to cloud storage, not only from outsiders but also from insiders who can exploit cloud vulnerabilities to do harm [19]. These threats may jeopardize data confidentiality, data integrity, and data availability. Some untrusted providers could hide data breaches to save their reputation, or free space by deleting less-used or less-accessed data [20].

Top Threats to Cloud Computing

Cloud computing faces many threats: data loss, data breaches, malicious insiders, insecure interfaces and APIs, account or service hijacking, data location, and denial of service.

Data Loss

Many companies outsource their entire data management to cloud service providers. Despite the low costs the cloud offers, customers should make sure not to expose their important data to the many risks of compromise associated with cloud computing. Data can be lost through malicious attacks, server crashes, or unintentional deletion by the provider when no backups exist. Catastrophic events like earthquakes and fires can also cause data loss, as can any event that harms the encryption keys [21]. To avoid losing data, the CSA proposes several solutions [22]:

  • Use a strong API for access control.
  • While the data is in transit, encrypt and protect its integrity.
  • Analyze data protection at run time and design time.
  • Require the service provider to wipe the persistent media data before releasing it to the pool.

Data Breaches

A cloud environment has various users and organizations whose data reside in the same place. Any breach of this cloud environment could expose the data of all of those users and organizations [1]. With multitenancy, customers using different applications on virtual machines could share the same database, and any corruption would affect the others sharing that database [21]. Even so, SaaS providers have claimed that they provide better security for customers' data than conventional providers. Baker [23] reported that hacking and malware are the most common causes of data breaches, with 50 percent caused by hacking and 49 percent by malware.

Malicious Insiders

Malicious insiders are people who are authorized to manage data, such as database administrators or employees of the company offering cloud services [21], as well as partners and contractors who have access to the data. These people can steal or corrupt the data, sometimes because they are paid by other companies, or simply to hurt the target company. The CSA proposes several solutions [22]:

  • Conduct a comprehensive supplier assessment and enforce stricter supply chain management.
  • As part of the legal contract, define human resources requirements.
  • Make information security and all cloud service practices more transparent.
  • Create a process to notify when data breaches happen.

Insecure Interfaces and APIs

Communication between the cloud service provider and the client occurs through APIs, through which clients manage and control their data [21]. Therefore, those interfaces should be secure to prevent any unauthorized access. Weak interfaces that security mechanisms cannot defend could allow an attacker to access resources, even as a privileged user. The CSA proposes several solutions to avoid insecure interfaces and APIs [22]:

  • Analyze the security model for interfaces of the cloud provider.
  • Employ strong access control and authentication when data is transmitted.
  • Understand the dependencies associated with APIs.

Account or Service Hijacking

Users rely on passwords to access cloud service resources, so when their accounts are hijacked and their passwords stolen, the passwords can be misused and altered [21]. An unauthorized user who has a password can access clients' data and steal, alter, delete, or sell it to others. The CSA proposes several solutions to avoid account or service hijacking [22]:

  • Prevent users from sharing their credentials.
  • Use a two-factor authentication system.
  • Monitor all activities to detect unauthorized access.
  • Understand security policies and SLAs.

Data Location

Cloud providers have data centers spread over many places. Data location is an issue in cloud computing, since cloud users need to know where their data is stored. Some countries have regulations governing where companies can store their data, and some nations require companies to store their data in the home country. Data location is particularly significant when user data is stored in a location prone to wars or natural disasters.

Denial of Service

Some organizations need their systems to be available at all times because availability is critical to the services they provide. The cloud service provider offers resources that are shared among many clients. If an attacker consumes all available resources, others cannot use those resources, which leads to denial of service and can slow access. Likewise, a customer of a cloud service whose resources are conscripted into a botnet can degrade data availability for other customers of the same provider.

Multitenancy

Mell and Grance [2] did not consider multitenancy an essential characteristic of cloud computing. However, the CSA [24] and ENISA [25] consider multitenancy an important part of cloud computing. Although multitenancy arrangements offer many benefits, there is also an abundance of challenges related to having more than one tenant on one physical machine, which is required to use the cloud computing infrastructure. Since tenants are in the same place, they can attack each other. Previously, an attack had to cross between two separate physical machines; now, because two or more tenants share the same hardware, an attacker and a victim can be in the same place. Virtualization is the technology used to keep tenants separate from each other by providing a boundary around each tenant. However, virtualization itself poses several security issues.

Virtualization Security Issues

Virtualization is an important component of cloud computing that is now getting more attention from the academic and industrial communities. Virtualization means separating the underlying hardware resources from the resources provided. By using virtualization, two or more operating systems can run on a single machine, each with its own resources.

Cross Virtual Machine (VM) Side-Channel Attacks

This attack requires the attacker to be in another virtual machine on the same physical hardware as the victim. In this attack, the attacker and victim use the same processor and the same cache. By alternating execution with the victim's VM, the attacker can obtain some information about the victim's behavior. Ristenpart et al. [27] give an example of a VM side-channel attack and explain how the attacker can infer information about a victim. The timing side-channel attack is one kind of VM side-channel attack [28]. This attack involves measuring the time needed by various computations; those measurements can leak sensitive information [28] to the party performing the computation, or sometimes to parties outside the cloud provider itself. This attack is hard to detect because, for privacy reasons, the owner of a VM cannot inspect other VMs. Sometimes cloud providers can detect a side-channel attack but do not announce it, to protect their reputation. A separate type of side-channel attack is the energy-consumption side-channel attack [29].

VM Image Sharing

A VM can be instantiated from a VM image. A shared image repository can be used to share VM images, or a user can have his own VM image [30]. Malicious users could take advantage of such a repository to inject code inside a VM image [31]. This injection can lead to serious problems because, for example, a VM image may contain malware. Also, if an image is returned to the repository without being properly cleaned, sensitive data could be leaked [30].

VM Isolation

Since VMs run on the same hardware, they share all of its components, including the processor, memory, and storage. Isolating VMs logically to prevent one user from interfering with another is not enough, since they share computation, memory, and storage, so data may leak while in computation, memory, or storage. To counteract this serious issue, isolation should be applied at both the VM and hardware levels [32].

VM Escape

It is possible for VMs or a malicious user to escape the supervision of the virtual machine monitor (VMM) [33]. The VMM controls all VMs and how they use the underlying resources, such as hardware. One of the most serious scenarios is that malicious code escapes the VMM unnoticed and interferes with the hypervisor or other guests [31].

VM Migration

The VM migration process suspends the running VM, copies its state from the source virtual machine monitor (VMM) to the destination VMM, and resumes the VM at the destination [11], [34]. Zhang and Chen [35] define VM migration as moving a VM from one physical machine to another while it is running, without shutting it down. Fault tolerance, load balancing, and maintenance are some of the reasons for VM migration [30], [36]. The data and code of a VM [35] are exposed while being transferred over the network between two physical machines, where they are vulnerable to an attacker. An attacker could also cause a VM to migrate to a vulnerable server in order to compromise it. When attackers compromise the VMM, they can take a VM from one data center and migrate it to another, and can then access all resources as a legitimate VM [37]. This process needs to be secured [30] to prevent attackers from benefiting.

VM Rollback

VM rollback entails reverting a VM to a previous state. While this process gives the user more flexibility, it also brings more security issues. For example, a VM could be rolled back to a previously vulnerable state that has not been fixed [38], or to an old security policy or old configuration [30]. A user account that was disabled could also regain access when the owner of the VM rolls back to a state in which the account was still active [30].

Hypervisor Issues

The hypervisor, or virtual machine monitor, is the main component of virtualization. The virtual machine monitor is responsible for managing VMs and isolating them from one another. The VMM is the intermediary between the hardware and the VMs, so it is responsible for provisioning, managing, and assigning resources. A hypervisor with full control of the hardware can access a VM's memory [39]. Jin et al. [39] propose a hardware-based solution to protect a VM's memory pages from a malicious hypervisor.

Data Integrity Issues

Data stored in the cloud can suffer damage while in transit to or from cloud data storage. Since data and computation are outsourced to a remote server, data integrity should be maintained and checked constantly to prove that both are intact. Data integrity means data should be protected from unauthorized modification, and any modification should be detected. Computation integrity means that program execution should proceed as expected and be protected from malware, insiders, or malicious users who could change the execution and render an incorrect result; any deviation from normal computation should be detected. Integrity should therefore be checked at both the data level and the computation level. Data integrity checks can help in recovering lost data or in detecting data manipulation. The following are two examples of how data integrity can be violated.

Data Loss or Manipulation

Because users have huge numbers of files, cloud providers offer storage as a service. Since the data is outsourced to a remote cloud that is unsecured, unreliable, and untrusted, data might be lost or modified by unauthorized users, intentionally or accidentally. Administrative errors, such as restoring from an incorrect backup, can also result in lost data. An attacker could also use the user's outsourced data once the user has lost control over it.

Untrusted Remote Server Performing Computation on Behalf of User

Cloud computing is not just about storage. There are also intensive computations that need cloud processing power to perform their tasks; therefore, users outsource their computations. Since the cloud provider is outside the user's security boundary and is not transparent to the owner of the tasks, no one can prove whether computation integrity is intact. Sometimes the cloud provider behaves in such a way that no one can discover a deviation of computation from normal execution. Because resources have value to the cloud provider, the provider might not execute the task properly. Even if the cloud provider is considered more secure, many security challenges remain, such as those stemming from the provider's underlying systems, vulnerable code, and misconfiguration.

Protecting Data Integrity

Tenants of cloud systems commonly assume that if their data is encrypted before being outsourced to the cloud, it is secure enough. Although encryption provides confidentiality against attacks from a cloud provider, it does not protect that data from corruption caused by configuration errors and software bugs. There are two traditional ways of proving the integrity of data outsourced to a remote server, and both can be carried out by the client or a third party.

The first method is to download the file and check its hash value using a message authentication code (MAC) algorithm. A MAC algorithm takes two inputs—a secret key and a variable-length piece of data—and produces one output, a MAC (tag). The algorithm is run on the client side: after computing the MAC, the data owner outsources the data to the cloud. To check its integrity, the data owner downloads the outsourced data, calculates its MAC, and compares it with the MAC calculated before outsourcing. This method detects both accidental and intentional changes. Through use of the key, the authenticity of the data is protected, and only the holder of the key can check the data's authenticity and integrity. However, downloading a large file and recalculating its MAC is time-consuming and impractical, since it consumes considerable bandwidth.
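As a minimal sketch of this first method, the example below uses HMAC-SHA256 as the MAC algorithm; the choice of algorithm and all names are illustrative, not prescribed by the method itself.

```python
import hashlib
import hmac

def compute_mac(key: bytes, data: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the data with a secret key."""
    return hmac.new(key, data, hashlib.sha256).digest()

# Before outsourcing: the owner computes a tag and keeps key and tag locally.
key = b"owner-secret-key"                 # never uploaded to the cloud
original = b"file contents to be outsourced"
stored_tag = compute_mac(key, original)

# Later: download the entire file and verify its integrity.
downloaded = original                     # in practice, fetched from the cloud
intact = hmac.compare_digest(stored_tag, compute_mac(key, downloaded))
print("integrity intact:", intact)
```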

The second method is to compute the hash value in the cloud by using a hash tree. In this technique, the hash tree is built from the bottom up: the leaves are hashes of the data blocks, and parent nodes are hashes of their children, up to the root. The owner of the data stores only the root. When the owner needs to check his data, he asks for just the root value and compares it with the one he holds. This method is somewhat impractical because computing the hash values of a huge number of blocks consumes considerable computation. Sometimes, when the provided service is storage only, without computation, the user must download the file—as in the first method—or send it to a third party, which consumes more bandwidth. Therefore, there is a need for a way to check data integrity while saving bandwidth and computation power. Remote data auditing, whereby the integrity or correctness of remotely stored data is verified, has therefore received more attention recently [40]–[45].
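A compact sketch of the hash tree idea follows, assuming SHA-256 and a simple binary Merkle tree (our choices for illustration):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list) -> bytes:
    """Build the hash tree bottom-up and return the root the owner keeps."""
    level = [h(b) for b in blocks]           # leaves: hashes of data blocks
    while len(level) > 1:
        if len(level) % 2:                   # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])  # parents hash their children
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
owner_root = merkle_root(blocks)             # the only value the owner stores

# To audit, the server recomputes the root over the stored blocks and the
# owner compares it with the locally kept root.
print("match:", owner_root == merkle_root(blocks))
```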

Third Party Auditor

A third party auditor (TPA) is a person who has the skills and experience to carry out auditing processes, and a TPA scheme is used for checking data integrity. Since there are many security risks and suspicious actions, users of cloud storage depend on third party auditors [46]. Balusamy et al. [47] proposed a framework that involves the data owner in checking the integrity of outsourced data.

In the proposed scheme, the data owner is aware of all his resources on the cloud, and the scheme involves the data owner in the auditing process. First, the TPA carries out the normal auditing process. Once any modification to the data is discovered, the owner is notified of the changes and checks the logs of the auditing process to validate them. If owners suspect that unusual actions have affected their data, they can check the data themselves or through another auditor they assign. The owner is therefore always tracking modifications to his own data. A threshold is assigned that the third party auditor's response time should not exceed: the data owner validates all modifications reported within the threshold, and if the response time exceeds it, the data owner is expected to perform a surprise audit.

Provable Data Possession

Ateniese et al. [41] proposed the first provable data possession (PDP) scheme, which statically verifies the correctness of data outsourced to cloud storage without retrieving it. The proposed model checks that the remote server still possesses the original data without the client having to retrieve it. The model is based on probabilistic proofs obtained by randomly choosing a set of blocks from the server to prove possession. They used RSA-based homomorphic verifiable tags, which can be combined into a proof that the server possesses specific blocks, regardless of whether the client has access to those blocks. Despite the advantages this scheme offers, it does not handle dynamic data storage, and it imposes computation and communication overhead on the server because RSA is applied over the entire file. When the prover is untrusted or malicious, this scheme fails to prove data possession [7].

Ateniese et al. [42] overcame the limitations of the 2007 Ateniese et al. study [41]. Using symmetric cryptography, they proposed a PDP scheme that supports partial and dynamic verification. The limitation of this proposal is that it does not support auditability.

Since PDP schemes check only parts of a file for integrity, there is a need to correct blocks when they suffer corruption due to hardware issues. Ateniese et al. [48] propose a scheme that proves data possession using forward error correction (FEC): the file is first encoded using FEC, and the encoded file is then used by a PDP scheme. This method helps to find corruption and mitigate it.

Wang et al. [44] propose a new dynamic PDP scheme for auditing remote dynamic data. They use the Merkle hash tree (MHT) and the bilinear aggregate signature, modifying the MHT structure by sorting the leaf nodes from left to right. This sorting helps identify the location of an update. However, the method incurs more computation overhead when the file is large.

Sookhak et al. [49] propose a new method for dynamic remote data auditing that uses algebraic signatures and a new data structure called a divide and conquer table (DCT). DCTs keep track of the data across append, update, insert, and delete operations, thereby avoiding the need to download the file to check its integrity.

Proof of Retrievability

PDP differs from proof of retrievability in that PDP only detects when corruption happens to a large amount of data [50]. PDP protocols can be verified publicly or privately: in a privately verifiable protocol, only the owner of the key can verify the encoded data, while in a publicly verifiable protocol, data integrity can be verified or audited by a third party. Proof of retrievability is a cryptographic approach based on a challenge-response protocol in which a piece of data is proved to be intact and retrievable without retrieving it from the cloud. The simplest form of proof of retrievability takes the hash of blocks using a keyed hash function. The data owner computes hash values of the file with keyed hash functions, keeps the keys and hash values, and sends the file to a remote server. When data owners need to check retrievability, they send a key and ask the server to return the corresponding hash value, which they compare with the value they hold. The advantage of this solution is that it is simple and implementable. However, there are many disadvantages, including the need to store many keys and use each only once: the number of times a data owner can check is limited by the number of keys, since the remote server could otherwise store previously seen keys and hash values and replay them when asked to prove it has the file. In addition, the scheme consumes client and server resources, since the hash values must be calculated each time a check is required. Moreover, some clients, such as mobile devices and PDAs, do not have the resources to calculate the hash values of big files.
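A sketch of this simple keyed-hash variant follows, with HMAC-SHA256 standing in for the keyed hash function (our assumption); note how the number of precomputed pairs bounds the number of possible audits:

```python
import hashlib
import hmac
import os

def precompute_challenges(data: bytes, n: int):
    """Owner precomputes n one-time (key, digest) pairs before outsourcing."""
    pairs = []
    for _ in range(n):
        key = os.urandom(16)                 # fresh key per future challenge
        pairs.append((key, hmac.new(key, data, hashlib.sha256).digest()))
    return pairs

file_data = b"outsourced file contents"
challenges = precompute_challenges(file_data, n=3)   # only 3 audits possible

# One audit: reveal a fresh key; the server hashes its stored copy with it.
key, expected = challenges.pop()
server_reply = hmac.new(key, file_data, hashlib.sha256).digest()
print("retrievable:", hmac.compare_digest(expected, server_reply))
```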

Juels and Kaliski, Jr. [50] used an error-correcting code and spot checking to prove the possession and retrievability of data. The verifier hides sentinels among the file blocks before sending them to the remote server. When the verifier wants to check the retrievability of the data, it asks the server only for those sentinels. To keep the sentinels indistinguishable to the remote server, the data owner encrypts the file after adding them. In contrast to the simple scheme, this one uses a single key regardless of the size of the file. And unlike the simple solution, where the entire file is processed, this method requires access to only parts of the file, requiring fewer I/O operations. Its disadvantages include that the file must be stored in encrypted form, so the process incurs computation overhead on clients such as mobile devices and PDAs.
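The sentinel idea can be sketched as below; a seeded PRNG stands in for the scheme's pseudorandom choices (our assumption), and the encryption step that makes sentinels indistinguishable is noted but omitted:

```python
import os
import random

BLOCK = 16  # bytes per block

def embed_sentinels(data_blocks, n_sentinels, seed):
    """Scatter random sentinel blocks at secret PRNG-chosen positions."""
    rng = random.Random(seed)
    total = len(data_blocks) + n_sentinels
    positions = set(rng.sample(range(total), n_sentinels))
    stored, secret, data = [], {}, iter(data_blocks)
    for i in range(total):
        if i in positions:
            sentinel = os.urandom(BLOCK)
            secret[i] = sentinel      # owner keeps position -> sentinel value
            stored.append(sentinel)
        else:
            stored.append(next(data))
    return stored, secret             # 'stored' would then be encrypted

file_blocks = [os.urandom(BLOCK) for _ in range(8)]
stored, secret = embed_sentinels(file_blocks, n_sentinels=2, seed=2024)

# Audit: request only the blocks at the secret positions and compare.
print("sentinels intact:",
      all(stored[pos] == val for pos, val in secret.items()))
```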

Proof of Ownership

With proof of ownership, a client proves to the server that he owns a file. This method differs from POR and PDP in that those schemes embed a secret in the file before outsourcing it, so the client can check whether the file is on the cloud server by asking for the secret and comparing it with what he holds. Proof of ownership instead arises from the need to save storage through deduplication: the owner of a file needs to prove to the server that he owns that file.

Halevi et al. [51] introduced the idea of proof of ownership. The techniques behind proving ownership are collision-resistant hash functions and the Merkle hash tree. The owner of a file creates a Merkle hash tree (MHT) and sends the file to the cloud, which acts as the verifier. Once the file is received by the cloud, it is divided into blocks using a pairwise independent hash, and the verifier then creates an MHT for the file. When the prover claims ownership of the file, the verifier sends a challenge consisting of the root and the number of leaves. The prover calculates the sibling path and returns it to the verifier as proof of ownership. The verifier, after receiving the sibling path, checks it against its own MHT and validates the prover. However, this method violates the privacy of users, since their sensitive data is leaked to the remote server—an issue that is not addressed by Halevi et al. [51]. Therefore, there has to be a way to prevent the remote server from accessing outsourced data and building a user profile [52].
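The sibling-path check at the heart of this protocol can be sketched as follows (SHA-256 and the tree layout are our illustrative choices):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Return every level of a binary Merkle tree, leaves first."""
    levels = [[h(b) for b in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]            # duplicate last node if odd
        levels.append([h(lvl[i] + lvl[i + 1])
                       for i in range(0, len(lvl), 2)])
    return levels

def sibling_path(levels, index):
    """Collect the sibling hash at each level for one leaf (the proof)."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        sib = index ^ 1                      # sibling of node i is i XOR 1
        path.append((lvl[sib], sib < index)) # (hash, sibling-is-left flag)
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from a claimed leaf and its sibling path."""
    node = h(leaf)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

blocks = [b"b0", b"b1", b"b2", b"b3"]
levels = build_tree(blocks)
root = levels[-1][0]                         # held by the verifier (cloud)
proof = sibling_path(levels, 2)              # prover's response for leaf 2
print("ownership check passes:", verify(b"b2", proof, root))
```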

Data Availability

Fawaz et al. [53] developed a storage architecture that covers security, reliability, and availability. The underlying technique of their proposed architecture is a storage method based on RAID 10. They used three service providers, striping the data across two of them and storing the parity bits with the third. They followed a sequential method for storing the data after encrypting it and dividing the ciphertext into blocks: one block goes to one provider's storage, the next block to the next provider's storage, and the parity bit to the third. A parity bit can be in any provider's storage, with the data blocks in the remaining providers' storage. If two providers collude to combine the data each one holds, the encryption still protects the data from unauthorized access. If one provider's service is disrupted, the service remains available, since the data can be rebuilt from the parity bits and the available provider; the scenario is the same if one provider corrupts the data. Any number of service providers can be used in this storage architecture.
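The parity mechanism can be illustrated with XOR, as in RAID (a simplified sketch; the actual architecture encrypts first and rotates the parity placement across providers):

```python
import os

def stripe_with_parity(block_a: bytes, block_b: bytes):
    """Two ciphertext blocks plus their XOR parity, one per provider."""
    parity = bytes(x ^ y for x, y in zip(block_a, block_b))
    return block_a, block_b, parity          # providers 1, 2, and 3

def rebuild(surviving_block: bytes, parity: bytes) -> bytes:
    """Recover a lost block from the surviving block and the parity."""
    return bytes(x ^ y for x, y in zip(surviving_block, parity))

a, b = os.urandom(16), os.urandom(16)        # encrypted data blocks
p1, p2, p3 = stripe_with_parity(a, b)

# Suppose provider 1 is unavailable or has corrupted its block:
recovered = rebuild(p2, p3)
print("block recovered:", recovered == a)
```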

In Bowers et al. [54], HAIL (high-availability and integrity layer) is designed to address the threat of a service provider becoming unavailable. HAIL distributes data across many cloud providers to keep the service available at all times, leveraging multiple cloud service providers to build a reliable, cost-effective solution out of unreliable components. The idea behind HAIL is inspired by RAID, which builds reliable storage from unreliable storage. HAIL operates when there is corruption: rather than merely detecting corruption, it remedies corruption in a subset of storage providers by using the data held by the other providers.

Bessani et al. [55] proposed DepSky, which uses many clouds to build a cloud-of-clouds that addresses two security requirements of their storage system: confidentiality and availability of data. They combined a Byzantine quorum protocol with secret sharing and erasure codes.

Data Confidentiality Issues

Usually, data is encrypted before it is outsourced, so the service provider receives encrypted data. The client is therefore responsible for handling the access control policy, encrypting the data, decrypting it, and managing the cryptographic keys [56]. However, when the data is shared among many users, there has to be more flexibility in the encryption process to handle members of the group, manage keys between users, and enforce the access control policy that protects data confidentiality [57]. Sharing data among a group of users places a greater burden on the owner of the outsourced data.

Chu et al. [59] describe a cryptosystem in which the data owner encrypts the data using his public key and an identifier, called a class, in the encryption process. The owner holds a master key that can generate secret keys for classes of data and classes of ciphertext. Once users receive their aggregate key, they can decrypt only the classes of ciphertext the key was created for: each part of the aggregate key can decrypt part of the ciphertext, and the whole key can decrypt the whole ciphertext. This cryptosystem therefore supports sharing data among a group of users with fine-grained access control, without giving them a key that can decrypt all of the data.

Access Control

When data is outsourced to the cloud, which is untrusted because it is in a domain where security is not managed by the data owner, data security has to be given more attention. When more than one entity wants to share data, there has to be a mechanism to restrict who can access that data. Many techniques have been proposed in the literature to keep data content confidential and to prevent unauthorized entities from accessing or disclosing the data, while permitting many authorized entities to share it. The following are some of these techniques.

Public Key Encryption

Public key encryption encrypts the data using a public key; only the owner of the corresponding private key can decrypt it. Several issues make this method difficult to apply in the cloud when many people need to access the same files.

Sana et al. [60] proposed a lightweight encryption algorithm that uses symmetric encryption, for its performance, to encrypt files, and asymmetric encryption, for its security, to distribute keys. However, this method has several disadvantages, including key-management issues and the lack of fine-grained access to files (i.e., access to only part of a file). The solution is also neither flexible nor scalable, because re-encryption and decryption are needed whenever a user leaves the group, to prevent that user from continuing to access the data. The key generation and encryption process is shown in the figure below, and a sketch of the hybrid pattern follows it.

Schematic showing the process of public key encryption: (1) the public key infrastructure (PKI) creates a public and private key for a user, Alice; (2) Bob asks the PKI for Alice's public key; (3) the PKI signs the key and sends it to Bob; (4) Bob encrypts the message using Alice's public key; (5) the encrypted message is decrypted using Alice's private key.
Public Key Encryption
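As a rough sketch of the hybrid pattern described above—symmetric encryption for the bulk data, asymmetric encryption to distribute the symmetric key—the example below assumes the third-party Python cryptography package (our choice; the paper does not name a library):

```python
# pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Alice's key pair; the public key is what Bob obtains through the PKI.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Bob: encrypt the file with a fast symmetric key, then wrap that key
# with Alice's public key for distribution.
file_key = Fernet.generate_key()
ciphertext = Fernet(file_key).encrypt(b"shared file contents")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(file_key, oaep)

# Alice: unwrap the symmetric key with her private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```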

Identity-Based Encryption (IBE)

Shamir [61] introduced identity-based encryption. The owner of data encrypts it by specifying the identity of the entity authorized to decrypt it; the decryptor's identity must match the one specified by the owner. Therefore, there is no key exchange. The encryption process is shown in the figure below.

Schematic showing the process of identity-based encryption (IBE): Bob encrypts a message for Alice using Alice's identity. At a private key generator (PKG), (1) a setup algorithm generates a master key; (2) Alice shows and proves her identity (e.g., Alice@Alice.com) to the PKG; and (3) given the proven identity and the master key, a key generation algorithm generates a private key for Alice. Alice then uses her private key to decrypt the message.
Identity-Based Encryption

Attribute-Based Encryption (ABE)

In attribute-based encryption, the identity of a user is determined by a set of attributes, from which the secret key is generated. The scheme also defines the access structure used for access control, encrypting data for confidentiality and sharing it among a group of users. In effect, this method integrates encryption with access control.

In Sahai and Waters [62], attribute-based encryption, known there as fuzzy identity-based encryption, was proposed a few years after IBE. In this scheme, a set of attributes identifies a user. The data owner encrypts the data, and only those whose attributes overlap sufficiently with the attributes specified in the ciphertext can decrypt it. The key generation process is shown in the first figure below, and the encryption and decryption algorithm in the figure after it.

Schematic showing the key generation process of attribute-based encryption (ABE): at a private key generator, (1) a setup algorithm generates a master key; (2) Alice's fuzzy identity (w) is determined, consisting of attribute 1 through attribute n; and (3) with the master key and identity (w), a key generation algorithm generates a private key for Alice.
Attribute-Based Encryption
Schematic showing encryption and decryption in attribute-based encryption (ABE): X encrypts the message under Bob's identity (w) and sends it to Bob, who decrypts it with his identity. Alice can also decrypt the message if her identity intersects w in at least d attributes, where d is the required overlap threshold.
Encryption and Decryption in Attribute-Based Encryption

Key Policy Attribute-Based Encryption (KP-ABE)

In Goyal et al. [63], key policy attribute-based encryption was proposed. This method takes a more general approach than ABE because it can express more conditions than simple attribute matching, enforcing finer control. In this mechanism, the ciphertext is linked to a set of attributes, while the private key is linked to a monotonic access structure, based on a tree, that specifies the identity of the user. When a user's private key has attributes that satisfy the attributes in the ciphertext, the user can decrypt the ciphertext. The key generation process is shown in the first figure below, and the encryption and decryption algorithm in the figure after it. A disadvantage of this method is that the decryptor must trust the key generator to issue keys to the appropriate person with the right access structure. If the data needs to be re-encrypted, new private keys have to be issued to maintain access to that data; hence the need to associate the policy with the key. This technique also does not support nonmonotonic access structures, which express negative attributes such as "not."

Ostrovsky et al. [64] propose a scheme that works with nonmonotonic access structures, supporting both positive and negative attributes. However, this scheme increases the size of the ciphertext and the key, and there is an added cost in the time needed for encryption and decryption. In KP-ABE, the size of the ciphertext increases linearly with the number of associated attributes.

In Attrapadung et al. [65], a scheme is proposed that yields a constant ciphertext size regardless of the number of attributes and that supports nonmonotonic access structures; however, the key size is quadratic in the number of attributes. To overcome these disadvantages, ciphertext policy attribute-based encryption was proposed, although CP-ABE costs more than KP-ABE [66].

Schematic showing key generation in key policy attribute-based encryption: at a private key generator, (1) a setup algorithm generates a master key; (2) Alice's identity (w) is determined; and (3) with identity (w), a key generation algorithm generates a private key for Alice, which embeds her key policy.
Key Policy Attribute-Based Encryption Key Generation
Schematic showing KP-ABE encryption and decryption: X encrypts the message with a set of attributes (y) and sends it to Bob. Alice or Bob can decrypt the message if the access tree T in their key is satisfied by y, that is, if T(y) = 1.
KP-ABE Encryption and Decryption

Ciphertext Policy Attribute-Based Encryption (CP-ABE)

In Bethencourt et al. [67], CP-ABE was proposed. In this scheme, the access structure, which specifies the encryption policy, is associated with the ciphertext, and a user's private key is created based on his attributes. A user can decrypt the ciphertext if the attributes in his private key satisfy the access structure in the ciphertext. The benefit of attaching the access structure to the ciphertext is that the encryptor defines the encryption policy, and already-issued private keys cannot be changed unless the whole system is rebooted. The CP-ABE scheme has four functions [67], [68], and a toy sketch of this interface follows the figure below:

  1. (MK, PK) = Setup(P). A trusted authority runs this function; it takes a security parameter (P) as input and produces a master key (MK) and a public key (PK) as output.
  2. SK = KeyGeneration(A, MK). A trusted authority runs this function; it takes a set of attributes (A) and the master key (MK) as input, and its output is a secret key (SK) for the user associated with that set of attributes.
  3. CT = Encryption(M, P, PK). The data owner runs this function to encrypt his data. It takes a message (M), an access control policy (P), and the public key (PK) as inputs; its output is a ciphertext (CT) under access control policy (P). The encryption algorithm is shown in the figure below.
  4. M = Decryption(CT, SK). A decryptor who holds the ciphertext runs this function. The ciphertext under access policy (P) can be decrypted with the secret key (SK) if and only if the attribute set of the secret key satisfies the access policy of the ciphertext; its output is the original message. If the condition is not satisfied, the decryptor cannot recover the original message. The decryption algorithm is shown in the figure below.
Schematic showing CP-ABE encryption and decryption: X encrypts the message under an access tree (T) and sends it to Bob. Alice or Bob can decrypt the message if the attributes in their key satisfy the access tree associated with the encrypted message.
CP-ABE Encryption and Decryption
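Real CP-ABE rests on pairing-based cryptography; the toy sketch below only mimics the four-function interface above, with a policy check in plain code and Fernet symmetric encryption standing in for the actual ABE mathematics. Every name here is illustrative, and the construction is not secure—it merely shows the workflow.

```python
# A non-cryptographic mock of the four-function CP-ABE interface above.
import os
from cryptography.fernet import Fernet   # pip install cryptography

def setup(p: int):
    """Setup(P): returns (master key, public key) -- opaque stand-ins here."""
    return os.urandom(p), os.urandom(p)

def key_generation(attributes, mk):
    """KeyGeneration(A, MK): a 'secret key' binding a user to attributes."""
    return {"attrs": frozenset(attributes), "mk": mk}

def encryption(message, policy, pk):
    """Encryption(M, P, PK): ciphertext carrying its access policy.
    (In real CP-ABE the policy is enforced mathematically, not stored.)"""
    dek = Fernet.generate_key()
    return {"policy": frozenset(policy), "dek": dek,
            "ct": Fernet(dek).encrypt(message)}

def decryption(ciphertext, sk):
    """Decryption(CT, SK): succeeds only if the user's attributes satisfy
    the policy (modeled here as: policy is a subset of the attributes)."""
    if not ciphertext["policy"] <= sk["attrs"]:
        raise PermissionError("attributes do not satisfy the access policy")
    return Fernet(ciphertext["dek"]).decrypt(ciphertext["ct"])

mk, pk = setup(16)
alice_key = key_generation({"doctor", "cardiology"}, mk)
ct = encryption(b"patient record", {"doctor"}, pk)
print(decryption(ct, alice_key))   # Alice's attributes satisfy the policy
```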

Multicloud Computing (MCC) Issues

Cloud computing is now moving toward multicloud computing because of security issues that stem from using a single cloud, such as limited data availability. With multicloud computing, users' devices can connect to several clouds, each with its own storage. According to Cachin et al. [70], issues with multicloud computing include data availability and security. The services of single clouds are still subject to outages, and there is a fear among organizations that a single cloud will not fulfill their demands for reliability and availability. Some organizations need high availability and do not want their data to be locked in; they need a system that is always available and not under the control of a single cloud provider. It is predicted that the use of multiclouds will become a trend in the coming years.

Alzain et al. [6] discussed many security issues in a single cloud, and they promote the multicloud and its solutions to address single-cloud security issues. They posited that by using a multicloud, valuable information, such as credit card information and medical records, can be protected from untrusted third parties and malicious insiders.

Vukolić [71] said that moving from a single cloud to a multicloud distributes trust, reliability, and security among multiple cloud providers. In addition, users can avoid lock-in, since they can use another cloud to run their business rather than having to move their data.

Mahesh et al. [72] suggest encrypting data, dividing it into chunks, and storing those chunks with many cloud service providers. They maintain that this would help prevent the security issues related to the cloud.

Suganthi et al. [73] proposed a solution for protecting the privacy of the signer of data from a third party auditor during the auditing process. When an owner partitions data, signs it, and distributes it across multiple clouds to share with others, the third party could learn the identity of the signer, since the identity is needed for auditing. They therefore proposed preventing this violation of the owner's privacy by using homomorphic authenticators and aggregate signatures [73]. The aggregate signature scheme aggregates a group of signatures into one digital signature: one aggregate signature stands for n signatures on m messages from u users [74]. The benefit of this method is that the auditor can confirm that the users signed the messages without learning how each individual message was signed.

Mobile Cloud Computing

Limitations of Mobile Devices

Despite advances in mobile devices, such as greater processing power, storage, memory, sensor capability, and operating system functionality, they remain limited in the energy resources needed for complex computation. Some mobile applications are data intensive or compute intensive, and the mobile device cannot run them within the limits of its battery life; cloud computing is therefore needed to run those complex computations.

Mobile Cloud Computing

Mobile cloud computing uses mobile devices as the front end and the cloud as the back end for storage and computation. It consists of mobile computing, cloud computing, and the network that connects them.

In Ren et al. [76], three schemes are proposed to maintain the confidentiality and integrity of a mobile device's files stored in the cloud. The first is the encryption-based scheme (EnS). In this scheme, the mobile device encrypts the file and computes its hash code. Since the length of a password is limited, the encryption key is a concatenation of the password entered by the user, the filename converted to bits, and the file size, which together defend against brute-force attacks on a cloud server. Only the filename is kept on the device, and everything else related to the file is deleted. When downloading the file from the cloud server, only the password is needed to decrypt it. This practice requires more processing on the mobile device side, but the authors proved the confidentiality and integrity of a file stored on a distrusted cloud server under this scheme. To overcome the power consumption of the first scheme, a coding-based scheme is proposed that does not use an encryption function and therefore consumes less power. The confidentiality of the file is protected using matrix multiplication, and integrity is ensured using hash-based message authentication codes. The file is divided into blocks, each block into chunks, and each chunk into n bits, so that each block forms a matrix with chunks as rows and bits as columns. A code vector matrix is created from the entered password. For confidentiality, each matrix is multiplied by the code vector matrix, producing a secrecy code. To protect file integrity, all secrecy codes are concatenated and hashed, producing an integrity key, and the file is hashed with the integrity key to create a message authentication code. The third scheme is the sharing-based scheme (ShS), which applies X-OR operations to the file and needs even less computational power: a hash-based message authentication code verifies the integrity of the file, while X-OR operations protect its confidentiality.

Khan et al. [77] propose a new scheme called block-based sharing, which overcomes the limitations of the schemes proposed in Ren et al. [76] and uses X-OR operations. First, they extend the password entered by the user to match the block size: if the block size is 160 bits and the entered password is 60 bits, the password is extended from 60 to 160 bits. Second, they divide the file into blocks of the same size. They then X-OR the first block with the extended password, and X-OR the second block with the extended password after shifting each bit to the right, so that each block is X-ORed with a distinct password equal to the block size. To protect integrity, they hash the concatenation of the filename, the extended password, and the file size to get an integrity key, and then hash the file with the integrity key to get a message authentication code. Only the ciphertext, the message authentication code, and the hash of the filename go to the cloud; the hash of the filename is sent for file retrieval. This scheme results in lower energy consumption, memory use, and CPU use. A sketch of the block operations follows.
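The core block operations might look as follows (a sketch under our own assumptions: the password is stretched by repetition, the shift is modeled as a rotation so no key bits are lost, and all names and sizes are illustrative):

```python
import hashlib
import hmac

BLOCK_BITS = 160  # as in the paper's example

def extend_password(password: bytes, bits: int = BLOCK_BITS) -> int:
    """Stretch a short password to the block size (here by repetition)."""
    raw = (password * (bits // (8 * len(password)) + 1))[: bits // 8]
    return int.from_bytes(raw, "big")

def rotate_right(value: int, bits: int = BLOCK_BITS) -> int:
    """Shift each bit right, wrapping the low bit around."""
    return (value >> 1) | ((value & 1) << (bits - 1))

def xor_blocks(blocks, password: bytes):
    """X-OR each block with a distinct rotation of the extended password.
    X-OR is its own inverse, so the same call also decrypts."""
    key = extend_password(password)
    out = []
    for block in blocks:
        out.append((int.from_bytes(block, "big") ^ key).to_bytes(20, "big"))
        key = rotate_right(key)          # next block gets a shifted key
    return out

blocks = [b"A" * 20, b"B" * 20]          # two 160-bit blocks
cipher = xor_blocks(blocks, b"secret")
print("round trip:", xor_blocks(cipher, b"secret") == blocks)

# Integrity: hash(filename || extended password || file size) as the key,
# then a MAC over the file with that key.
integrity_key = hashlib.sha256(b"name.txt" + b"secret" + b"40").digest()
mac = hmac.new(integrity_key, b"".join(blocks), hashlib.sha256).hexdigest()
```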

Louk and Lim [78] combined homomorphic encryption, multicloud computing, and mobile devices. They used a multicloud scheme for storing the data, to avoid data lock-in, and used homomorphic encryption to run computations without shuttling the data back and forth between the cloud and the mobile device, avoiding communication costs. Since encryption is expensive for mobile devices, there have been useful proposals on how to avoid using it.

Bahrami et al. [79] proposed a lightweight method for data privacy in mobile cloud computing. They used JPEG files as their case study because they are common files in mobile computing. They divide the JPEG file into many splits, distribute the pieces into many files based on a predefined pattern, and randomly scramble the chunks in each split file using pseudorandom permutations based on a chaos system. Each file is then sent to the mobile clouds. For retrieval, the split files are collected from the clouds, the chunks in each split are rearranged using the chaos system, and all split files are reassembled according to the predefined pattern. They used this method because it requires little computation and works effectively in mobile computing: compared with encrypting a JPEG on the mobile device and sending it, they found their solution more efficient. Their proposed method has two goals: balancing computation overhead with security, and avoiding offloading the file to the mobile cloud for encryption by making the file meaningless before sending it. The scramble step can be sketched as below.
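A minimal sketch of the scramble and rearrange steps, with a seeded PRNG standing in for the paper's chaos-based permutation (random.Random is our assumption, not their generator):

```python
import random

def scramble(chunks, seed):
    """Permute chunks with a seeded PRNG; return the permutation order."""
    order = list(range(len(chunks)))
    random.Random(seed).shuffle(order)
    return [chunks[i] for i in order], order

def unscramble(scrambled, order):
    """Invert the permutation to rearrange chunks on retrieval."""
    restored = [None] * len(scrambled)
    for new_pos, original_pos in enumerate(order):
        restored[original_pos] = scrambled[new_pos]
    return restored

data = b"JPEG bytes ..." * 4
chunks = [data[i:i + 8] for i in range(0, len(data), 8)]
scrambled, order = scramble(chunks, seed=1234)   # seed derived from a secret
print("restored:", b"".join(unscramble(scrambled, order)) == data)
```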

Conclusion

Cloud computing is an emerging technology that will receive more attention in the future from industry and academia. Its cost is attractive compared to the cost of building one's own infrastructure. However, many security issues come with this technology, as happens as every technology matures. These issues include challenges related to the use of the Internet, networks, applications, and storage. Storing data on a remote server raises security issues concerning the confidentiality of data against unauthorized people at remote sites, the integrity of data stored on remote servers, and the availability of the data when it is needed. Sharing data in the cloud when the cloud service provider is mistrusted is also an issue, though some techniques covered in this paper can protect data from being seen by the cloud service provider while it is shared among many users. Many studies have been conducted to identify the issues that affect the confidentiality, integrity, and availability of data and to find solutions for them. Those solutions will lead to more secure cloud storage, which in turn will lead to greater acceptance of, and trust in, the cloud.

References

[1] Subashini, S., & Kavitha, V. (2011). A survey on security issues in service delivery models of cloud computing. Journal of Network and Computer Applications, 34(1), 1–11.

[2] Mell, P., & Grance, T. (2011). The NIST definition of cloud computing.

[3] Khorshed, M.T., Ali, A. S., & Wasimi, S. A. (2012). A survey on gaps, threat remediation challenges and some thoughts for proactive attack detection in cloud computing. Future Generation Computer Systems, 28(6), 833–851.

[4] Zhou, Z., & Huang, D. (2012). Efficient and secure data storage operations for mobile cloud computing. Proceedings of the 8th International Conference on Network and Service Management. International Federation for Information Processing, 37–45.

[5] Kumar, K., & Lu, Y. H. (2010). Cloud computing for mobile users: Can offloading computation save energy? Computer, 4, 51–56.

[6] Al Zain, M., Pardede, E., Soh, B., & Thom, J. (2012, January). Cloud computing security: From single to multi-clouds. System Science (HICSS), 2012 45th Hawaii International Conference, 5490–5499.

[7] Sookhak, M., Talebian, H., Ahmed, E., Gani, A., & Khan, M. K. (2014). A review on remote data auditing in single cloud server: Taxonomy and open issues. Journal of Network and Computer Applications, 43, 121–141.

[8] Aguiar, E., Zhang, Y., & Blanton, M. (2014). An overview of issues and recent developments in cloud computing and storage security. High Performance Cloud Auditing and Applications, 3–33.

[9] Gul, I., & Islam, M. (2011). Cloud computing security auditing. Next Generation Information Technology (ICNIT), 2011 The 2nd International Conference on IEEE, 143–148.

[10] Mohamed, E. M., Abdelkader, H. A., & El-Etriby, S. (2012). Enhanced data security model for cloud computing. Informatics and Systems (INFOS), 2012 8th International Conference on IEEE, CC–12.

[11] Ramgovind, S., Eloff, M. M., & Smith, E. (2010). The management of security in cloud computing. Information Security for South Africa (ISSA), 1–7.

[12] Sabahi, F. (2011). Cloud computing security threats and responses. Communication Software and Networks (ICCSN), 2011 IEEE 3rd International Conference on IEEE, 245–249.

[13] Wang, X., Wang, B., & Huang, J. (2011). Cloud computing and its key techniques. Computer Science and Automation Engineering (CSAE), 2011 IEEE International Conference on, Vol. 2. IEEE, 404–410.

[14] Subashini, S., & Kavitha, V. (2011). A survey on security issues in service delivery models of cloud computing. Journal of Network and Computer Applications, 34(1), 1–11.

[15] Yang, J., & Chen, Z. (2010). Cloud computing research and security issues. Computational Intelligence and Software Engineering (CiSE), 2010 International Conference on IEEE, 1–3.

[16] Kaufman, L. M. (2009). Data security in the world of cloud computing. IEEE Security & Privacy, 7(4), 61–64.

[17] Wang, C., Ren, K., Lou, W., & Li, J. (2010). Toward publicly auditable secure cloud data storage services. Network, IEEE, 24(4), 19–24.

[18] Wei, L., Zhu, H., Cao, Z., Dong, X., Jia, W., Chen, Y., & Vasilakos, A. V. (2014). Security and privacy for storage and computation in cloud computing. Information Sciences, 258, 371–386.

[19] Wang, C., Wang, Q., Ren, K., & Lou, W. (2010). Privacy-preserving public auditing for data storage security in cloud computing. INFOCOM, 2010 Proceedings IEEE, 1–9.

[20] Yang, K., & Jia, X. (2012). Data storage auditing service in cloud computing: challenges, methods and opportunities. World Wide Web, 15(4), 409–428.

[21] Cloud Security Alliance. (2013). The notorious nine cloud computing top threats in 2013. Retrieved from https://cloudsecurityalliance.org/media/news/ca-warns-providers-of-the-notorious-nine-cloud-computing-top-threats-in-2013/

[22] Hubbard, D., & Sutton, M. (2010). Top threats to cloud computing v1.0. Retrieved from https://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf

[23] Baker, W. (2011). 2011 data breach investigations report. Retrieved from http://www.wired.com/images_blogs/threatlevel/2011/04/Verizon-2011-DBIR_04-13-11.pdf

[24] Brunette, G., & Mogull, R. (2009). Security guidance for critical areas of focus in cloud computing v2.1, 1–76.

[25] Catteddu, D. (2010). Cloud computing: benefits, risks and recommendations for information security. Web Application Security, 17.

[26] Aljahdali, H., Albatli, A., Garraghan, P., Townend, P., Lau, L. & Xu, J. (2014). Multi-tenancy in cloud computing. Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium, 344–351.

[27] Ristenpart, T., Tromer, E., Shacham, H., & Savage, S. (2009). Hey, you, get off my cloud: Exploring information leakage in third-party compute clouds. Proceedings of the 16th ACM Conference on Computer and Communications Security, 199–212.

[28] Aviram, A., Hu, S., Ford, B., & Gummadi, R. (2010). Determinating timing channels in compute clouds. Proceedings of the 2010 ACM Workshop on Cloud Computing Security Workshop, 103–108.

[29] Hlavacs, H., Treutner, T., Gelas, J. P., Lefevre, L., & Orgerie, A. C. (2011). Energy consumption side-channel attack at virtual machines in a cloud. Dependable, Autonomic and Secure Computing (DASC), 2011 IEEE Ninth International Conference on IEEE, 605–612.

[30] Hashizume, K., Rosado, D. G., Fernandez-Medina, E., & Fernandez, E. B. (2013). An analysis of security issues for cloud computing. Journal of Internet Services and Applications, 4(1), 1–13.

[31] Jansen, W. A. (2011). Cloud hooks: Security and privacy issues in cloud computing. System Sciences (HICSS), 2011 44th Hawaii International Conference on IEEE, 1–10.

[32] Gonzalez, N., Miers, C., Redígolo, F., Simplicio, M., Carvalho, T., Naslund, M., & Pourzandi, M. (2012). A quantitative analysis of current security concerns and solutions for cloud computing. Journal of Cloud Computing, 1(1), 1–18.

[33] Song, M. H. (2014). Analysis of risks for virtualization technology. Applied Mechanics and Materials, 539, 374–377.

[34] Bifulco, R., Canonico, R., Ventre, G., & Manetti, V. (2011). Transparent migration of virtual infrastructures in large datacenters for cloud computing. Computers and Communications (ISCC), 2011 IEEE Symposium on IEEE, 179–184.

[35] Zhang, F., & Chen, H. (2013). Security-preserving live migration of virtual machines in the cloud. Journal of Network and Systems Management, 21(4), 562–587.

[36] Corradi, A., Fanelli, M., & Foschini, L. (2014). VM consolidation: A real case based on OpenStack cloud. Future Generation Computer Systems, 32, 118–127.

[37] Fiebig, S., Siebenhaar, M., Gottron, C., & Steinmetz, R. (2013). Detecting VM live migration using a hybrid external approach. CLOSER, 483–488.

[38] Wu, H., Ding, Y., Winer, C., & Yao, L. (2010). Network security for virtual machine in cloud computing. Computer Sciences and Convergence Information Technology (ICCIT), 2010 5th International Conference on IEEE, 18–21.

[39] Jin, S., Ahn, J., Cha, S. & Huh, J. (2011). Architectural support for secure virtualization under a vulnerable hypervisor. Proceedings of the 44th Annual IEEE/ACM International Symposium on Microarchitecture, 272–283.

[40] Erway, C., Papamanthou, C., & Tamassia, R. (2009). Dynamic provable data possession. Proceedings of the 16th ACM conference on Computer and Communications Security, 213–222.

[41] Ateniese, G., Burns, R., Curtmola, R., Herring, J., Kissner, L., Peterson, Z., & Song, D. (2007). Provable data possession at untrusted stores. Proceedings of the 14th ACM Conference on Computer and Communications Security, 598–609.

[42] Ateniese, G., DiPietro, R., Mancini, L. V., & Tsudik, G. (2008). Scalable and efficient provable data possession. Proceedings of the 4th International Conference on Security and Privacy in Communication Networks, 9.

[43] Yang, K., & Jia, X. (2013). An efficient and secure dynamic auditing protocol for data storage in cloud computing. Parallel and Distributed Systems, 24 (9), 1717–1726.

[44] Wang, Q., Wang, C., Ren, K., Lou, W., & Li, J. (2011). Enabling public auditability and data dynamics for storage security in cloud computing. Parallel and Distributed Systems, 22 (5), 847–859.

[45] Wang, C., Wang, Q., Ren, K., Cao, N., & Lou, W. (2012). Toward secure and dependable storage services in cloud computing. Services Computing, IEEE Transactions on, 5(2), 220–232.

[46] Wang, C., Chow, S., Wang, Q., Ren, K., & Lou, W. (2013, February). Privacy-preserving public auditing for secure cloud storage. Computers, 62(2), 362–375.

[47] Balusamy, B., Venkatakrishna, P., Vaidhyanathan, A., Ravikumar, M., Devi Munisamy, N. (2015). Enhanced security framework for data integrity using third-party auditing in the cloud system. Artificial Intelligence and Evolutionary Algorithms in Engineering Systems, 325, 25–31.

[48] Ateniese, G., Burns, R., Curtmola, R., Herring, J., Khan, O., Kissner, L., Peterson, Z., & Song, D. (2011). Remote data checking using provable data possession. ACM Transactions on Information and System Security, 14(1), 12:1–12:34.

[49] Sookhak, M., Gani, A., Khan, M. K., & Buyya, R. (2015). Dynamic remote data auditing for securing big data storage in cloud computing. Information Sciences.

[50] Juels, A., & Kaliski Jr., B. S. (2007). PORs: Proofs of retrievability for large files. Proceedings of the 14th ACM Conference on Computer and Communications Security, 584–597.

[51] Halevi, S., Harnik, D., Pinkas, B., & Shulman-Peleg, A. (2011). Proofs of ownership in remote storage systems. Proceedings of the 18th ACM Conference on Computer and Communications Security, 491–500.

[52] Kaaniche, N., & Laurent, M. (2014). A secure client side deduplication scheme in cloud storage environments. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, 1–7.

[53] Al-Anzi, F. S., Salman, A. A., Jacob, N. K., & Soni, J. (2014). Towards robust, scalable and secure network storage in cloud computing. Digital Information and Communication Technology and Its Applications (DICTAP), 2014 Fourth International Conference on, 51–55.

[54] Bowers, K. D., Juels, A., & Oprea, A. (2009). HAIL: A high-availability and integrity layer for cloud storage. Proceedings of the 16th ACM Conference on Computer and Communications Security, 187–198.

[55] Bessani, A., Correia, M., Quaresma, B., Andre, F., & Sousa, P. (2013). DepSky: Dependable and secure storage in a cloud-of-clouds. ACM Transactions on Storage (TOS), 9(4).

[56] Chen, D., Li, X., Wang, L., Khan, S., Wang, J., Zeng, K., & Cai, C. (2014). Fast and scalable multi-way analysis of massive neural data. Computers, 63.

[57] Khan, A. N., Kiah, M. M., Madani, S. A., Ali, M., & Shamshirband, S. (2014). Incremental proxy re-encryption scheme for mobile cloud computing environment. The Journal of Supercomputing, 68(2), 624–651.

[58] Kumari, P. S., Venkateswarlu, P., & Afzal, M. (2015). A key aggregate framework with adaptable offering of information in cloud. International Journal of Research, 2(3), 5–10.

[59] Chu, C. K., Chow, S. S., Tzeng, W. G., Zhou, J., & Deng, R. H. (2014). Key-aggregate cryptosystem for scalable data sharing in cloud storage. Parallel and Distributed Systems, 25(2), 468–477.

[60] Belguith, S., Attia, R., & Jemai, A. (2015). Enhancing data security in cloud computing using a lightweight cryptographic algorithm. ICAS 2015: The Eleventh International Conference on Autonomic and Autonomous Systems.

[61] Shamir, A. (1985). Identity-based cryptosystems and signature schemes. Advances in Cryptology, 47–53.

[62] Sahai, A. & Waters, B. (2005). Fuzzy identity-based encryption. Advances in Cryptology–Eurocrypt, 457–473.

[63] Goyal, V., Pandey, O., Sahai, A., & Waters, B. (2006). Attribute-based encryption for fine-grained access control of encrypted data. Proceedings of the 13th ACM Conference on Computer and Communications Security, 89–98.

[64] Ostrovsky, R., Sahai, A., & Waters, B. (2007). Attribute-based encryption with non-monotonic access structures. Proceedings of the 14th ACM Conference on Computer and Communications Security, 195–203.

[65] Attrapadung, N., Libert, B., & DePanafieu, E. (2011). Expressive key-policy attribute-based encryption with constant-size ciphertexts. Public Key Cryptography–PKC, 90–108.

[66] Qiao, Z., Liang, S., Davis, S., & Jiang, H. (2014). Survey of attribute based encryption. Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2014 15th IEEE/ACIS International Conference on, 1–6.

[67] Bethencourt, J., Sahai, A., & Waters, B. (2007). Ciphertext-policy attribute-based encryption. Security and Privacy, 321–334.

[68] Hur, J. & Noh, D. K. (2011). Attribute-based access control with efficient revocation in data outsourcing systems. Parallel and Distributed Systems 22(7), 1214–1221.

[69] Hasan, H., & Chuprat, S. (2014). Secured data partitioning in multi cloud environment. Information and Communication Technologies (WICT), 2014 Fourth World Congress on, 146–151.

[70] Cachin, C., Keidar, I., & Shraer, A. (2009). Trusting the cloud. ACM SIGACT News, 40(2), 81–86.

[71] Vukolić, M. (2010). The Byzantine empire in the intercloud. ACM SIGACT News, 41(3), 105–111.

[72] Shankarwar, M., & Pawar, A. (2015). Security and privacy in cloud computing: A survey. Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA), 328, 1–11.

[73] Suganthi, J., Ananthi, J., & Archana, S. (2015). Privacy preservation and public auditing for cloud data using ASS in multi-cloud. Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, 1–6.

[74] Boneh, D., Gentry, C., Lynn, B., & Shacham, H. (2003). Aggregate and verifiably encrypted signatures from bilinear maps. Advances in Cryptology—Eurocrypt, 416–432.

[75] Donald, A., & Arockiam, L. (2015). A secure authentication scheme for mobicloud. Computer Communication and Informatics (ICCCI), 2015 International Conference on, 1–6.

[76] Ren, W., Yu, L., Gao, R., & Xiong, F. (2011). Lightweight and compromise resilient storage outsourcing with distributed secure accessibility in mobile cloud computing. Tsinghua Science & Technology, 16(5), 520–528.

[77] Khan, A. N., Kiah, M. M., Ali, M., Madani, S. A., & Shamshirband, S. (2014). BSS: Block-based sharing scheme for secure data storage services in mobile cloud environment. The Journal of Supercomputing, 70(2), 946–976.

[78] Louk, M., & Lim, H. (2015). Homomorphic encryption in mobile multi cloud computing. Information Networking (ICOIN), 2015 International Conference on, 493–497.

[79] Bahrami, M., & Singhal, M. (2015). A light-weight permutation based method for data privacy in mobile cloud computing. Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2015 3rd IEEE International Conference on, 189–198.

Licenses and Attributions

Data Security, Privacy, Availability and Integrity in Cloud Computing: Issues and Current Solutions by Sultan Aldossary and William Allen from International Journal of Advanced Computer Science and Applications is available under a Creative Commons Attribution 4.0 International license. UMGC has modified this work and it is available under the original license.