Azure Storage Account
- 1 Intro
- 2 Documentation
- 3 Tips and Tidbits
- 4 Immutable Storage
- 5 Storage Account Types
- 6 Storage Account Redundancy
- 7 Storage Account Endpoints
- 8 Data Import
- 9 Use private endpoints for Azure Storage
- 10 Controlling Access
- 11 Custom Domain
- 12 Storage Lifecycle Management
- 13 Storage Account Creation In Portal
- 14 Access Tier
- 15 Programming
- 16 Managing Concurrency in Blob storage
- 17 Reacting to Blob storage events
- 18 Storage Account Feed
- 19 Copy Blobs Between Containers In PowerShell
- 20 Move A Storage Account To Another Region
- 21 Tutorial - Encrypt and decrypt blobs using Azure Key Vault
- 22 Get Storage Access Token + Access Credentials To Storage Account
- 23 Create an account SAS with .NET
- 24 Create a service SAS
Intro
My notes on Azure Storage Accounts
Documentation
Download for Azure Storage Explorer App: https://azure.microsoft.com/en-us/features/storage-explorer/
Tips and Tidbits
An Azure Storage account is the top-level container for all of your Azure Blob storage.
The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS.
Storage account names must be:
globally unique (name collisions with other customers are possible)
3-24 characters long, lowercase letters and numbers only!
Storage account limits:
Resource | Limit |
---|---|
Number of storage accounts per region per subscription, including standard and premium storage accounts. | 250 |
Maximum storage account capacity | 5 PiB |
Account keys grant unlimited access to the entire content of the storage accounts.
Access keys provide authorization only, not identification; therefore, keys don't give the ability to properly audit storage usage.
Overcome this with shared access signatures (SAS), which provide secure delegated access to the resources in a storage account.
SAS offers granular control over data access, including the ability to limit access to an individual storage object, such as a blob, restricting such access to a custom time window, as well as filtering network access to a designated IP address range.
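As a sketch of those controls, the snippet below builds an account-level SAS with the Azure.Storage.Sas classes from the .NET SDK: read-only, blob service only, valid for one hour, and limited to a sample IP range. The account name, key, and IP range are hypothetical placeholders.

```csharp
using System;
using System.Net;
using Azure.Storage;
using Azure.Storage.Sas;

// Hypothetical account credentials; never hard-code real keys.
string accountName = "myaccount";
string accountKey = "<account-key>";

// Build an account SAS: read-only, blob service only, object-level,
// valid for one hour, and limited to a sample IP range.
var sasBuilder = new AccountSasBuilder
{
    Services = AccountSasServices.Blobs,
    ResourceTypes = AccountSasResourceTypes.Object,
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1),
    IPRange = new SasIPRange(IPAddress.Parse("203.0.113.1"), IPAddress.Parse("203.0.113.254"))
};
sasBuilder.SetPermissions(AccountSasPermissions.Read);

// Sign the SAS with the account key.
var credential = new StorageSharedKeyCredential(accountName, accountKey);
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
```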
All data written to Azure Storage is encrypted by the service.
Data in an Azure Storage account is always replicated three times in the primary region.
Designed to be massively scalable.
Azure storage in three categories:
Storage for Virtual Machines. This includes disks and files.
Unstructured Data. This includes Blobs and Data Lake Store
Data Lake Store is Hadoop Distributed File System (HDFS) as a service
Structured Data. This includes Tables. Tables are a key/value, auto-scaling NoSQL store.
Storage accounts have two tiers: Standard and Premium.
Standard storage accounts are backed by magnetic drives (HDD)
Premium storage accounts are backed by solid state drives (SSD)
For premium performance, choose between three account types: block blobs, page blobs, or file shares.
It is not possible to convert a Standard storage account to Premium storage account or vice versa.
A single storage account has a fixed-rate limit of 20,000 input/output (I/O) operations per second.
This means that a storage account is capable of supporting 40 standard VHDs at full utilization.
An old article showing the storage options available in 2019. Relevant because some AZ-303/AZ-304 questions are based on these outdated offerings.
Azure Storage includes these data services, each of which is accessed through a storage account.
Azure Containers (Blobs): A massively scalable object store for text and binary data.
Blobs (or Binary Large OBjects)
Stores unstructured data in the cloud as objects/blobs
Blob storage can store any type of text or binary data, such as a document, media file, or application installer.
Blob storage is also referred to as object storage.
A container provides a grouping of a set of blobs.
All blobs must be in a container.
An account can contain an unlimited number of containers.
A container can store an unlimited number of blobs
The name may only contain lowercase letters, numbers, and hyphens, and must begin with a letter or a number.
The name must also be between 3 and 63 characters long.
Azure Storage offers three types of blobs:
- Block blobs: consist of blocks of data assembled to make a blob.
- Page blobs: can be up to 8 TB in size and are more efficient for frequent read/write operations. Azure virtual machines use page blobs as OS and data disks.
- Append blobs: optimized for append operations, so they are useful for logging scenarios.
Azure Files: Managed file shares for cloud or on-premises deployments.
Network file shares that can be accessed by using the standard Server Message Block (SMB) protocol.
Multiple VMs can share the same files with both read and write access.
One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token.
Configuration files can be stored on a file share and accessed from multiple VMs.
Large file shares: This field enables the storage account for file shares spanning up to 100 TiB.
Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options.
Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file share capability.
FileStorage storage accounts (storage accounts for premium file shares) do not have this option, as all premium file shares can scale up to 100 TiB.
Selecting the blob access tier does not affect the tier of the file share.
Azure Queues: A messaging store for reliable messaging between application components.
Used to store and retrieve messages.
Messages can be up to 64 KB in size
A queue can contain millions of messages.
Generally used to store lists of messages to be processed asynchronously.
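A minimal sketch of the send/receive flow, assuming the Azure.Storage.Queues .NET package; the connection string and queue name are placeholders.

```csharp
using System;
using Azure.Storage.Queues;

string connectionString = "<storage-connection-string>"; // placeholder

// Create the queue if it doesn't exist, then enqueue a message (max 64 KB).
var queueClient = new QueueClient(connectionString, "orders-to-process");
await queueClient.CreateIfNotExistsAsync();
await queueClient.SendMessageAsync("process-order-12345");

// Receive one message; deleting it makes the dequeue permanent.
var messages = (await queueClient.ReceiveMessagesAsync(maxMessages: 1)).Value;
foreach (var message in messages)
{
    Console.WriteLine(message.MessageText);
    await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}
```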
Azure Tables: A NoSQL store for schemaless storage of structured data.
All storage accounts are encrypted using Storage Service Encryption (SSE) for data at rest.
Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the cloud.
Azure Storage encryption cannot be disabled.
You can specify a customer-managed key to use for encrypting and decrypting all data in the storage account.
A customer-managed key is used to encrypt all data in all services in your storage account.
You can specify a customer-provided key on Blob storage operations.
A client making a read or write request against Blob storage can include an encryption key on the request for granular control over how blob data is encrypted and decrypted.
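A sketch of passing a customer-provided key with the .NET v12 client; the generated key and names here are placeholders. The key travels with each request and is used to encrypt/decrypt, but is never stored by the service.

```csharp
using System;
using System.Security.Cryptography;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Generate a throwaway AES-256 key for illustration; in practice you would
// manage this key yourself (for example, in Azure Key Vault).
byte[] keyBytes = RandomNumberGenerator.GetBytes(32);

var options = new BlobClientOptions
{
    // Every request from clients created with these options carries the key.
    CustomerProvidedKey = new CustomerProvidedKey(keyBytes)
};

var blobClient = new BlobClient(
    "<storage-connection-string>", "mycontainer", "myblob.txt", options);
await blobClient.UploadAsync(BinaryData.FromString("secret data"), overwrite: true);
```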
Storage accounts that contain Azure managed disks for virtual machines always use LRS.
Azure unmanaged disks should also use LRS. It is possible to create a storage account for Azure unmanaged disks that uses GRS, but it is not recommended due to potential issues with consistency over asynchronous geo-replication.
Neither managed nor unmanaged disks support ZRS or GZRS.
A single storage account has a fixed-rate limit of 20,000 I/O operations/sec. This means that a storage account is capable of supporting 40 standard virtual hard disks (unmanaged disks) at full utilization.
If you need to scale out with more disks, then you'll need more storage accounts, which can get complicated.
Azure Storage doesn't support HTTPS for custom domain names; the secure transfer option is not applied when using a custom domain name.
Blob Storage and Azure Files are the only storage services that support Bring Your Own Key (BYOK) for encryption.
For other storage services (e.g., Azure Tables), Microsoft-provided keys are used.
You can serve static content (HTML, CSS, JavaScript, and image files) directly from a container in a general-purpose V2 or BlockBlobStorage account.
To learn more, see Static website hosting in Azure Storage.
Azure Storage static website hosting is a great option in cases where you don't require a web server to render content.
AuthN and AuthZ are not supported.
Consider using Azure Static Web Apps.
It's a great alternative to static websites and is also appropriate in cases where you don't require a web server to render content.
You can configure headers and AuthN / AuthZ is fully supported.
It's easier to enable HTTP access for your custom domain, because Azure Storage natively supports it.
To enable HTTPS, you'll have to use Azure CDN because Azure Storage does not yet natively support HTTPS with custom domains
Immutable Storage
Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state.
This state makes the data non-erasable and non-modifiable for a user-specified interval.
For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted.
Immutable storage is available for general-purpose v1, general-purpose v2, premium block blob, and legacy blob accounts in all Azure regions.
All blob access tiers support immutable storage.
All redundancy configurations support immutable storage
The minimum retention interval for a time-based retention policy is one day, and the maximum is 146,000 days (400 years).
When you first configure a time-based retention policy, the policy is unlocked for testing purposes.
When you have finished testing, you can lock the policy so that it is fully compliant with SEC 17a-4(f) and other regulatory compliance.
Both locked and unlocked policies protect against deletes and overwrites.
However, you can modify an unlocked policy by shortening or extending the retention period.
You can also delete an unlocked policy.
You cannot delete a locked time-based retention policy.
You can extend the retention period, but you cannot decrease it.
A maximum of five increases to the effective retention period is allowed over the lifetime of a locked policy that is defined at the container level.
For a policy configured for a blob version, there is no limit to the number of increases to the effective retention period.
A time-based retention policy can be configured at either of the following scopes:
Version-level policy: A time-based retention policy can be configured to apply to a blob version for granular management of sensitive data.
Container-level policy: A time-based retention policy that is configured at the container level applies to all objects in that container.
Individual objects cannot be configured with their own immutability policies.
If the retention interval is not known, users can set legal holds to store immutable data until the legal hold is cleared.
Legal hold policies: A legal hold is a temporary immutability policy that can be applied for legal investigation purposes or general protection policies. It stores blob data in a Write-Once, Read-Many (WORM) format until it is explicitly cleared. While a legal hold is in effect, blobs can be created and read, but not modified or deleted.
Administrators can remove a legal hold policy! This is not possible for time-based retention policies, so if a known retention time is given, use a time-based policy; once locked, it can't be removed.
Storage Account Types
General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage.
If you want support for NFS file shares in Azure Files, use the premium file shares account type.
General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible.
BlockBlobStorage accounts: Storage accounts with premium performance characteristics for block blobs and append blobs. Recommended for scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency.
FileStorage accounts: Files-only storage accounts with premium performance characteristics. Recommended for enterprise or high-performance scale applications.
BlobStorage accounts: Legacy Blob-only storage accounts. Use general-purpose v2 accounts instead when possible.
Premium storage accounts support page blobs in LRS, and block blobs and files in LRS and ZRS.
Storage Account Redundancy
Redundancy options for a storage account include:
Locally redundant storage (LRS): A simple, low-cost redundancy strategy. Data is copied synchronously three times within a single physical location in the primary region.
LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable.
A write request to a storage account that is using LRS happens synchronously. The write operation returns successfully only after the data is written to all three replicas.
LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
LRS is the least expensive replication option, but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS): Redundancy for scenarios requiring high availability. Data is copied synchronously across three Azure availability zones in the primary region.
Each availability zone is a separate physical location with independent power, cooling, and networking.
ZRS offers durability for Azure Storage data objects of at least 99.9999999999% (12 nines) over a given year.
Data is still accessible for both read and write operations even if a zone becomes unavailable.
When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
The write operation returns successfully only after the data is written to all replicas across the three availability zones.
ZRS is recommended for scenarios that require consistency, durability, and high availability.
ZRS is also recommended for restricting replication of data to within a country or region to meet data governance requirements.
Geo-redundant storage (GRS): Cross-regional redundancy to protect against regional outages.
Data is copied synchronously three times in the primary region, then copied asynchronously to the secondary region.
Within the secondary region, data is copied synchronously three times using LRS.
However, the data in the secondary region is available to be read/written only if the customer or Microsoft initiates a failover from the primary to secondary region.
Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are being directed to the new primary endpoint
The paired secondary region is determined based on the primary region, and can't be changed.
In some cases, the paired regions across which the data is geo-replicated may be in another country or region. For more information on paired regions, see Azure regions
GRS offers durability for Azure Storage data objects of at least 99.99999999999999% (16 nines) over a given year.
Geo-zone-redundant storage (GZRS): Redundancy for scenarios requiring both high availability and maximum durability.
Data is copied synchronously across three Azure availability zones in the primary region, then copied asynchronously to the secondary region.
Within the secondary region, data is copied synchronously three times using LRS.
GZRS is designed to provide at least 99.99999999999999% (16 nines) durability of objects over a given year.
With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there is a failover to the secondary region.
After the failover has completed, the secondary region becomes the primary region, and you can again read and write data.
For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
For more information, see Read access to data in the secondary region.
Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered.
When you enable read access to the secondary region, your data is available to be read at all times, including in a situation where the primary region becomes unavailable.
Azure Files does not support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
When read access to the secondary is enabled, your application can read from the secondary endpoint as well as from the primary endpoint.
The application must determine when to read from the secondary endpoint, so the app must be written with that in mind.
The secondary endpoint appends the suffix `-secondary` to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`.
To determine which write operations have been replicated to the secondary region, your application can check the Last Sync Time property for your storage account.
All write operations written to the primary region prior to the last sync time have been successfully replicated to the secondary region
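A sketch with the .NET v12 client (account name and credential are placeholders): `BlobClientOptions.GeoRedundantSecondaryUri` lets read retries fall back to the secondary endpoint, and the Last Sync Time can be read from the service statistics, which are served by the secondary endpoint.

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var credential = new DefaultAzureCredential();

// Allow read retries to fall back to the secondary endpoint (RA-GRS/RA-GZRS).
var options = new BlobClientOptions
{
    GeoRedundantSecondaryUri = new Uri("https://myaccount-secondary.blob.core.windows.net")
};
var primaryClient = new BlobServiceClient(
    new Uri("https://myaccount.blob.core.windows.net"), credential, options);

// Service statistics (including Last Sync Time) are served by the secondary endpoint.
var secondaryClient = new BlobServiceClient(
    new Uri("https://myaccount-secondary.blob.core.windows.net"), credential);
BlobServiceStatistics stats = await secondaryClient.GetStatisticsAsync();
Console.WriteLine($"Geo-replication status: {stats.GeoReplication.Status}");
Console.WriteLine($"Last sync time: {stats.GeoReplication.LastSyncedOn}");
```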
You can migrate from one type of redundancy to another as specified here: Change how a storage account is replicated - Azure Storage
Storage Account Endpoints
A storage account provides a unique namespace in Azure for your data.
Every object that you store in Azure Storage has an address that includes your unique account name.
The combination of the account name and the Azure Storage service endpoint forms the endpoints for your storage account.
Storage service | Endpoint |
---|---|
Blob storage | `https://<storage-account-name>.blob.core.windows.net` |
Azure Data Lake Storage Gen2 | `https://<storage-account-name>.dfs.core.windows.net` |
Azure Files | `https://<storage-account-name>.file.core.windows.net` |
Queue storage | `https://<storage-account-name>.queue.core.windows.net` |
Table storage | `https://<storage-account-name>.table.core.windows.net` |
Storage accounts have a public endpoint that is accessible through the internet.
You can also create Private Endpoints for your storage account, which assigns a private IP address from your VNet to the storage account
The Azure Storage firewall provides access control for the public endpoint of your storage account.
You can also use the firewall to block all access through the public endpoint when using private endpoints.
Storage firewall rules apply to the public endpoint of a storage account. They are not needed for private endpoints
Data Import
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter.
This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites.
Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.
If you want to transfer data using disk drives supplied by Microsoft, you can use Azure Data Box Disk to import data into Azure.
Microsoft ships up to 5 encrypted solid-state disk drives (SSDs) with a 40 TB total capacity per order, to your datacenter through a regional carrier.
Use private endpoints for Azure Storage
You can use private endpoints for your Azure Storage accounts to allow clients on a virtual network (VNet) to securely access data over a Private Link.
The private endpoint uses a separate IP address from the VNet address space for each storage account service.
Network traffic between the clients on the VNet and the storage account traverses over the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
You need a separate private endpoint for each storage resource that you need to access, namely Blobs, Data Lake Storage Gen2, Files, Queues, Tables, or Static Websites.
Controlling Access
By default, the data in your account is available only to you, the account owner
Every request made against your storage account must be authorized; the request must include a valid Authorization header.
Azure Active Directory: Use Azure Active Directory (Azure AD) credentials to authenticate a user
After you switch the authentication method, you will get an (expected) error.
Despite having the Owner role in the subscription, you also need to be assigned either a built-in or a custom role that provides access to the blob content of the storage account, such as Storage Blob Data Owner, Storage Blob Data Contributor, or Storage Blob Data Reader.
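A minimal sketch using Azure.Identity's DefaultAzureCredential (the account URL is a placeholder); listing containers succeeds only once a data-plane role such as Storage Blob Data Reader has been assigned.

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential resolves a managed identity, environment variables,
// Azure CLI login, Visual Studio sign-in, and so on, in order.
var blobServiceClient = new BlobServiceClient(
    new Uri("https://myaccount.blob.core.windows.net"),
    new DefaultAzureCredential());

// Fails with 403 (AuthorizationPermissionMismatch) without a data-plane role,
// even for subscription Owners.
await foreach (var container in blobServiceClient.GetBlobContainersAsync())
{
    Console.WriteLine(container.Name);
}
```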
Shared Key authorization: Use your storage account access key to construct a connection string that your application uses at runtime to access Azure Storage
Shared access signature: A shared access signature (SAS) is a token that permits delegated access to resources in your storage account.
You can make Blobs public.
Other tools used to secure data, including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs), are not yet supported in accounts that have NFS 3.0 protocol support enabled.
Blob storage supports Azure Data Lake Storage Gen2, Microsoft's enterprise big data analytics solution for the cloud. Azure Data Lake Storage Gen2 offers a hierarchical file system as well as the advantages of Blob storage
The hierarchical namespace allows you to define ACL and POSIX permissions on directories, subdirectories or individual files
Create a user delegation SAS
A SAS token for access to a container, directory, or blob may be secured by using either Azure AD credentials or an account key.
A SAS secured with Azure AD credentials is called a user delegation SAS.
Microsoft recommends that you use Azure AD credentials when possible as a security best practice, rather than using the account key, which can be more easily compromised.
When your application design requires shared access signatures, use Azure AD credentials to create a user delegation SAS for superior security.
Every SAS is signed with a key.
To create a user delegation SAS, you must first request a user delegation key, which is then used to sign the SAS.
The user delegation key is analogous to the account key used to sign a service SAS or an account SAS, except that it relies on your Azure AD credentials.
You can revoke a user delegation SAS either by revoking the user delegation key, or by changing or removing RBAC role assignments for the security principal used to create the SAS.
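A sketch of the flow with the .NET v12 client: request the user delegation key with Azure AD credentials, then use it to sign a read-only SAS for a single blob. Account, container, and blob names are placeholders.

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

var serviceClient = new BlobServiceClient(
    new Uri("https://myaccount.blob.core.windows.net"),
    new DefaultAzureCredential());

// Request the user delegation key that will sign the SAS.
UserDelegationKey key = await serviceClient.GetUserDelegationKeyAsync(
    startsOn: DateTimeOffset.UtcNow,
    expiresOn: DateTimeOffset.UtcNow.AddHours(1));

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "mycontainer",
    BlobName = "myblob.txt",
    Resource = "b", // "b" = blob, "c" = container
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);

// Sign the SAS with the user delegation key instead of the account key.
string sasToken = sasBuilder
    .ToSasQueryParameters(key, "myaccount")
    .ToString();
```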
Custom Domain
You can configure a custom domain for accessing blob data in your Azure storage account.
There are two ways to configure this service: Direct CNAME mapping and an intermediary domain.
Azure Storage does not yet natively support HTTPS with custom domains. You can currently Use Azure CDN to access blobs by using custom domains over HTTPS.
Storage Lifecycle Management
Azure Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle
Transition blobs from cool to hot immediately when they are accessed, to optimize for performance.
Transition blobs, blob versions, and blob snapshots to a cooler storage tier if these objects have not been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
Delete blobs, blob versions, and blob snapshots at the end of their lifecycles.
Define rules to be run once per day at the storage account level.
Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts
Data stored in a premium block blob storage account cannot be tiered to hot, cool, or archive using Set Blob Tier or using Azure Blob Storage lifecycle management
But you can still apply other lifecycle management policies, such as delete at expiration time.
If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob.
For example, action `delete` is cheaper than action `tierToArchive`. Action `tierToArchive` is cheaper than action `tierToCool`.
You can enable last access time tracking to keep a record of when your blob was last read or written, and use that record as a filter to manage tiering and retention of your blob data.
When last access time tracking is enabled, the blob property called `LastAccessTime` is updated when a blob is read or written; lifecycle rules can then filter on it with the `daysAfterLastAccessTimeGreaterThan` condition.
The `enableAutoTierToHotFromCool` property is a Boolean value that indicates whether a blob should automatically be tiered from cool back to hot if it is accessed again after being tiered to cool.
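Lifecycle management policies are defined as JSON rule documents. A sketch combining the pieces above (the rule name, prefix, and day counts are arbitrary): tier block blobs under `logs/` to cool 30 days after last access (auto-tiering back to hot on access), archive them 90 days after modification, and delete them after a year.

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "tier-then-delete-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "enableAutoTierToHotFromCool": true,
            "tierToCool": { "daysAfterLastAccessTimeGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```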
Storage Account Creation In Portal
Access Tier
Azure Storage provides different options for accessing block blob data based on usage patterns
The access tier can be set on a blob during or after upload.
Only the hot and cool access tiers can be set at the account level. The archive access tier can only be set at the blob level.
Hot tier - An online tier optimized for storing data that is accessed or modified frequently.
The Hot tier has the highest storage costs, but the lowest access costs.
Cool tier - An online tier optimized for storing data that is infrequently accessed or modified.
Data in the Cool tier should be stored for a minimum of 30 days.
Subject to an early deletion penalty if it is deleted or moved to a different tier before 30 days has elapsed.
The Cool tier has lower storage costs and higher access costs compared to the Hot tier.
Use the Hot tier for:
Data that's in active use or is expected to be read from and written to frequently.
Data that's staged for processing and eventual migration to the Cool access tier.
Use the Cool tier for:
Short-term data backup and disaster recovery.
Older data sets that are not used frequently, but are expected to be available for immediate access.
Large data sets that need to be stored in a cost-effective way while additional data is being gathered for processing.
Archive tier - An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours.
Data in the Archive tier should be stored for a minimum of 180 days
The archive access tier can only be set at the blob level.
Data access costs: Data access charges increase as the tier gets cooler.
For data in the cool and archive access tier, you're charged a per-gigabyte data access charge for reads
You can set a blob's access tier in any of the following ways:
By setting the default online access tier (Hot or Cool) for the storage account. Blobs in the account inherit this access tier unless you explicitly override the setting for an individual blob.
By explicitly setting a blob's tier on upload. You can create a blob in the Hot, Cool, or Archive tier.
By changing an existing blob's tier with a Set Blob Tier operation or via a lifecycle management policy, typically to move from a hotter tier to a cooler one.
By copying a blob with a Copy Blob operation, typically to move from a cooler tier to a hotter one.
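A minimal sketch of the third option using the .NET client's wrapper around the Set Blob Tier operation (connection string and names are placeholders):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var blobClient = new BlobClient(
    "<storage-connection-string>", "mycontainer", "myblob.txt");

// Move the blob to the Cool tier (wraps the Set Blob Tier operation).
await blobClient.SetAccessTierAsync(AccessTier.Cool);
```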
While a blob is in the archive access tier, it's considered to be offline and can't be read or modified.
In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the hot or cool tier.
Two options for rehydrating a blob that is stored in the archive tier:
Copy an archived blob to an online tier: You can rehydrate an archived blob by copying it to a new blob in the hot or cool tier with the Copy Blob or Copy Blob from URL operation.
Microsoft recommends this option for most scenarios.
Change a blob's access tier to an online tier: You can rehydrate an archived blob to hot or cool by changing its tier using the Set Blob Tier operation.
Changing a blob's tier doesn't affect its last modified time.
If there is a lifecycle management policy in effect for the storage account, then rehydrating a blob with Set Blob Tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier after rehydration because the last modified time is beyond the threshold set for the policy.
Rehydration priority options include:
Standard priority: The rehydration request will be processed in the order it was received and may take up to 15 hours.
High priority: The rehydration request will be prioritized over standard priority requests and may complete in under one hour for objects under 10 GB in size.
Call Get Blob Properties to return the value of the `x-ms-rehydrate-priority` header. The rehydration priority property returns either Standard or High.
When you copy an archived blob to a new blob in an online tier, the source blob remains unmodified in the archive tier.
You must copy the archived blob to a new blob with a different name or to a different container.
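A sketch of both rehydration options with the .NET client (connection string and blob names are placeholders; the copy targets the same storage account):

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var containerClient = new BlobContainerClient("<storage-connection-string>", "mycontainer");
BlobClient archivedBlob = containerClient.GetBlobClient("archived.txt");

// Option 1 (recommended): copy to a new blob in an online tier;
// the source stays untouched in the Archive tier.
BlobClient destBlob = containerClient.GetBlobClient("rehydrated-copy.txt");
await destBlob.StartCopyFromUriAsync(archivedBlob.Uri, new BlobCopyFromUriOptions
{
    AccessTier = AccessTier.Hot,
    RehydratePriority = RehydratePriority.Standard
});

// Option 2: rehydrate in place by changing the blob's tier.
await archivedBlob.SetAccessTierAsync(
    AccessTier.Hot, rehydratePriority: RehydratePriority.High);
```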
Programming
The Azure Storage client libraries for .NET offer a convenient interface for making calls to Azure Storage.
Blob storage offers three types of resources:
The storage account
A container in the storage account
A blob in the container
Use the following .NET classes to interact with these resources:
- `BlobServiceClient`: allows you to manipulate Azure Storage resources and blob containers.
- `BlobContainerClient`: allows you to manipulate Azure Storage containers and their blobs.
- `BlobClient`: allows you to manipulate Azure Storage blobs.
Exercise: Create Blob storage resources by using the .NET client library
Set and retrieve properties and metadata for blob resources by using REST
```csharp
// Create a client that can authenticate with a connection string
BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnectionString);

// Create the container and return a container client object
BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);

// Get a reference to the blob
BlobClient blobClient = containerClient.GetBlobClient(fileName);
```
The `Lease Blob` operation creates and manages a lock on a blob for write and delete operations. The lock duration can be 15 to 60 seconds, or can be infinite.
The `Lease Blob` operation can be called in one of five modes:
- `Acquire`, to request a new lease.
- `Renew`, to renew an existing lease.
- `Change`, to change the ID of an existing lease.
- `Release`, to free the lease if it is no longer needed so that another client may immediately acquire a lease against the blob.
- `Break`, to end the lease but ensure that another client cannot acquire a new lease until the current lease period has expired.
If the proposed lease duration is `null`, an infinite lease will be acquired. If not null, it must be 15 to 60 seconds.
`BreakLeaseAsync(Nullable<TimeSpan>)` initiates an asynchronous operation that breaks the current lease on this container. The parameter is a TimeSpan representing the amount of time to allow the lease to remain, which will be rounded down to seconds. If `null`, the break period is the remainder of the current lease, or zero for infinite leases; an infinite lease breaks immediately.
Managing Concurrency in Blob storage
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data. For example, if two users viewing a wiki page make an update to that page, then the wiki platform must ensure that the second update does not overwrite the first update. It must also ensure that both users understand whether their update was successful. This strategy is most often used in web applications.
Pessimistic concurrency: An application looking to perform an update will take a lock on an object preventing other users from updating the data until the lock is released. For example, in a primary/secondary data replication scenario in which only the primary performs updates, the primary typically holds an exclusive lock on the data for an extended period of time to ensure no one else can update it.
Last writer wins: An approach that allows update operations to proceed without first determining whether another application has updated the data since it was read. This approach is typically used when data is partitioned in such a way that multiple users will not access the same data at the same time. It can also be useful where short-lived data streams are being processed.
Optimistic concurrency
Azure Storage assigns an identifier to every object stored.
This identifier is updated every time a write operation is performed on an object.
The identifier is returned to the client as part of an HTTP GET response in the ETag header that is defined by the HTTP protocol.
A client that is performing an update can send the original ETag together with a conditional header to ensure that an update will only occur if a certain condition has been met.
For example, if the If-Match header is specified, Azure Storage verifies that the value of the ETag specified in the update request is the same as the ETag for the object being updated.
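A sketch of the If-Match flow with the .NET v12 client (connection string, names, and content are placeholders):

```csharp
using System;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var blobClient = new BlobClient("<storage-connection-string>", "mycontainer", "note.txt");

// Read the blob's current ETag.
BlobProperties props = await blobClient.GetPropertiesAsync();
ETag originalETag = props.ETag;

try
{
    // Upload succeeds only if the blob still has the ETag we read.
    await blobClient.UploadAsync(
        BinaryData.FromString("updated content"),
        new BlobUploadOptions
        {
            Conditions = new BlobRequestConditions { IfMatch = originalETag }
        });
}
catch (RequestFailedException e) when (e.Status == 412)
{
    // 412 Precondition Failed: another writer changed the blob since we read it.
    Console.WriteLine("Blob was modified by another client; re-read and retry.");
}
```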
Pessimistic concurrency for blobs
To lock a blob for exclusive use, you can acquire a lease on it.
When you acquire the lease, you specify the duration of the lease.
A finite lease may be valid for between 15 and 60 seconds.
A lease can also be infinite, which amounts to an exclusive lock.
You can renew a finite lease to extend it, and you can release the lease when you're finished with it.
Azure Storage automatically releases finite leases when they expire.
Leases enable different synchronization strategies to be supported, including exclusive write/shared read operations, exclusive write/exclusive read operations, and shared write/exclusive read operations
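A sketch of the lease flow using `BlobLeaseClient` from Azure.Storage.Blobs.Specialized (connection string and names are placeholders):

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized; // GetBlobLeaseClient extension method

var blobClient = new BlobClient("<storage-connection-string>", "mycontainer", "note.txt");

// Acquire a 30-second lease (finite leases must be 15-60 seconds).
BlobLeaseClient leaseClient = blobClient.GetBlobLeaseClient();
BlobLease lease = await leaseClient.AcquireAsync(TimeSpan.FromSeconds(30));

try
{
    // While the lease is held, writes must present the lease ID.
    await blobClient.UploadAsync(
        BinaryData.FromString("exclusive update"),
        new BlobUploadOptions
        {
            Conditions = new BlobRequestConditions { LeaseId = lease.LeaseId }
        });
}
finally
{
    // Release so other clients can acquire a lease immediately.
    await leaseClient.ReleaseAsync();
}
```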
Modifying A Blob’s Metadata
Get metadata
```csharp
try
{
    // Get the blob's properties and metadata.
    BlobProperties properties = await blob.GetPropertiesAsync();
    Console.WriteLine("Blob metadata:");
    // Enumerate the blob's metadata.
    foreach (var metadataItem in properties.Metadata)
    {
        Console.WriteLine($"\tKey: {metadataItem.Key}");
        Console.WriteLine($"\tValue: {metadataItem.Value}");
    }
}
// A try block needs a handler to compile; RequestFailedException is the
// Azure SDK's service error type (requires `using Azure;`).
catch (RequestFailedException e)
{
    Console.WriteLine($"HTTP error code {e.Status}: {e.ErrorCode}");
}
```
For .NET v11
```csharp
// Fetch blob attributes in order to populate
// the blob's properties and metadata.
await blob.FetchAttributesAsync();
```
Set metadata
```csharp
try
{
    IDictionary<string, string> metadata =
        new Dictionary<string, string>();

    // Add metadata to the dictionary by calling the Add method
    metadata.Add("docType", "textDocuments");

    // Add metadata to the dictionary by using key/value syntax
    metadata["category"] = "guidance";

    // Set the blob's metadata.
    await blob.SetMetadataAsync(metadata);
}
// A try block needs a handler to compile; RequestFailedException is the
// Azure SDK's service error type (requires `using Azure;`).
catch (RequestFailedException e)
{
    Console.WriteLine($"HTTP error code {e.Status}: {e.ErrorCode}");
}
```
Reacting to Blob storage events
Azure Storage events allow applications to react to events, such as the creation and deletion of blobs.
Blob storage events are pushed using Azure Event Grid to subscribers such as Azure Functions, Azure Logic Apps, or even your own HTTP listener.
Event Grid provides reliable event delivery to your applications through rich retry policies and dead-lettering.
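A sketch of a subscriber: an Azure Function with the Event Grid trigger (in-process WebJobs model; the function name is arbitrary and an Event Grid subscription on the storage account is assumed).

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class BlobEventHandler
{
    [FunctionName("BlobEventHandler")]
    public static void Run([EventGridTrigger] EventGridEvent evt, ILogger log)
    {
        // Event types include Microsoft.Storage.BlobCreated and
        // Microsoft.Storage.BlobDeleted; Subject carries the blob path.
        log.LogInformation("Event type: {type}", evt.EventType);
        log.LogInformation("Subject: {subject}", evt.Subject);
    }
}
```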