in Azure by (5.8k points)

I am running into a huge performance bottleneck when using Azure table storage. My desire is to use tables as a sort of cache, so a long process may result in anywhere from hundreds to several thousand rows of data. The data can then be quickly queried by partition and row keys.

The querying is working pretty fast (extremely fast when only using partition and row keys, a bit slower, but still acceptable when also searching through properties for a particular match).
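The speed difference described above comes from how Table Storage resolves queries: an exact PartitionKey + RowKey pair is a direct point lookup, while filtering on other properties scans the partition. A minimal sketch of the two filter shapes, with hypothetical usage of the `azure-data-tables` Python SDK (connection string, table, and property names are placeholders):

```python
# Point lookups fetch a single entity; property filters scan the partition.
def point_filter(pk: str, rk: str) -> str:
    """OData filter for an exact PartitionKey/RowKey lookup (fastest path)."""
    return f"PartitionKey eq '{pk}' and RowKey eq '{rk}'"

def property_filter(pk: str, prop: str, value: str) -> str:
    """OData filter that scans one partition for a matching property (slower)."""
    return f"PartitionKey eq '{pk}' and {prop} eq '{value}'"

# Hypothetical usage (requires real credentials; names are placeholders):
# from azure.data.tables import TableClient
# client = TableClient.from_connection_string(conn_str, table_name="org1")
# entity = client.get_entity(partition_key="org1", row_key="00000042")  # point lookup
# rows = client.query_entities(property_filter("org1", "Status", "done"))  # partition scan
```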

However, both inserting and deleting rows is painfully slow.

Here are a couple of examples. Each example uses its own PartitionKey:

Successfully inserted 904 rows into table org1; TraceSource 'w3wp.exe' event

Elapsed time: 00:00:01.3401031; TraceSource 'w3wp.exe' event

Successfully inserted 4130 rows into table org1; TraceSource 'w3wp.exe' event

Elapsed time: 00:00:07.3522871; TraceSource 'w3wp.exe' event

Successfully inserted 28020 rows into table org1; TraceSource 'w3wp.exe' event

Elapsed time: 00:00:51.9319217; TraceSource 'w3wp.exe' event

Maybe it's my MSDN Azure account that has some performance caps? I don't know.

1 Answer

by (9.6k points)

Enable logging and check your logs under -

 c:\users\username\appdata\local\developmentstorage

A batch size of 100 entities offers the best performance. Also make sure you are not inserting duplicates, because that will cause an error and slow everything down.
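Table Storage entity group transactions are limited to 100 operations, and all entities in one batch must share the same PartitionKey. A minimal sketch of chunking the inserts into batches of 100, with hypothetical usage of the `azure-data-tables` Python SDK (connection string and table name are placeholders):

```python
from itertools import islice

def chunks(iterable, size=100):
    """Yield successive lists of at most `size` items (the batch limit is 100)."""
    it = iter(iterable)
    while True:
        group = list(islice(it, size))
        if not group:
            return
        yield group

# Illustrative entities, all in one partition (as in the question).
entities = [
    {"PartitionKey": "org1", "RowKey": f"{i:08d}", "Value": i}
    for i in range(4130)
]

# Each batch is a list of (operation, entity) pairs, at most 100 per batch,
# all sharing one PartitionKey.
batches = [[("upsert", e) for e in group] for group in chunks(entities, 100)]

# Hypothetical usage (requires real credentials):
# from azure.data.tables import TableClient
# client = TableClient.from_connection_string(conn_str, table_name="org1")
# for batch in batches:
#     client.submit_transaction(batch)
```

Submitting 4130 rows this way issues 42 round trips instead of 4130 single-entity inserts, which is where the large speedup comes from.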
