Reading large objects from Glacier Instant Retrieval #1670
Hi there, I'm curious about the potential cost implications of reading large objects in the Glacier Instant Retrieval tier with mountpoint-s3.
Hey @IdrisMiles, thanks for your interest!

To address the specific example, you would be charged once for the 1GB of data retrieved (alongside the per-request GET cost).

If you were to use a smaller read part size, the number of requests would increase, but the amount of data read would remain the same. In pricing terms, the per-request cost would increase while the data retrieval cost stayed flat. A larger object would increase both pricing components, since the amount of data retrieved would exceed 1GB and the number of 8MB requests needed to retrieve it would also grow.

If you wish to reduce costs here, you could explore increasing the read part size for the Mountpoint file system, possibly at the expense of worse read performance. Do let me know if there's still any ambiguity and I can reopen!
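To make the trade-off concrete, here is a minimal sketch of the cost arithmetic described above. The unit prices below are placeholders for illustration only (check the S3 pricing page for Glacier Instant Retrieval in your region); the key point is that the request count scales with the part size while the retrieval charge depends only on total bytes read.

```python
import math

# Placeholder prices for illustration -- NOT official S3 pricing.
GET_PRICE_PER_1000 = 0.01        # assumed per-1,000 GET requests (USD)
RETRIEVAL_PRICE_PER_GB = 0.03    # assumed per-GB retrieval charge (USD)

def read_cost(object_size_bytes: int, part_size_bytes: int):
    """Estimate the cost of reading one object once.

    Mountpoint fetches objects with fixed-size ranged GETs, so the
    request count is ceil(object size / part size), while the
    retrieval charge depends only on the total bytes read.
    """
    requests = math.ceil(object_size_bytes / part_size_bytes)
    request_cost = requests / 1000 * GET_PRICE_PER_1000
    retrieval_cost = object_size_bytes / 2**30 * RETRIEVAL_PRICE_PER_GB
    return requests, request_cost + retrieval_cost

GIB = 2**30
MIB = 2**20

# A 1 GiB object read with an 8 MiB part size vs a 16 MiB one:
print(read_cost(1 * GIB, 8 * MIB))   # 128 requests
print(read_cost(1 * GIB, 16 * MIB))  # 64 requests: half the request
                                     # cost, identical retrieval cost
```

Doubling the part size halves the per-request component but leaves the retrieval component untouched, which is why increasing the read part size (e.g. via Mountpoint's part-size configuration, whose exact flag name depends on your Mountpoint version) only trims the smaller of the two charges.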