@@ -61,7 +61,8 @@ public GSBlobStorage.WriteChannel writeBlob(GSBlobIdentifier blobIdentifier) {
Preconditions.checkNotNull(blobIdentifier);

BlobInfo blobInfo = BlobInfo.newBuilder(blobIdentifier.getBlobId()).build();
com.google.cloud.WriteChannel writeChannel = storage.writer(blobInfo);
com.google.cloud.WriteChannel writeChannel =
Contributor:
Is there a way to cover this change with a JUnit test?

Contributor Author:
We could wrap the storage in a unit test and assert that the proper option is passed.

For this we would need to extend Storage, which is marked InternalExtensionOnly.

Some e2e coverage would be great to have, but from what I understand only Azure is available in CI.

Contributor Author:
I briefly tried to write a unit test, and without mocking it is hard. GSBlobStorageImpl is currently too low-level; it is itself the component that gets mocked elsewhere in the codebase.

Creating a TestStorage to intercept GCP calls is also hard, notably because some of the methods return a Blob object, which cannot be instantiated without reflection.
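The capturing-fake approach discussed above can be sketched in miniature. This is a hypothetical illustration only: `MiniStorage`, `CapturingStorage`, and `writeWithPrecondition` are stand-ins invented here, not the real google-cloud-storage API, which cannot be faked this simply because Storage is InternalExtensionOnly.

```java
import java.util.ArrayList;
import java.util.List;

public class CapturingStorageSketch {

    // Hypothetical, trimmed-down stand-in for the storage client interface.
    interface MiniStorage {
        void writer(String blobName, String... options);
    }

    // A fake that records every option passed to writer(), so a test can
    // assert that the precondition option was supplied.
    static final class CapturingStorage implements MiniStorage {
        final List<String> capturedOptions = new ArrayList<>();

        @Override
        public void writer(String blobName, String... options) {
            capturedOptions.addAll(List.of(options));
        }
    }

    // Stand-in for the code under test: it is expected to attach a
    // precondition option when opening a write channel.
    static void writeWithPrecondition(MiniStorage storage, String blobName) {
        storage.writer(blobName, "doesNotExist");
    }

    public static void main(String[] args) {
        CapturingStorage fake = new CapturingStorage();
        writeWithPrecondition(fake, "my-blob");
        System.out.println(fake.capturedOptions); // [doesNotExist]
    }
}
```

The same capture-and-assert pattern is what a Mockito-based test would do with an argument captor, if mocking were acceptable here.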

storage.writer(blobInfo, getBlobWriteOption(blobIdentifier));
return new WriteChannel(blobIdentifier, writeChannel);
}

@@ -76,7 +77,8 @@ public GSBlobStorage.WriteChannel writeBlob(
Preconditions.checkArgument(chunkSize.getBytes() > 0);

BlobInfo blobInfo = BlobInfo.newBuilder(blobIdentifier.getBlobId()).build();
com.google.cloud.WriteChannel writeChannel = storage.writer(blobInfo);
com.google.cloud.WriteChannel writeChannel =
storage.writer(blobInfo, getBlobWriteOption(blobIdentifier));
writeChannel.setChunkSize((int) chunkSize.getBytes());
return new WriteChannel(blobIdentifier, writeChannel);
}
@@ -153,11 +155,48 @@ public void compose(
for (GSBlobIdentifier blobIdentifier : sourceBlobIdentifiers) {
builder.addSource(blobIdentifier.objectName);
}
Storage.BlobTargetOption precondition = getBlobTargetOption(targetBlobIdentifier);
Storage.ComposeRequest request = builder.setTargetOptions(precondition).build();

Storage.ComposeRequest request = builder.build();
storage.compose(request);
}

/**
 * Attaching a precondition makes the operation conditionally idempotent, allowing the GCS
 * client to safely retry 503 errors.
 */
private Storage.BlobTargetOption getBlobTargetOption(GSBlobIdentifier blobIdentifier) {
Blob blob = storage.get(blobIdentifier.bucketName, blobIdentifier.objectName);
if (blob == null) {
// For a target object that does not yet exist, set the DoesNotExist precondition.
// This will cause the request to fail if the object is created before the request runs.
return Storage.BlobTargetOption.doesNotExist();
} else {
// If the destination already exists in your bucket, instead set a generation-match
// precondition. This will cause the request to fail if the existing object's generation
// changes before the request runs.
return Storage.BlobTargetOption.generationMatch(blob.getGeneration());
}
}

/**
 * Attaching a precondition makes the operation conditionally idempotent, allowing the GCS
 * client to safely retry 503 errors.
 */
private Storage.BlobWriteOption getBlobWriteOption(GSBlobIdentifier blobIdentifier) {
Blob blob = storage.get(blobIdentifier.bucketName, blobIdentifier.objectName);
if (blob == null) {
// For a target object that does not yet exist, set the DoesNotExist precondition.
// This will cause the request to fail if the object is created before the request runs.
return Storage.BlobWriteOption.doesNotExist();
} else {
// If the destination already exists in your bucket, instead set a generation-match
// precondition. This will cause the request to fail if the existing object's generation
// changes before the request runs.
return Storage.BlobWriteOption.generationMatch(blob.getGeneration());
}
}
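The branching logic shared by both helpers can be sketched in isolation. This is a minimal, self-contained illustration of the decision only; `choosePrecondition` and the string results are invented stand-ins, not the real `Storage.BlobWriteOption` / `Storage.BlobTargetOption` types.

```java
import java.util.OptionalLong;

public class PreconditionSketch {

    // Given the target blob's current generation (empty if the blob does not
    // exist yet), describe which precondition the request should carry.
    static String choosePrecondition(OptionalLong currentGeneration) {
        if (currentGeneration.isEmpty()) {
            // No existing object: fail the request if it is created first.
            return "doesNotExist";
        }
        // Existing object: fail the request if its generation changes first.
        return "generationMatch(" + currentGeneration.getAsLong() + ")";
    }

    public static void main(String[] args) {
        System.out.println(choosePrecondition(OptionalLong.empty()));  // doesNotExist
        System.out.println(choosePrecondition(OptionalLong.of(42L)));  // generationMatch(42)
    }
}
```

Note that the real helpers perform a get-then-write sequence, so the precondition is what makes the request fail safely if another writer races in between.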

@Override
public List<Boolean> delete(Iterable<GSBlobIdentifier> blobIdentifiers) {
LOGGER.trace("Deleting blobs {}", blobIdentifiers);