08-10-2024 08:13 PM
So the client wants AWS S3 Object Lock when a retention rule is applied. As the default S3 bucket is shared and doesn't already have Object Lock enabled, the idea is to have an additional S3 bucket with Object Lock. But none of that is actually part of my question.
My question is: why is the document/blob (a PDF manually imported when creating a document) not showing up in the default S3 bucket? One coworker mentioned that if I was reusing an existing sample PDF that was already in the bucket, that would only result in an additional pointer to the object. So I'm creating PDFs that are different, changing the text in a Google document before saving it off as a PDF.
Also, am I correct in assuming the UUID I see in the JSON for the document will be the same UUID in the S3 bucket?
"blobUrl": "https://xyz.com/csp/nxfile/default/d67ebdc9-c633-4ce8-9f52-cc69b5073a5c/file:content/Record%202024.08.10.1.pdf"
That is, this value: d67ebdc9-c633-4ce8-9f52-cc69b5073a5c
Thx
10-10-2024 06:52 PM
The UUIDs you see in the URL are for Documents (https://doc.nuxeo.com/n/UM4), not BLOBs. I.e. a single Document may have many BLOBs.
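To illustrate the Document-to-BLOB relationship, here is a hypothetical sketch of a Document's JSON with more than one BLOB attached (the property names come from the standard file/files schemas; the UUID is the one from the question, and the digests and file names are made up):

```json
{
  "uid": "d67ebdc9-c633-4ce8-9f52-cc69b5073a5c",
  "properties": {
    "file:content": { "name": "Record.pdf", "digest": "5d41402abc4b2a76b9719d911017c592" },
    "files:files": [
      { "file": { "name": "Appendix.pdf", "digest": "0cc175b9c0f1b6a831c399e269772661" } }
    ]
  }
}
```

The "uid" identifies the Document; each BLOB carries its own digest, which is what drives the S3 object names.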
BLOBs are stored using their digest. If you upload the same BLOB, the digest won't change, so nothing new will be added to S3. See: https://doc.nuxeo.com/nxdoc/file-storage/#how-it-works Note that when I say "stored using their digest", it's an over-simplification; for example, BLOBs are grouped into "folders" using part of the digest as the folder name. I'm just trying to say that the file names you see in S3 are built from the BLOB digest, not a UUID.
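A minimal sketch of that idea in Python, assuming MD5 (Nuxeo's default digest algorithm) and an illustrative two-character folder prefix; the exact key layout in S3 is an assumption here, not Nuxeo's precise scheme:

```python
import hashlib

def blob_storage_key(data: bytes, folder_chars: int = 2) -> str:
    """Derive a content-addressed storage key from the blob bytes.

    The key depends only on the content digest, so identical bytes
    always map to the same key (hence the deduplication behavior).
    The folder prefix length is an illustrative assumption.
    """
    digest = hashlib.md5(data).hexdigest()
    return f"{digest[:folder_chars]}/{digest}"

# Identical bytes -> identical key -> one S3 object, extra uploads dedupe.
# Changing even one character of the PDF text -> new digest -> new object.
```

This is why reusing the same sample PDF adds nothing to the bucket, while a PDF with edited text produces a new digest and therefore a new S3 object.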
The only situation I can think of where a BLOB uploaded to Nuxeo wouldn't appear in the S3 bucket is when no Document was created. I.e. the batch upload API allows you to upload all the BLOBs you want, but if you never create Documents associated with the batch, the BLOBs won't end up in S3.