Enable blob storage in your project by setting `blob: true` in the NuxtHub config:

```ts
export default defineNuxtConfig({
  hub: {
    blob: true
  }
})
```
When building the Nuxt app, NuxtHub automatically configures the blob storage driver on many providers.
### Vercel

When deploying to Vercel, the Nitro Storage `blob` mount is configured for Vercel Blob Storage. Install the `@vercel/blob` package with your package manager:

```bash
pnpm add @vercel/blob
yarn add @vercel/blob
npm install @vercel/blob
bun add @vercel/blob
deno add npm:@vercel/blob
npx nypm add @vercel/blob
```
### Cloudflare

When deploying to Cloudflare, the Nitro Storage `blob` mount is configured for Cloudflare R2. Add a `BLOB` binding to a Cloudflare R2 bucket in your `wrangler.jsonc` config:

```jsonc
{
  "$schema": "node_modules/wrangler/config-schema.json",
  // ...
  "r2_buckets": [
    {
      "binding": "BLOB",
      "bucket_name": "<bucket_name>"
    }
  ]
}
```

Learn more about adding bindings in Cloudflare's documentation.
### Netlify

When deploying to Netlify, the Nitro Storage `blob` mount is configured for Netlify Blobs. Install the `@netlify/blobs` package with your package manager:

```bash
pnpm add @netlify/blobs
yarn add @netlify/blobs
npm install @netlify/blobs
bun add @netlify/blobs
deno add npm:@netlify/blobs
npx nypm add @netlify/blobs
```

Set the `NETLIFY_BLOB_STORE_NAME` environment variable to configure your blob store name.

### Azure

When deploying to Azure Functions, the Nitro Storage `blob` mount is configured for Azure Blob Storage.
Install the `@azure/app-configuration` and `@azure/identity` packages with your package manager:

```bash
pnpm add @azure/app-configuration @azure/identity
yarn add @azure/app-configuration @azure/identity
npm install @azure/app-configuration @azure/identity
bun add @azure/app-configuration @azure/identity
deno add npm:@azure/app-configuration @azure/identity
npx nypm add @azure/app-configuration @azure/identity
```

Set the `AZURE_BLOB_ACCOUNT_NAME` environment variable to configure your storage account.

### AWS

When deploying to AWS Lambda or AWS Amplify, the Nitro Storage `blob` mount is configured for Amazon S3.
Install the `aws4fetch` package with your package manager:

```bash
pnpm add aws4fetch
yarn add aws4fetch
npm install aws4fetch
bun add aws4fetch
deno add npm:aws4fetch
npx nypm add aws4fetch
```

Set the following environment variables to configure the S3 driver:

- `S3_ACCESS_KEY_ID`
- `S3_SECRET_ACCESS_KEY`
- `S3_BUCKET`
- `S3_REGION`
- `S3_ENDPOINT` (optional)

### DigitalOcean

When deploying to DigitalOcean, the Nitro Storage `blob` mount is configured for DigitalOcean Spaces.
Install the `aws4fetch` package with your package manager:

```bash
pnpm add aws4fetch
yarn add aws4fetch
npm install aws4fetch
bun add aws4fetch
deno add npm:aws4fetch
npx nypm add aws4fetch
```

Set the following environment variables to configure the Spaces driver:

- `SPACES_KEY`
- `SPACES_SECRET`
- `SPACES_BUCKET`
- `SPACES_REGION`

### Other providers

When deploying to other providers, the Nitro Storage `blob` mount is configured to use the filesystem.
If you need to adjust the automatic configuration, or would like to use a different storage driver, you can manually configure the `blob` mount within your Nitro Storage configuration. A `blob` mount defined in Nitro Storage overrides the automatic configuration:

```ts
export default defineNuxtConfig({
  nitro: {
    storage: {
      blob: {
        driver: 's3',
        accessKeyId: 'your-access-key-id',
        secretAccessKey: 'your-secret-access-key',
        bucket: 'your-bucket-name',
        region: 'your-region'
        /* any additional driver options */
      }
    }
  },
  hub: {
    blob: true,
  },
})
```
NuxtHub uses the filesystem during local development. You can modify this behaviour by specifying a different development storage driver:

```ts
export default defineNuxtConfig({
  nitro: {
    devStorage: {
      blob: {
        driver: 's3',
        accessKeyId: 'your-access-key-id',
        secretAccessKey: 'your-secret-access-key',
        bucket: 'your-bucket-name',
        region: 'your-region'
      }
    }
  },
})
```
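Conversely, if you want to keep the filesystem during development but control where files land, a sketch using Unstorage's `fs` driver (the `base` path here is an example, not a required location):

```typescript
export default defineNuxtConfig({
  nitro: {
    devStorage: {
      blob: {
        driver: 'fs',
        // store development blobs in a local folder (example path)
        base: './.data/blob'
      }
    }
  },
})
```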
### hubBlob()

Server composable that returns a set of methods to manipulate the blob storage.

#### list()

Returns a paginated list of blobs (metadata only).

```ts
export default eventHandler(async () => {
  const { blobs } = await hubBlob().list({ limit: 10 })

  return blobs
})
```
Options:

- `limit` (number): the maximum number of blobs to return per request. Defaults to `1000`.
- `folded` (boolean): when `true`, the list will be folded using the `/` separator and the list of folders will be returned.

Returns a `BlobListResult`.
To fetch all blobs, you can use a `while` loop to fetch the next page until the `cursor` is `null`:

```ts
let blobs = []
let cursor = null

do {
  const res = await hubBlob().list({ cursor })
  blobs.push(...res.blobs)
  cursor = res.cursor
} while (cursor)
```
#### serve()

Returns a blob's data and sets the `Content-Type`, `Content-Length` and `ETag` headers.

```ts
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)

  return hubBlob().serve(event, pathname)
})
```

Assuming the route above is served under `/images/`, you can then reference blobs directly in your app:

```vue
<template>
  <img src="/images/my-image.jpg">
</template>
```
You can also set a `Content-Security-Policy` header to add an additional layer of security:

```ts
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)

  setHeader(event, 'Content-Security-Policy', 'default-src \'none\';')
  return hubBlob().serve(event, pathname)
})
```

Returns the blob's raw data and sets the `Content-Type` and `Content-Length` headers.
#### head()

Returns a blob's metadata.

```ts
const metadata = await hubBlob().head(pathname)
```

Returns a `BlobObject`.
#### get()

Returns a blob's body.

```ts
const blob = await hubBlob().get(pathname)
```

Returns a `Blob` or `null` if not found.
#### put()

Uploads a blob to the storage.

```ts
export default eventHandler(async (event) => {
  const form = await readFormData(event)
  const file = form.get('file') as File

  if (!file || !file.size) {
    throw createError({ statusCode: 400, message: 'No file provided' })
  }

  ensureBlob(file, {
    maxSize: '1MB',
    types: ['image']
  })

  return hubBlob().put(file.name, file, {
    addRandomSuffix: false,
    prefix: 'images'
  })
})
```
See an example on the Vue side:

```vue
<script setup lang="ts">
async function uploadImage (e: Event) {
  const form = e.target as HTMLFormElement

  await $fetch('/api/files', {
    method: 'POST',
    body: new FormData(form)
  }).catch((err) => alert('Failed to upload image:\n' + err.data?.message))

  form.reset()
}
</script>

<template>
  <form @submit.prevent="uploadImage">
    <label>Upload an image: <input type="file" name="image"></label>
    <button type="submit">
      Upload
    </button>
  </form>
</template>
```
Options:

- `addRandomSuffix` (boolean): when `true`, a random suffix will be added to the blob's name. Defaults to `false`.
- `prefix` (string): a prefix to prepend to the blob's pathname.

Returns a `BlobObject`.
#### del()

Deletes a blob with its pathname.

```ts
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)

  await hubBlob().del(pathname)

  return sendNoContent(event)
})
```
You can also delete multiple blobs at once by providing an array of pathnames:

```ts
await hubBlob().del(['images/1.jpg', 'images/2.jpg'])
```

You can also use the `delete()` method as an alias of `del()`. Returns nothing.
#### handleUpload()

This is an "all in one" function that validates a `Blob` by checking its size and type, then uploads it to the storage. It pairs with the `useUpload()` Vue composable and can be used to handle file uploads in API routes.

```ts
export default eventHandler(async (event) => {
  return hubBlob().handleUpload(event, {
    formKey: 'files', // read file or files from the `formKey` field of the request body (the body should be a `FormData` object)
    multiple: true, // when `true`, the `formKey` field will be an array of `Blob` objects
    ensure: {
      types: ['image/jpeg', 'image/png'], // allowed types of the file
    },
    put: {
      addRandomSuffix: true
    }
  })
})
```
```vue
<script setup lang="ts">
const upload = useUpload('/api/blob', { method: 'PUT' })

async function onFileSelect(event: Event) {
  const uploadedFiles = await upload(event.target as HTMLInputElement)
  // file uploaded successfully
}
</script>

<template>
  <input type="file" name="file" multiple accept="image/jpeg, image/png" @change="onFileSelect">
</template>
```
Options:

- `formKey` (string): the form field to read files from. Defaults to `'files'`.
- `multiple` (boolean): when `true`, the `formKey` field will be an array of `Blob` objects.
- `ensure` (object): see `ensureBlob()` options for more details.
- `put` (object): see `put()` options for more details.

Returns a `BlobObject`, or an array of `BlobObject` if `multiple` is `true`. Throws an error if the file doesn't meet the requirements.
#### handleMultipartUpload()

Handles the requests needed to support multipart uploads.

```ts
export default eventHandler(async (event) => {
  return await hubBlob().handleMultipartUpload(event)
})
```
Make sure your server route includes the `[action]` and `[...pathname]` params. On the client side, you can use the `useMultipartUpload()` composable to upload a file in parts:

```vue
<script setup lang="ts">
async function uploadFile(file: File) {
  const upload = useMultipartUpload('/api/files/multipart')

  const { progress, completed, abort } = upload(file)
}
</script>
```
See `useMultipartUpload()` for usage details.

Options:

- `addRandomSuffix` (boolean): when `true`, a random suffix will be added to the blob's name. Defaults to `false`.

#### createMultipartUpload()

We recommend using `handleMultipartUpload()` to handle the multipart upload requests. `createMultipartUpload()` starts a new multipart upload:
```ts
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)

  const mpu = await hubBlob().createMultipartUpload(pathname)

  return {
    uploadId: mpu.uploadId,
    pathname: mpu.pathname,
  }
})
```
Options:

- `addRandomSuffix` (boolean): when `true`, a random suffix will be added to the blob's name. Defaults to `true`.

Returns a `BlobMultipartUpload`.
#### resumeMultipartUpload()

We recommend using `handleMultipartUpload()` to handle the multipart upload requests. `resumeMultipartUpload()` continues processing of an unfinished multipart upload.
To upload a part of the multipart upload, you can use the `uploadPart()` method:

```ts
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)
  const { uploadId, partNumber } = getQuery(event)

  const contentLength = Number(getHeader(event, 'content-length') || '0')
  const stream = getRequestWebStream(event)!
  const body = await streamToArrayBuffer(stream, contentLength)

  const mpu = hubBlob().resumeMultipartUpload(pathname, uploadId)
  return await mpu.uploadPart(partNumber, body)
})
```
Complete the upload by calling the `complete()` method:

```ts
export default eventHandler(async (event) => {
  const { pathname, uploadId } = getQuery(event)
  const parts = await readBody(event)

  const mpu = hubBlob().resumeMultipartUpload(pathname, uploadId)
  return await mpu.complete(parts)
})
```
If you want to cancel the upload, call the `abort()` method:

```ts
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)
  const { uploadId } = getQuery(event)

  const mpu = hubBlob().resumeMultipartUpload(pathname, uploadId)
  await mpu.abort()

  return sendNoContent(event)
})
```
A simple example of a multipart upload on the client with the routes above:

```ts
async function uploadLargeFile(file: File) {
  const chunkSize = 10 * 1024 * 1024 // 10MB

  const count = Math.ceil(file.size / chunkSize)
  const { pathname, uploadId } = await $fetch(
    `/api/files/multipart/${file.name}`,
    { method: 'POST' },
  )

  const uploaded = []

  for (let i = 0; i < count; i++) {
    const start = i * chunkSize
    const end = Math.min(start + chunkSize, file.size)
    const partNumber = i + 1
    const chunk = file.slice(start, end)

    const part = await $fetch(
      `/api/files/multipart/${pathname}`,
      {
        method: 'PUT',
        query: { uploadId, partNumber },
        body: chunk,
      },
    )

    uploaded.push(part)
  }

  return await $fetch(
    '/api/files/multipart/complete',
    {
      method: 'POST',
      query: { pathname, uploadId },
      body: { parts: uploaded },
    },
  )
}
```
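The chunking arithmetic in the loop above can be isolated into a small pure helper. This is a sketch; `partRanges` is our name for illustration, not part of the NuxtHub API:

```typescript
// Hypothetical helper: compute the byte ranges and part numbers for a
// multipart upload, mirroring the loop in the example above.
function partRanges(fileSize: number, chunkSize = 10 * 1024 * 1024) {
  const count = Math.ceil(fileSize / chunkSize)
  const parts: { partNumber: number; start: number; end: number }[] = []

  for (let i = 0; i < count; i++) {
    parts.push({
      partNumber: i + 1, // part numbers are 1-based
      start: i * chunkSize,
      end: Math.min((i + 1) * chunkSize, fileSize), // last part may be smaller
    })
  }
  return parts
}
```

For example, a 25MB file with the default 10MB chunk size yields three parts, the last one 5MB.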
Returns a `BlobMultipartUpload`.
### ensureBlob()

`ensureBlob()` is a handy util to validate a `Blob` by checking its size and type:

```ts
// Will throw an error if the file is not an image or is larger than 1MB
ensureBlob(file, { maxSize: '1MB', types: ['image'] })
```
Options (at least `maxSize` or `types` should be provided):

- `maxSize` (string): the maximum size of the file, formatted as `(1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024) + (B | KB | MB | GB)`, e.g. `'512KB'`, `'1MB'`, `'2GB'`.
- `types` (array): the allowed types of the file, e.g. `['image/jpeg']`.

Returns nothing. Throws an error if the file doesn't meet the requirements.
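To make the size format concrete, here is a sketch of how such a size string could map to bytes. `parseSize` is an illustrative helper of our own, not part of the NuxtHub API:

```typescript
// Hypothetical helper: convert a size string such as '512KB' or '1MB' to bytes.
function parseSize(size: string): number {
  const match = /^(\d+)(B|KB|MB|GB)$/.exec(size)
  if (!match) throw new Error(`Invalid size: ${size}`)

  const units: Record<string, number> = { B: 1, KB: 1024, MB: 1024 ** 2, GB: 1024 ** 3 }
  return Number(match[1]) * units[match[2]]
}
```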
Note: `ensureBlob()` is a server utility (usable in the `server/` directory).

### useUpload()

`useUpload` handles file uploads in your Nuxt application.
```vue
<script setup lang="ts">
const upload = useUpload('/api/blob', { method: 'PUT' })

async function onFileSelect({ target }: Event) {
  const uploadedFiles = await upload(target as HTMLInputElement)
  // file uploaded successfully
}
</script>

<template>
  <input
    accept="image/jpeg, image/png"
    type="file"
    name="file"
    multiple
    @change="onFileSelect"
  >
</template>
```
Options:

- `formKey` (string): the form field to read files from. Defaults to `'files'`.
- `multiple` (boolean): defaults to `true`.

Returns an upload function that can be used to upload files:

```ts
const { completed, progress, abort } = upload(file)
const data = await completed
```
### useMultipartUpload()

Application composable that creates a multipart upload helper:

```ts
export const mpu = useMultipartUpload('/api/files/multipart')
```

The endpoint should be handled by `handleMultipartUpload()` on the server.

Options:

- `partSize`: the size of each part. Defaults to `10MB`.
- `concurrent`: the number of parts uploaded concurrently. Defaults to `1`.
- `maxRetry`: the maximum number of retries per part. Defaults to `3`.
- `fetchOptions`: `query` and `headers` will be merged with the options provided by the uploader.

Returns a `MultipartUpload` function that can be used to upload a file in parts:

```ts
const { completed, progress, abort } = mpu(file)
const data = await completed
```
### Types

#### BlobObject

```ts
interface BlobObject {
  pathname: string
  contentType: string | undefined
  size: number
  httpEtag: string
  uploadedAt: Date
  httpMetadata: Record<string, string>
  customMetadata: Record<string, string>
  url: string | undefined
}
```
#### BlobMultipartUpload

```ts
export interface BlobMultipartUpload {
  pathname: string
  uploadId: string
  uploadPart(
    partNumber: number,
    value: string | ReadableStream<any> | ArrayBuffer | ArrayBufferView | Blob
  ): Promise<BlobUploadedPart>
  abort(): Promise<void>
  complete(uploadedParts: BlobUploadedPart[]): Promise<BlobObject>
}
```
#### BlobUploadedPart

```ts
export interface BlobUploadedPart {
  partNumber: number
  etag: string
}
```
#### MultipartUploader

```ts
export type MultipartUploader = (file: File) => {
  completed: Promise<SerializeObject<BlobObject> | undefined>
  progress: Readonly<Ref<number>>
  abort: () => Promise<void>
}
```
#### BlobListResult

```ts
interface BlobListResult {
  blobs: BlobObject[]
  hasMore: boolean
  cursor?: string
  folders?: string[]
}
```
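The cursor-based pagination implied by `BlobListResult` can be drained generically. A sketch assuming only the `blobs`/`hasMore`/`cursor` shape above; `listAll` and `Page` are our names for illustration, not NuxtHub APIs:

```typescript
// Hypothetical helper: collect every item from a cursor-paginated list API
// shaped like BlobListResult.
interface Page<T> {
  blobs: T[]
  hasMore: boolean
  cursor?: string
}

async function listAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = []
  let cursor: string | undefined

  do {
    const page = await fetchPage(cursor) // first call passes `undefined`
    all.push(...page.blobs)
    cursor = page.hasMore ? page.cursor : undefined
  } while (cursor)

  return all
}
```

With `hubBlob()`, this could be called as `listAll((cursor) => hubBlob().list({ cursor }))`, since `BlobListResult` matches the `Page<BlobObject>` shape.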