I have a variable that holds an AWS S3 URL.


I want to get the bucket_name in one variable and the rest, i.e. /folder1/folder2/file1.json, in another variable. I tried regular expressions and could get the bucket_name as below, but I'm not sure if there is a better way.

import re

m = re.search(r'(?<=s3://)[^/]+', 's3://bucket_name/folder1/folder2/file1.json')

How do I get the rest, i.e. folder1/folder2/file1.json?

I have checked whether boto3 has a feature to extract the bucket_name and key from the URL, but couldn't find one.

Since it’s just a normal URL, you can use urlparse to get all the parts of the URL.

>>> from urlparse import urlparse
>>> o = urlparse('s3://bucket_name/folder1/folder2/file1.json', allow_fragments=False)
>>> o
ParseResult(scheme='s3', netloc='bucket_name', path='/folder1/folder2/file1.json', params='', query='', fragment='')
>>> o.netloc
'bucket_name'
>>> o.path
'/folder1/folder2/file1.json'

You may have to remove the leading slash from the key, as the next answer suggests.
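Putting the two pieces together (urlparse plus removing the leading slash), a minimal Python 3 sketch:

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

o = urlparse('s3://bucket_name/folder1/folder2/file1.json', allow_fragments=False)
bucket = o.netloc
key = o.path.lstrip('/')  # drop the leading slash urlparse keeps in .path
print(bucket)  # bucket_name
print(key)     # folder1/folder2/file1.json
```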


With Python 3 urlparse moved to urllib.parse so use:

from urllib.parse import urlparse

Here’s a class that takes care of all the details.

try:
    from urlparse import urlparse
except ImportError:
    from urllib.parse import urlparse


class S3Url(object):
    """
    >>> s = S3Url("s3://bucket/hello/world")
    >>> s.bucket
    'bucket'
    >>> s.key
    'hello/world'
    >>> s.url
    's3://bucket/hello/world'

    >>> s = S3Url("s3://bucket/hello/world?qwe1=3#ddd")
    >>> s.bucket
    'bucket'
    >>> s.key
    'hello/world?qwe1=3#ddd'
    >>> s.url
    's3://bucket/hello/world?qwe1=3#ddd'

    >>> s = S3Url("s3://bucket/hello/world#foo?bar=2")
    >>> s.key
    'hello/world#foo?bar=2'
    >>> s.url
    's3://bucket/hello/world#foo?bar=2'
    """

    def __init__(self, url):
        self._parsed = urlparse(url, allow_fragments=False)

    @property
    def bucket(self):
        return self._parsed.netloc

    @property
    def key(self):
        if self._parsed.query:
            return self._parsed.path.lstrip('/') + '?' + self._parsed.query
        else:
            return self._parsed.path.lstrip('/')

    @property
    def url(self):
        return self._parsed.geturl()

A solution that works without urllib or re (it also handles the leading slash):

def split_s3_path(s3_path):
    path_parts = s3_path.replace("s3://", "").split("/")
    bucket = path_parts.pop(0)
    key = "/".join(path_parts)
    return bucket, key

To run:

bucket, key = split_s3_path("s3://my-bucket/some_folder/another_folder/my_file.txt")


bucket: my-bucket
key: some_folder/another_folder/my_file.txt

For those who, like me, were trying to use urlparse to extract the key and bucket in order to create an object with boto3, there's one important detail: remove the slash from the beginning of the key.

import boto3
from urlparse import urlparse  # Python 3: from urllib.parse import urlparse

o = urlparse('s3://bucket_name/folder1/folder2/file1.json')
bucket = o.netloc
key = o.path
client = boto3.client('s3')
client.put_object(Body='test', Bucket=bucket, Key=key.lstrip('/'))

It took me a while to realize the problem, because boto3 doesn't throw any exception; it silently creates keys that start with a slash.
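To see the difference the lstrip makes, compare the raw path with the stripped key (no boto3 call is needed for this part):

```python
from urllib.parse import urlparse

o = urlparse('s3://bucket_name/folder1/folder2/file1.json')
raw_key = o.path                 # keeps the leading slash from the URL path
clean_key = o.path.lstrip('/')   # the key you actually want to hand to boto3
print(raw_key)    # /folder1/folder2/file1.json
print(clean_key)  # folder1/folder2/file1.json
```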

Pretty easy to accomplish with a single line of built-in string methods…

s3_filepath = "s3://bucket-name/and/some/key.txt"
bucket, key = s3_filepath.replace("s3://", "").split("/", 1)
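One caveat (my addition, not part of the original snippet): split raises ValueError when the URI has no key part, e.g. a bare bucket URI. str.partition avoids that:

```python
s3_filepath = "s3://bucket-name"  # no key part at all
# partition never raises; it returns empty strings for the missing parts
bucket, _, key = s3_filepath.replace("s3://", "").partition("/")
print(bucket)  # bucket-name
print(key)     # '' (empty string when there is no key)
```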

If you want to do it with regular expressions, you can do the following:

>>> import re
>>> uri = 's3://my-bucket/my-folder/my-object.png'
>>> match = re.match(r's3:\/\/(.+?)\/(.+)', uri)
>>> match.group(1)
'my-bucket'
>>> match.group(2)
'my-folder/my-object.png'

This has the advantage that you can check for the s3 scheme rather than allowing anything there.
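For example, the same pattern simply fails to match a URL with a different scheme (a quick sketch):

```python
import re

pattern = r's3://(.+?)/(.+)'
# re.match anchors at the start of the string, so a non-s3 scheme returns None
print(re.match(pattern, 'https://example.com/my-object.png'))  # None
m = re.match(pattern, 's3://my-bucket/my-folder/my-object.png')
print(m.group(1), m.group(2))  # my-bucket my-folder/my-object.png
```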

This is a nice project:

s3path is a pathlib extension for the AWS S3 service.

>>> from s3path import S3Path
>>> path = S3Path.from_uri('s3://bucket_name/folder1/folder2/file1.json')
>>> print(path.bucket)
/bucket_name
>>> print(path.key)
folder1/folder2/file1.json
>>> print(list(path.key.parents))
[S3Path('folder1/folder2'), S3Path('folder1'), S3Path('.')]

A more recent option is to use cloudpathlib, which implements pathlib functions for files on cloud services (including S3, Google Cloud Storage and Azure Blob Storage).

In addition to those functions, it’s easy to get the bucket and the key for your S3 paths.

from cloudpathlib import S3Path

path = S3Path("s3://bucket_name/folder1/folder2/file1.json")

path.bucket
#> 'bucket_name'

path.key
#> 'folder1/folder2/file1.json'

Here it is as a one-liner using regex:

import re

s3_path = "s3://bucket/path/to/key"

bucket, key = re.match(r"s3:\/\/(.+?)\/(.+)", s3_path).groups()
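Note that calling .groups() on a non-matching string raises AttributeError, because re.match returns None. A guarded variant (my wrapper, not from the answer):

```python
import re

def parse_s3(s3_path):
    m = re.match(r"s3://(.+?)/(.+)", s3_path)
    if m is None:
        raise ValueError("not an S3 URI: %s" % s3_path)
    return m.groups()

bucket, key = parse_s3("s3://bucket/path/to/key")
print(bucket, key)  # bucket path/to/key
```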

This can be done smoothly with slicing and a single split:

bucket_name, key = s3_uri[5:].split("/", 1)
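In full, assuming the URI always starts with the five characters "s3://":

```python
s3_uri = "s3://bucket_name/folder1/folder2/file1.json"
bucket_name, key = s3_uri[5:].split("/", 1)  # [5:] drops the "s3://" prefix
print(bucket_name)  # bucket_name
print(key)          # folder1/folder2/file1.json
```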

I use the following regex:

^(?:[s|S]3:\/\/)?([a-zA-Z0-9\._-]+)(?:\/)(.+)$

If it matches, the S3 parts are parsed as follows:

  • match group1 => S3 bucket name
  • match group2 => S3 object name

This pattern handles a bucket path with or without the s3:// URI prefix.

If you want to allow other legal bucket-name characters, modify the [a-zA-Z0-9_-] part of the pattern to include them as needed.
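A Python sketch of (what I take to be) the equivalent pattern, alongside the TypeScript version below:

```python
import re

# Assumed-equivalent Python form of the pattern described above; widen the
# character class if your bucket names use other legal characters.
S3_URI_PATTERN = r'^(?:[sS]3://)?([a-zA-Z0-9._-]+)/(.+)$'

m = re.match(S3_URI_PATTERN, 's3://my-bucket/path/to/file.txt')
print(m.group(1))  # my-bucket
print(m.group(2))  # path/to/file.txt
```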

Complete JS example (in TypeScript form):

const S3_URI_PATTERN = '^(?:[s|S]3:\\/\\/)?([a-zA-Z0-9\\._-]+)(?:\\/)(.+)$';

export interface S3UriParseResult {
  bucket: string;
  name: string;
}

export class S3Helper {
  /**
   * Parse an S3 object URI into its bucket and object name parts.
   * @param uri the S3 URI to parse
   */
  static parseUri(uri: string): S3UriParseResult {
    const re = new RegExp(S3_URI_PATTERN);
    const match = re.exec(uri);
    if (!match || (match && match.length !== 3)) {
      throw new Error('Invalid S3 object URI');
    }
    return {
      bucket: match[1],
      name: match[2],
    };
  }
}
The simplest approach I use is:

s1 = s.split("/", 3)
bucket = s1[2]
object_key = s1[3]
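This takes advantage of the double slash after s3: (which produces an empty string at index 1). For example:

```python
s = "s3://bucket_name/folder1/folder2/file1.json"
s1 = s.split("/", 3)  # split at the first three slashes only
print(s1)  # ['s3:', '', 'bucket_name', 'folder1/folder2/file1.json']
bucket = s1[2]
object_key = s1[3]
```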