Boto3 Introduction

Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python. It allows Python developers to write software that makes use of services like Amazon S3, Amazon EC2, and many others. Boto3 provides an easy-to-use API for interacting with AWS services using Python code.


Key features of AWS Python SDK Boto3 include:

  1. Comprehensive AWS Service Coverage: Boto3 supports a wide range of AWS services, including Amazon S3 (Simple Storage Service), Amazon EC2 (Elastic Compute Cloud), Amazon DynamoDB (NoSQL database), Amazon RDS (Relational Database Service), AWS Lambda, AWS Identity and Access Management (IAM), and more.
  2. Ease of Use: Boto3 is designed to be developer-friendly, providing a high-level API that abstracts the underlying complexities of AWS service interactions. This makes it easier for Python developers to integrate AWS services into their applications.
  3. Resource-Oriented Interface: Boto3 introduces the concept of resource objects, which represent AWS resources such as an S3 bucket or an EC2 instance. These resource objects provide a more Pythonic and object-oriented way to interact with AWS resources.
  4. Low-Level and High-Level APIs: Boto3 provides both low-level and high-level APIs. The low-level API closely mirrors the AWS service APIs, allowing developers to have fine-grained control. The high-level API, on the other hand, abstracts away many of the details, making it easier to perform common tasks with less code.
  5. Authentication and Credentials Management: Boto3 handles AWS authentication and credential management. It can automatically use credentials from various sources, such as environment variables, AWS configuration files, and IAM roles associated with an EC2 instance.
  6. Extensibility: Boto3 can be extended and customized to support additional AWS services or custom use cases. You can also contribute to the open-source project on GitHub.

Install Boto3

To use Boto3, you need to install the library using the following command.


# Install boto3
pip install boto3

Once installed, you can import the Boto3 library in your Python code and interact with AWS services using the provided API.


# import boto3
import boto3
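
Optionally, you can verify the installation by printing the installed version; Boto3 exposes a __version__ attribute.


# Print the installed Boto3 version to verify the installation
print(boto3.__version__)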

Boto3 Session

A session is an object in the Boto3 API that stores AWS configuration state, including the AWS access key ID, secret access key, and session token. The boto3.Session class is used to create a session, and it provides a way to customize and manage the configuration settings for AWS service clients.

Here’s a basic example of creating a Boto3 session. You can pass various parameters to the boto3.Session constructor to customize the configuration, such as specifying the AWS access key, secret key, and region.


# Import boto3
import boto3

# Create boto3 Session
session = boto3.Session(
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
    region_name='your_region'
)
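
Hardcoding credentials in source code is discouraged. If you create a session with no arguments, Boto3 resolves credentials through its default chain (environment variables, the shared ~/.aws/credentials file, or an IAM role attached to the instance), as sketched below.


# Create a session using the default credential chain
# (environment variables, ~/.aws/credentials, or an IAM role)
session = boto3.Session()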

In the example above, a Boto3 session is created with specific AWS credentials and a region. In the following sections, this session is used to create clients and interact with Amazon S3.

Create Boto3 Client

Once you have a session, you can use it to create AWS service clients. Clients are specific to AWS services such as S3, EC2, and DynamoDB. The boto3.client is a lower-level interface for making direct API calls to AWS services; it maps closely to each service's API operations, giving you fine-grained control, whereas the higher-level boto3.resource abstracts many of those details.


# Creating an S3 client using the session
s3_client = session.client('s3')

The session.client('s3') creates an S3 client using the Boto3 session.
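
You can also create a client without an explicit session; calling boto3.client('s3') uses a default session under the hood.


# Create an S3 client directly; Boto3 creates a default session internally
s3_client = boto3.client('s3')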

Access AWS Resources using Boto3

Once you have a Boto3 S3 client, use the list_buckets() method to retrieve the list of S3 buckets in your account. This method returns a response object that contains details about the buckets.


# List all S3 buckets
response = s3_client.list_buckets()
print('S3 Buckets:', response['Buckets'])

# Output:
#S3 Buckets: [{'Name': 'bucket1'}, {'Name': 'bucket2'}, ...]

Explanation:

  1. Calling list_buckets Method: The s3_client.list_buckets() method is called to retrieve information about all the S3 buckets in the AWS account associated with the provided credentials.
  2. Accessing Bucket Information in the Response: The response object is a dictionary-like object containing various details about the S3 buckets. The specific information we are interested in is under the ‘Buckets’ key.
  3. Printing Bucket Information: The print('S3 Buckets:', response['Buckets']) statement prints the list of bucket entries; each entry is a dictionary containing the bucket Name (and its CreationDate). The response['Buckets'] retrieves this list from the response object.
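
To print just the bucket names, iterate over the list. Below is a minimal sketch with basic error handling; Boto3 raises botocore's ClientError when an API call fails.


# Print only the bucket names, handling API errors
from botocore.exceptions import ClientError

try:
    response = s3_client.list_buckets()
    for bucket in response['Buckets']:
        print(bucket['Name'])
except ClientError as e:
    print('Failed to list buckets:', e)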

Create Boto3 Resource

The boto3.resource is a higher-level, more Pythonic interface for interacting with AWS services. It provides an object-oriented approach to working with AWS resources, abstracting away many of the low-level details of the service API. The Resource class allows you to interact with AWS resources in a more intuitive and natural way than the lower-level Client interface. Below is an example.


# Using Resource
import boto3

# Create an S3 resource
s3_resource = boto3.resource('s3')

# List all S3 buckets
for bucket in s3_resource.buckets.all():
    print('S3 Bucket:', bucket.name)
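
The resource interface also makes it easy to drill into a specific bucket. For example, the following lists the objects in a bucket (the bucket name 'my-bucket' is a placeholder).


# List the objects in a specific bucket ('my-bucket' is a placeholder)
bucket = s3_resource.Bucket('my-bucket')
for obj in bucket.objects.all():
    print('Object key:', obj.key)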

Using AWS Multiple Profiles with Boto3

If you have multiple profiles in your AWS credentials file, you can use the profile_name parameter to select a specific profile when creating a session.


# Using multiple profiles
session = boto3.Session(profile_name='your_profile')
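
The profile name refers to a named section in the shared AWS credentials file (typically ~/.aws/credentials). Below is a minimal sketch of the file layout, with placeholder key values, along with a way to list the profiles Boto3 can see.


# Example ~/.aws/credentials file with a named profile:
# [default]
# aws_access_key_id = ...
# aws_secret_access_key = ...
#
# [your_profile]
# aws_access_key_id = ...
# aws_secret_access_key = ...

# List the profile names Boto3 can find
print(boto3.Session().available_profiles)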

Conclusion

In this article, you learned what Boto3 is and how to interact with AWS from Python using examples. Boto3 provides an easy-to-use API for interacting with AWS services using Python code.

Happy Learning !!

Naveen Nelamali

Naveen Nelamali (NNK) is a Data Engineer with 20+ years of experience in transforming data into actionable insights. Over the years, he has honed his expertise in designing, implementing, and maintaining data pipelines with frameworks like Apache Spark, PySpark, Pandas, R, Hive, and Machine Learning. Naveen's journey in the field of data engineering has been one of continuous learning, innovation, and a strong commitment to data integrity. In this blog, he shares his experiences with data as he comes across them. Follow Naveen @ LinkedIn and Medium