A first look at the brand new Amazon Bedrock Studio

A few days ago, AWS announced the preview launch of Amazon Bedrock Studio, a web interface for developers to collaborate and build generative AI applications. According to the announcement, Amazon Bedrock Studio provides a rapid prototyping environment and streamlines access to multiple foundation models, knowledge bases, agents, and guardrails.

In my view, it is crucial for developers to have the opportunity to experiment with generative AI and begin to understand how it works, even without being experts in machine learning. Generative AI holds immense potential and can have a powerful impact on businesses. Therefore, grasping the fundamental concepts behind this technology is essential. A playground environment is undoubtedly valuable in lowering the entry barrier, as it allows developers to rapidly prototype and explore generative AI applications. The ability to quickly iterate and test ideas is a significant advantage, enabling developers to grasp the core principles of generative AI more effectively.

With all that in mind, I couldn't wait to get my hands on Amazon Bedrock Studio and see what it's all about. So I jumped right in, and here's how my first look went.

First things first: Limitations

The first thing to note is that, at the moment, it's not entirely straightforward to try out Amazon Bedrock Studio.

Currently, it's only available in the us-east-1 and us-west-2 regions. While this might not seem like an issue, the login process requires SSO credentials provided by your AWS Organization's AWS IAM Identity Center. The real problem is that, at least for now, AWS IAM Identity Center must be configured in the same region where you want to use Bedrock Studio (so, either us-east-1 or us-west-2); and an AWS Organization cannot have more than one global AWS IAM Identity Center. So, if your organization already has an AWS IAM Identity Center set up in a different region... unfortunately, you'll have to wait until Bedrock Studio is enabled in your region. 🤷

Amazon Bedrock Studio is in preview release and is subject to change, so I expect this limitation to be overcome... hopefully soon.

Let's have a look!

Amazon Bedrock Studio provides access to all the foundation models offered by Amazon Bedrock, including Anthropic Claude, Mistral, Meta Llama 2, Cohere, and more, allowing you to evaluate and experiment with different models. However, the most intriguing aspect is the ability to use Bedrock Studio as a playground for creating apps (essentially chatbots) that not only have access to these models but can also leverage various functionalities provided by Bedrock. These include Knowledge Bases, Guardrails, and Functions, which we'll explore in detail.

Knowledge Bases

Knowledge Bases in Amazon Bedrock provide an out-of-the-box implementation of retrieval-augmented generation (RAG), a technique where relevant information is retrieved from data sources to enhance the generation of model responses. Essentially, Knowledge Bases allow you to leverage RAG without the need for manual configuration. You can simply upload your documents, which are then ingested and embedded using the Amazon Titan Embeddings model and stored in an Amazon OpenSearch Serverless collection (though these technical details are abstracted away).
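To make the pattern concrete, here is a minimal, toy sketch of what RAG does under the hood. A real Knowledge Base embeds your documents and performs semantic search; this sketch fakes retrieval with simple keyword overlap (all names here are made up for illustration) just to show the prompt-augmentation step that happens before the model is called.

```python
# Minimal sketch of the RAG pattern that Knowledge Bases automate.
# A real Knowledge Base uses embeddings and vector search; here we
# fake retrieval with keyword overlap, just to illustrate how the
# retrieved context is stitched into the prompt.

def retrieve(query: str, documents: list[str], top_n: int = 1) -> list[str]:
    """Return the documents sharing the most words with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_n]

def augment_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context to the user query before calling the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Amazon Bedrock Studio is a web interface for building generative AI apps.",
    "AWS Lambda lets you run code without provisioning servers.",
]
prompt = augment_prompt("What is Amazon Bedrock Studio?", docs)
```

The augmented prompt is what actually reaches the foundation model, which is why the responses end up grounded in your documents rather than only in the model's training data.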

The true power of Knowledge Bases (and RAG in general) lies in their ability to augment prompts provided to foundation models with contextual information from your uploaded documents. This ensures that the generated responses are informed by the relevant data, enhancing their accuracy and relevance. Additionally, responses from Knowledge Bases include citations, allowing users to verify the source text and ensure factual accuracy, promoting transparency and trustworthiness.

Guardrails

Guardrails in Amazon Bedrock allow you to implement safeguards for your generative AI applications based on your specific use cases and responsible AI policies. You can create multiple Guardrails tailored to different scenarios and apply them across various foundation models, ensuring a consistent user experience and standardized safety controls across your generative AI applications. Guardrails enable you to configure denied topics to disallow undesirable subjects and content filters to block harmful content in user inputs and model responses. Note that Guardrails currently work with text-based foundation models only.

A Guardrail consists of the following policies to avoid content that falls into undesirable or harmful categories:

  1. Denied Topics: Define a set of topics that are undesirable in the context of your application, which will be blocked if detected in user queries or model responses.

  2. Content Filters: Adjust filter strengths to filter out harmful content in input prompts or model responses.

  3. Word Filters: Configure filters to block undesirable words, phrases, and profanity.

  4. Sensitive Information Filters: Block or mask personally identifiable information (PII) and use regular expressions to define and block or mask patterns that might correspond to sensitive information.

Additionally, you can configure the messages to be returned to the user if a user input or model response violates the policies defined in the Guardrail.
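To give a feel for what a sensitive-information filter does, here is a toy, regex-based masking sketch. This is emphatically not how Bedrock implements Guardrails internally; it just mimics the block-or-mask behavior described above, with made-up patterns for emails and US-style phone numbers.

```python
import re

# Illustrative sketch of a "sensitive information filter": mask email
# addresses and US-style phone numbers before text reaches the model.
# This is NOT Bedrock's implementation -- just a toy version of the
# mask behavior described above.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with a placeholder tag, e.g. {EMAIL}."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"{{{label}}}", text)
    return text

masked = mask_pii("Contact me at jane.doe@example.com or 555-123-4567.")
# masked == "Contact me at {EMAIL} or {PHONE}."
```

In a real Guardrail you would configure these patterns declaratively (built-in PII types plus custom regexes) and choose whether matches are masked or cause the request to be blocked outright.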

Functions

Function calling is an interesting feature that allows models to call functions to access specific capabilities or modules when handling a prompt. This enables the integration of generative AI models with "fresh" information that cannot be part of the model's training or included in Knowledge Bases. Instead, this information can be obtained by invoking APIs.

For example, you could use function calling to incorporate real-time data such as weather information, sports results, or other types of up-to-date information into the model's responses. This way, the model can provide more accurate and timely responses by leveraging external data sources beyond its initial training data or the Knowledge Bases.

Functions essentially extend the capabilities of generative AI models by allowing them to dynamically access and incorporate external information sources, ensuring that the generated responses are not limited by the static nature of the model's training data or Knowledge Bases.

An even more compelling aspect of functions is that you don't even need to write code to invoke external APIs. AWS takes care of creating a Lambda function for you automatically. All you have to do is provide an OpenAPI schema, and the necessary infrastructure is created behind the scenes. Amazon Bedrock Studio then automatically generates and manages the Lambda function, enabling seamless integration of external information into the model's responses.
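For a sense of what "provide an OpenAPI schema" looks like in practice, here is a hypothetical schema for a weather-lookup function. The path, operation ID, and parameter names are all invented for illustration; the point is simply that a standard OpenAPI 3.0 document is enough for Bedrock Studio to wire up the function for you.

```python
import json

# Hypothetical OpenAPI schema for a weather-lookup function, in the
# spirit of what Bedrock Studio asks you to provide. The path, the
# operationId, and the parameter names are made up for illustration.
weather_api_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getCurrentWeather",
                "description": "Get the current weather for a city.",
                "parameters": [
                    {
                        "name": "city",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {
                    "200": {
                        "description": "Current weather conditions",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "properties": {
                                        "temperature_c": {"type": "number"},
                                        "conditions": {"type": "string"},
                                    },
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

schema_json = json.dumps(weather_api_schema, indent=2)
```

The `description` fields matter more than they look: they are what the model uses to decide when a function is relevant to the user's request.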

It's worth noting that I encountered a potential limitation during my testing. The Lambda function was deployed with 10 GB of RAM, which became an issue since I was testing Bedrock Studio on an account with no other Lambda functions, where the default memory limit for functions is 3 GB. While this limit is a soft quota, it cannot be increased through the standard service quota increase request process. Instead, according to the documentation, the limit is raised automatically based on function usage. To address this, I opened a standard support ticket (as I don't have access to enterprise support) explaining the situation. Within a few hours, my account was able to create functions with memory exceeding 3 GB. However, it was quite inconvenient to see function creation fail without a specific explanation: I had to dig into CloudFormation to understand the reason for the failure and resort to a support ticket, which is not the standard procedure for quota increases.

Building apps with flexible components

So basically, within Amazon Bedrock Studio you can create applications that seamlessly integrate all the capabilities we've talked about, composing them according to your specific needs. You can leverage multiple Knowledge Bases, incorporate up to 5 functions simultaneously, and create tailored Guardrails to evaluate different outcomes. Additionally, you have the flexibility to adjust model parameters such as temperature, top-p, and top-k, which control the randomness and diversity of the generated responses.

The temperature parameter regulates the "creativity" of the model's outputs, while top-p and top-k control nucleus sampling and top-k sampling, respectively, narrowing the set of tokens the model can pick from.
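These parameters are easy to demystify with a few lines of code. Below is a pure-Python sketch of how temperature and top-k reshape a next-token distribution; real models apply the same mechanics over vocabularies of tens of thousands of tokens, and the four logits here are made up for illustration (top-p works analogously, keeping the smallest set of tokens whose cumulative probability exceeds p).

```python
import math

# Toy sketch of how temperature and top-k reshape a next-token
# distribution. The four logits below are made up for illustration.

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert logits to probabilities; lower temperature sharpens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs: list[float], k: int) -> list[float]:
    """Zero out all but the k most likely tokens, then renormalize."""
    threshold = sorted(probs, reverse=True)[k - 1]
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

logits = [2.0, 1.0, 0.5, 0.1]
cold = softmax(logits, temperature=0.5)    # sharper: strongly favors the top token
hot = softmax(logits, temperature=2.0)     # flatter: more "creative" sampling
top2 = top_k_filter(softmax(logits), k=2)  # only two tokens remain possible
```

Lowering the temperature concentrates probability on the most likely token (`cold[0]` is much larger than `hot[0]`), while top-k simply removes the unlikely tail from consideration.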

Moreover, Amazon Bedrock Studio allows you to organize workspaces, enabling you to create and manage multiple apps within a collaborative environment, promoting teamwork and shared experimentation across your organization.

Wrapping up: is it worth a try?

While the capabilities within Amazon Bedrock Studio aren't entirely new, having been previously available through the Amazon Bedrock offering, the true value proposition lies in the seamless playground environment that allows users to experiment with all these features together. This unified approach significantly lowers the barrier to entry, enabling even those new to generative AI to easily explore the full range of capabilities without major hurdles.

Although Amazon Bedrock Studio is still in preview with limited access at the moment, its availability is expected to expand soon, given AWS's strong focus on enabling developers to build generative AI solutions. As the demand for these transformative technologies continues to grow, Bedrock Studio positions itself as a valuable resource, empowering developers and organizations to dive into generative AI with a comprehensive set of integrated tools at their disposal. For those looking to get hands-on with the latest in generative AI, Amazon Bedrock Studio is certainly worth a try.