Saturday, December 14, 2024

What would happen if we fused the capabilities of Amazon Bedrock and Amazon Redshift ML to create a groundbreaking AI ecosystem? By leveraging the strengths of both technologies, could we craft an innovative solution that empowers users to develop novel applications?

Amazon Redshift ML has been enhanced to support seamless integration with a range of large language models (LLMs). As part of these enhancements, Amazon Redshift now lets you invoke LLMs through simple SQL queries, so you can build applications more quickly. By combining state-of-the-art language models with familiar SQL workflows, this integration helps you bring the full potential of LLMs into your everyday analytics.

With this integration, you can use generative AI capabilities such as language translation, text summarization, content generation, customer classification, and sentiment analysis on your Redshift data with popular foundation models like Anthropic’s Claude, Amazon Titan, Meta’s Llama 2, and Mistral AI. You can use the CREATE EXTERNAL MODEL command to point to a text-based model on Amazon Bedrock, without requiring any model training or provisioning. Because you use familiar SQL syntax, integrating generative AI capabilities into your data analytics workflows is intuitive and efficient.

Solution overview

In this post, we use this Redshift ML feature to build a solution that generates personalized diet plans tailored to individual patients’ conditions and medications. The following sections describe the steps to build and run the solution.

To build the solution, follow these high-level steps:

  1. Load sample patients’ data.
  2. Prepare the prompt.
  3. Enable LLM access.
  4. Create a model referencing the LLM on Amazon Bedrock.
  5. Send the prompt to the LLM to generate a personalized diet plan for each patient.


Prerequisites

  1. An AWS account.
  2. An Amazon Redshift Serverless workgroup or a provisioned data warehouse. For setup instructions, follow the relevant getting-started steps for your situation. The Amazon Bedrock integration feature is supported in both Amazon Redshift provisioned and serverless.
  3. An IAM role configured for Amazon Redshift ML integration with Amazon Bedrock.
  4. The IAM role associated to the Redshift instance.
  5. Users should have the required permission to create models.

Implementation

Follow these implementation steps to integrate the feature. The sample data used in the implementation is hypothetical; the same approach can be adapted to your own datasets and use cases with similar characteristics.

You can execute the implementation steps in an Amazon Redshift Query Editor V2 notebook. If you’re using another SQL editor, you can copy and paste the SQL queries from the content of this post.

  1. Connect to your Amazon Redshift data warehouse using a SQL editor of your choice, such as Redshift Query Editor V2.
  2. Create the patientsinfo table and load sample data.
CREATE TABLE patientsinfo (
  pid INTEGER NOT NULL PRIMARY KEY,
  pname VARCHAR(100),
  situation VARCHAR(100),
  medicine VARCHAR(100)
);
  1. Download the sample data file, upload it to your Amazon Simple Storage Service (Amazon S3) bucket, and load the data into the patientsinfo table using the following COPY command.
Load sample data from Amazon S3:
COPY patientsinfo FROM 's3://<<your S3 bucket>>/sample_patientsinfo.csv' IAM_ROLE DEFAULT CSV DELIMITER ',' IGNOREHEADER 1;
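For reference, the COPY command above assumes a CSV whose columns line up with the patientsinfo table. The file name, rows, and values below are purely hypothetical; a quick Python sketch of what such a file could look like:

```python
import csv
import io

# Hypothetical rows matching the patientsinfo table layout
# (pid, pname, situation, medicine) -- one row per condition/medication pair.
rows = [
    (101, "Jane Doe", "Pre-diabetes", "Metformin"),
    (101, "Jane Doe", "Hypertension", "Lisinopril"),
    (102, "John Roe", "Chronic bronchitis", "Albuterol"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["pid", "pname", "situation", "medicine"])  # header; skipped by IGNOREHEADER 1
writer.writerows(rows)

csv_text = buf.getvalue()
print(csv_text)
```

Note that a patient with multiple conditions appears in multiple rows; the next steps aggregate these rows into one prompt per patient.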
  1. Fetch the patients’ conditions and medications using the following SQL:

SELECT
  pname,
  LISTAGG(DISTINCT situation, ',') WITHIN GROUP (ORDER BY pid) OVER (PARTITION BY pid) AS situations,
  LISTAGG(DISTINCT medicine, ',') WITHIN GROUP (ORDER BY pid) OVER (PARTITION BY pid) AS drugs
FROM patientsinfo;

The sample output shows the aggregated conditions and medications. The output contains multiple rows per patient, which are consolidated in subsequent steps.

  1. Build the prompt to combine patient, condition, and medication data into a single column value:
SELECT DISTINCT pname || ' has ' || situations || ' taking ' || drugs AS patient_prompt
FROM (
    SELECT pname,
           LISTAGG(DISTINCT situation, ', ') WITHIN GROUP (ORDER BY pid) OVER (PARTITION BY pid) AS situations,
           LISTAGG(DISTINCT medicine, ', ') WITHIN GROUP (ORDER BY pid) OVER (PARTITION BY pid) AS drugs
    FROM patientsinfo
);

The sample output shows the consolidated result, combining each patient’s name, conditions, and medications into a single column value.
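The aggregation and concatenation performed by the SQL above can be sketched in plain Python to make the transformation concrete. The rows below are hypothetical and only mirror the shape of the patientsinfo table; the sort order is illustrative (the SQL does not guarantee a particular ordering of the distinct values):

```python
from collections import defaultdict

# Hypothetical (pid, pname, situation, medicine) rows, one per pair.
rows = [
    (101, "Jane Doe", "Pre-diabetes", "Metformin"),
    (101, "Jane Doe", "Hypertension", "Lisinopril"),
    (102, "John Roe", "Chronic bronchitis", "Albuterol"),
]

# Group distinct conditions and medications per patient,
# like LISTAGG(DISTINCT ...) OVER (PARTITION BY pid).
patients = defaultdict(lambda: {"name": None, "conditions": set(), "medications": set()})
for pid, pname, condition, medication in rows:
    p = patients[pid]
    p["name"] = pname
    p["conditions"].add(condition)
    p["medications"].add(medication)

# Equivalent of: pname || ' has ' || situations || ' taking ' || drugs
prompts = {
    pid: f"{p['name']} has {', '.join(sorted(p['conditions']))} "
         f"taking {', '.join(sorted(p['medications']))}"
    for pid, p in patients.items()
}
print(prompts[101])
```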

  1. Create a materialized view with the preceding query as its definition. This step isn’t mandatory, but it makes querying the prompts more convenient. Note that you might see a message indicating that materialized views with this definition aren’t eligible for incremental refresh; you can safely ignore this message and proceed.
CREATE MATERIALIZED VIEW mv_prompts AUTO REFRESH YES
AS
SELECT DISTINCT pid, pname || ' has ' || situations || ' taking ' || drugs AS patient_prompt
FROM (
    SELECT pname, pid,
           LISTAGG(DISTINCT situation, ', ') WITHIN GROUP (ORDER BY pid) OVER (PARTITION BY pid) AS situations,
           LISTAGG(DISTINCT medicine, ', ') WITHIN GROUP (ORDER BY pid) OVER (PARTITION BY pid) AS drugs
    FROM patientsinfo
);
  1. Query the materialized view:
SELECT * FROM mv_prompts;

The materialized view presents the prompt data in a format that allows for easy querying and analysis.

To enable model access in Amazon Bedrock, complete the following steps:

  1. Navigate to the Amazon Bedrock console.
  2. In the navigation pane, choose Model access.
  3. Choose the option to modify model access. To access Amazon Bedrock FMs, you need specific permissions.
  4. For this example, use Anthropic’s Claude: enter Claude in the search field, select it from the list, and submit the request to enable access.


  1. Return to Amazon Redshift Query Editor V2 or, if you didn’t use Query Editor V2, to the SQL editor you used to connect to your Redshift data warehouse.
  2. Run the following SQL to create an external model referencing the anthropic.claude-v2 model on Amazon Bedrock. Refer to the Amazon Bedrock documentation for how to find the model ID.
CREATE EXTERNAL MODEL patient_recommendations
FUNCTION patient_recommendations_func
IAM_ROLE '<<provide the ARN of the IAM role created in the prerequisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'anthropic.claude-v2',
    PROMPT 'Generate personalized diet plan for the following patient:');
  1. Run the inference query, passing the prompts from the materialized view:
SELECT patient_recommendations_func(patient_prompt)
FROM mv_prompts LIMIT 2;
  1. The output shows a personalized diet plan generated for each patient. To view the full results, copy the cells into a text editor or export the output from Redshift Query Editor V2.

You may need to expand the column width to view the entire text.
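Under the hood, each call to the inference function turns into an Amazon Bedrock InvokeModel request, with the PROMPT setting prepended to the column value. The following Python sketch illustrates the request body this would imply for anthropic.claude-v2, using the public Claude text-completions format; the helper name, token limit, and sample prompt are assumptions, not part of the Redshift implementation:

```python
import json

def build_claude_v2_body(prompt_prefix: str, patient_prompt: str,
                         max_tokens: int = 300) -> str:
    """Sketch of the Bedrock request body implied by a call like
    patient_recommendations_func(patient_prompt) for anthropic.claude-v2
    (Claude text-completions format; details assumed for illustration)."""
    full_prompt = f"\n\nHuman: {prompt_prefix} {patient_prompt}\n\nAssistant:"
    return json.dumps({
        "prompt": full_prompt,
        "max_tokens_to_sample": max_tokens,
    })

body = build_claude_v2_body(
    "Generate personalized diet plan for the following patient:",
    "Jane Doe has Pre-diabetes taking Metformin",  # hypothetical row value
)
print(body)
```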

Additional customization options

The previous example demonstrates a straightforward integration of Amazon Redshift with Amazon Bedrock. You can further tailor this integration to suit your requirements and preferences.

  • When a query doesn’t reference any tables, you can run the inference function without a FROM clause. Amazon Redshift then runs it as a leader node-only function, because it doesn’t need to fetch data and send it to the model. This can be a quick way to ask an LLM a question, for example:

SELECT patient_recommendations_func('Generate a diet plan for a patient with pre-diabetes');

A personalized meal planning strategy for pre-diabetes patients would involve a comprehensive assessment of their dietary needs and preferences, as well as consideration of their overall health status. 

Consider the following key components:

1. **Hydration**: Ensure adequate fluid intake to help regulate blood sugar levels and support overall health.

2. **Macronutrient balance**: Aim for a balanced diet with whole grains, lean protein sources, and healthy fats to provide sustained energy and support insulin sensitivity.

3. **Fiber-rich foods**: Include fiber-rich foods like fruits, vegetables, legumes, and whole grains to help regulate blood sugar levels and support digestive health.

4. **Healthy carbohydrate intake**: Focus on complex carbohydrates like whole grains, fruits, and vegetables as the primary source of glucose for energy production.

5. **Protein and healthy fats**: Include lean protein sources and healthy fats in moderation to provide a sense of fullness and satisfaction between meals.

6. **Portion control and mindful eating**: Practice portion control and eat mindfully to avoid overeating and support weight management.

7. **Meal frequency and timing**: Consider meal spacing and timing to help regulate blood sugar levels and support overall health.

8. **Nutrient-dense snacks**: Offer nutrient-dense snack options like fruits, nuts, seeds, and low-fat dairy products to help manage hunger and cravings between meals.

By incorporating these key components into a personalized diet plan, pre-diabetes patients can effectively manage their condition, reduce the risk of developing type 2 diabetes, and promote overall health and well-being.

To manage blood sugar levels and prevent progression to full-blown diabetes, consider the following 7-day meal plan tailored specifically for individuals with pre-diabetes.

Day 1:
Breakfast: Oatmeal with sliced banana, almond butter, and a splash of low-fat milk (300 calories)
Lunch: Grilled chicken breast, roasted vegetables, quinoa, and a drizzle of olive oil (400 calories)
Dinner: Baked salmon, brown rice, steamed broccoli, and a sprinkle of lemon juice (500 calories)

Day 2:
Breakfast: Greek yogurt with mixed berries, granola, and a pinch of cinnamon (250 calories)
Lunch: Turkey and avocado wrap with mixed greens and whole-grain tortilla (350 calories)
Dinner: Slow-cooked lentil soup with whole-grain bread and a side salad (450 calories)

Day 3:
Breakfast: Scrambled eggs, spinach, and whole-grain toast (200 calories)
Lunch: Grilled turkey breast, mixed greens, cherry tomatoes, cucumber, and a vinaigrette dressing (300 calories)
Dinner: Baked chicken thighs with roasted sweet potatoes, green beans, and a drizzle of olive oil (400 calories)

Day 4:
Breakfast: Smoothie bowl made with Greek yogurt, frozen berries, spinach, almond milk, and topped with sliced almonds and chia seeds (350 calories)
Lunch: Grilled chicken breast, mixed greens, sliced cucumber, cherry tomatoes, and a whole-grain pita (400 calories)
Dinner: Slow-cooked beef stew with brown rice, steamed carrots, and a sprinkle of thyme (500 calories)

Day 5:
Breakfast: Avocado toast on whole-grain bread topped with scrambled eggs, salt, and pepper (250 calories)
Lunch: Grilled chicken breast, roasted bell peppers, quinoa, and a drizzle of olive oil (400 calories)
Dinner: Baked cod, brown rice, steamed asparagus, and a sprinkle of lemon juice (450 calories)

Day 6:
Breakfast: Omelette filled with mushrooms, spinach, and feta cheese served with whole-grain toast (200 calories)
Lunch: Turkey and avocado wrap with mixed greens and whole-grain tortilla (350 calories)
Dinner: Slow-cooked chicken breast with roasted Brussels sprouts, brown rice, and a drizzle of olive oil (500 calories)

Day 7:
Breakfast: Greek yogurt with sliced peaches, granola, and a sprinkle of cinnamon (250 calories)
Lunch: Grilled chicken breast, mixed greens, sliced cucumber, cherry tomatoes, and a whole-grain pita (400 calories)
Dinner: Baked salmon, brown rice, steamed green beans, and a drizzle of olive oil (500 calories)

The preceding sample output was generated by the function call shown earlier.

  • You can optionally pass model-specific inference parameters to the function, similar to how you pass the prompt. Amazon Redshift forwards these parameters to Amazon Bedrock along with the request.

In this example, we set the temperature parameter to a custom value. The temperature parameter controls the randomness and creativity of the model’s responses. The default value is 1, and the valid range is 0 to 1.0.

SELECT patient_recommendations_func(patient_prompt,object('temperature', 0.2)) 
FROM mv_prompts
WHERE pid=101;

The following is sample output with the temperature set to 0.2. The output includes recommendations to drink sufficient fluids and avoid specific foods that may trigger discomfort or worsen the patient’s condition.

Now run the same inference with the temperature set to 0.8:

SELECT patient_recommendations_func(patient_prompt,object('temperature', 0.8)) 
FROM mv_prompts
WHERE pid=101;

The following is sample output with a temperature of 0.8. The output still includes cautionary notes about fluids and foods, but the guidance is less detailed and specific.

Results will vary each time you run the inference. The point of this example is to demonstrate that the model’s behavior changes as you modify its parameters.
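Conceptually, temperature rescales the model’s token probabilities before sampling: logits are divided by the temperature before the softmax, so lower values concentrate probability on the most likely token. The generic sketch below illustrates this principle; it is not Amazon Bedrock’s internal implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: lower temperature sharpens the
    distribution (more deterministic), higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]           # hypothetical next-token scores
sharp = softmax_with_temperature(logits, 0.2)  # close to picking the argmax
flat = softmax_with_temperature(logits, 0.8)   # probability spread more evenly
print(round(sharp[0], 3), round(flat[0], 3))
```

With temperature 0.2 the top token captures almost all of the probability mass, which is why the output above is more focused and repeatable than at temperature 0.8.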

  • CREATE EXTERNAL MODEL supports models hosted on Amazon Bedrock, including those that aren’t supported by the Amazon Bedrock Converse API. In those cases, the request_type must be raw and you construct the request yourself during inference. The request is a combination of a prompt and optional parameters.

Before running this example, make sure you have access to the Amazon Titan Text G1 – Express model in Amazon Bedrock, following the same model access steps described earlier.

-- Create model with REQUEST_TYPE as RAW

CREATE EXTERNAL MODEL titan_raw
FUNCTION func_titan_raw
IAM_ROLE '<<present the arn of IAM function created in pre-requisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
MODEL_ID 'amazon.titan-text-express-v1',
REQUEST_TYPE RAW,
RESPONSE_TYPE SUPER);

-- You need to construct the request during inference.
SELECT func_titan_raw(object('inputText', 'Generate personalized diet plan for the following: ' || patient_prompt, 'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500)))
FROM mv_prompts LIMIT 1;

The following is a sample of the generated output.
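To make the RAW request construction concrete, the nested object(...) calls above assemble a SUPER value that becomes the JSON request body sent to amazon.titan-text-express-v1. A Python equivalent of that body (the patient_prompt value is hypothetical):

```python
import json

# Equivalent of:
# object('inputText', 'Generate personalized diet plan for the following: ' || patient_prompt,
#        'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500))
patient_prompt = "Jane Doe has Pre-diabetes taking Metformin"  # hypothetical row value
body = {
    "inputText": "Generate personalized diet plan for the following: " + patient_prompt,
    "textGenerationConfig": {
        "temperature": 0.5,
        "maxTokenCount": 500,
    },
}
print(json.dumps(body))
```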

  • To obtain additional details about an inference, such as the total number of tokens, you can set the RESPONSE_TYPE to SUPER when you create the model.
-- Create model specifying RESPONSE_TYPE as SUPER.

CREATE EXTERNAL MODEL patient_recommendations_v2
FUNCTION patient_recommendations_func_v2
IAM_ROLE '<<provide the ARN of the IAM role created in the prerequisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
MODEL_ID 'anthropic.claude-v2',
PROMPT 'Generate personalized diet plan for the following patient:',
RESPONSE_TYPE SUPER);

-- Run the inference function
SELECT patient_recommendations_func_v2(patient_prompt)
FROM mv_prompts LIMIT 1;

The output includes the input token count, output token count, and latency metrics alongside the generated response.
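With RESPONSE_TYPE SUPER, the function returns the model’s full response document rather than just the generated text, so the token counts can be extracted with ordinary SUPER navigation (or, outside Redshift, JSON parsing). The payload and field names below are hypothetical and vary by model; treat them as assumptions:

```python
import json

# Hypothetical SUPER response payload; real field names vary by model.
raw_response = json.dumps({
    "completion": " Here is a personalized diet plan...",
    "amazon-bedrock-invocationMetrics": {
        "inputTokenCount": 42,
        "outputTokenCount": 310,
        "invocationLatency": 1845,
    },
})

resp = json.loads(raw_response)
metrics = resp["amazon-bedrock-invocationMetrics"]
total_tokens = metrics["inputTokenCount"] + metrics["outputTokenCount"]
print(total_tokens)  # 352
```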

Considerations and best practices

When using the methods described in this post, keep in mind the following considerations:

  • Inference queries might generate throttling exceptions because of the limited runtime quotas for Amazon Bedrock. Amazon Redshift retries requests multiple times, but queries can still be throttled because throughput for non-provisioned models is variable.
  • The throughput of inference queries is limited by the runtime quotas of the different models offered by Amazon Bedrock in different AWS Regions. If you find that the throughput isn’t sufficient for your application, you can request a quota increase for your account.
  • If you need stable and consistent throughput, consider getting provisioned throughput for the model you need from Amazon Bedrock.
  • Using Amazon Redshift ML with Amazon Bedrock incurs additional costs. The cost is model- and Region-specific and depends on the number of input and output tokens that the model processes.

Cleanup

To avoid incurring future charges, delete the Redshift Serverless instance or Redshift provisioned data warehouse created as part of the prerequisite steps.

Conclusion

In this post, you learned how to use Amazon Redshift ML to invoke large language models (LLMs) hosted on Amazon Bedrock from Amazon Redshift. We provided step-by-step instructions for the implementation, using a sample dataset for illustration, and described options for customizing the integration to suit your needs. Try this feature and share your feedback with us.


About the Authors

is a Sr. Analytics Specialist with expertise in designing and building enterprise data solutions, data warehousing, and analytics architectures, based in the Atlanta area. He has nearly two decades of experience building data products and complex data solutions for global banking and insurance clients.

is a Software Development Engineer at Amazon Web Services (AWS). He earned a PhD from the University of California, San Diego, where his research focused on databases and analytics.
