
#AmazonSagemaker

Latest posts tagged with #AmazonSagemaker on Bluesky


Amazon SageMaker Unified Studio now supports faster data preview in Visual ETL

Amazon SageMaker Unified Studio introduces data preview v2.0 for Visual ETL, a new data preview mode that delivers near-instant results when building and iterating on visual ETL jobs. With data preview v2.0, data engineers and analysts can see the output of each transform in about one second, with no session startup required and at no additional compute cost.

Data preview v2.0 uses an in-browser query engine to load and process data locally, removing the dependency on server-side Spark sessions for preview operations. Source data is fetched once and cached in the browser, so subsequent transforms apply instantly without re-querying the underlying data source. For Amazon Redshift users, this means you can iterate on transforms without additional queries against your Redshift cluster, keeping your preview workflow fast and your cluster resources focused on production workloads.

Data preview v2.0 supports CSV, Parquet, and JSON files from Amazon S3, in addition to data from Amazon Redshift, Amazon S3 Tables, AWS Glue Data Catalog, and third-party sources including Snowflake, MySQL, PostgreSQL, SQL Server, Oracle, Google BigQuery, Amazon DynamoDB, and Amazon DocumentDB. A toggle in the Visual ETL editor lets you switch between data preview v2.0 and the original Spark-based preview at any time.

Data preview v2.0 in Visual ETL is available in all AWS Regions where Amazon SageMaker Unified Studio is supported. To learn more, visit the Amazon SageMaker Unified Studio documentation.
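The fetch-once, transform-locally behavior described above can be sketched in a few lines of Python. Everything here is illustrative: the class and the callables are hypothetical stand-ins, not the actual Unified Studio implementation.

```python
class PreviewCache:
    """Illustrative sketch of a fetch-once data preview cache.

    The source is queried a single time; every subsequent preview
    runs its transforms against the cached rows, so no further
    queries hit the underlying data source (the behavior data
    preview v2.0 describes for Redshift and other sources).
    """

    def __init__(self, fetch_source):
        self._fetch_source = fetch_source  # callable that queries the source
        self._rows = None                  # populated on first access
        self.fetch_count = 0               # how many times the source was hit

    def _load(self):
        if self._rows is None:
            self._rows = self._fetch_source()
            self.fetch_count += 1
        return self._rows

    def preview(self, *transforms):
        # Apply each transform to the cached rows without mutating them.
        rows = self._load()
        for t in transforms:
            rows = [t(r) for r in rows]
        return rows


# Hypothetical usage: two previews, but only one source query.
cache = PreviewCache(lambda: [{"price": 10}, {"price": 25}])
first = cache.preview(lambda r: {**r, "price_usd": r["price"]})
second = cache.preview(lambda r: {**r, "doubled": r["price"] * 2})
assert cache.fetch_count == 1  # source fetched only once
```

The point of the sketch is the single `fetch_count` increment: iterating on transforms is free once the source rows are cached, which is what removes the dependency on a live Spark session for previews.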

🆕 Amazon SageMaker Unified Studio's v2.0 data preview speeds up Visual ETL, delivering near-instant transform results in one second. It uses an in-browser query engine, caches data, and supports CSV, Parquet, JSON, and multiple sources. Available in all AWS Regions.

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio adds light mode support for IAM-based domains

Today, AWS announces light mode support in Amazon SageMaker Unified Studio for IAM-based domains. Customers can now configure the visual interface mode to match their preference, choosing between dark and light themes. Light mode helps improve readability in bright environments and provides a familiar visual experience for customers who prefer lighter interfaces. Combined with the existing dark mode, this update gives you full control over your development environment's appearance, improving accessibility and reducing eye strain across varying lighting conditions.

In SageMaker Unified Studio, click 'Customize appearance' under your Profile settings to choose between visual modes, including dark and light. The setting persists across browsers and devices.

This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the User Guide.

🆕 AWS adds light mode support in Amazon SageMaker Unified Studio for IAM-based domains, letting users choose between dark and light themes for improved readability and accessibility. Available in all regions, settings persist across browsers and devices.

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio adds metadata sync with third-party catalogs

Amazon SageMaker Unified Studio now supports metadata and context sync across Atlan, Collibra, and Alation. These integrations synchronize catalog metadata between Amazon SageMaker Catalog and each partner platform, giving teams a consistent view of their data and AI assets regardless of which tool they use day to day. Organizations can maintain aligned glossary terms, asset descriptions, and ownership information across platforms without manual reconciliation.

All three integrations synchronize key metadata elements including projects, assets, descriptions, glossary terms, and their hierarchies. With the Collibra integration, you can synchronize metadata in both directions between SageMaker Catalog and the partner platform, so updates you make in one are reflected in the other; you can also manage SageMaker Unified Studio data access requests from Collibra. With the Atlan and Alation integrations, you can ingest metadata from SageMaker Catalog into Alation, with additional enhancements coming soon.

You set up the Atlan and Alation integrations by creating a connection to SageMaker Unified Studio from within each platform, while the Collibra integration is available as an open-source solution on GitHub. To learn more, visit the Amazon SageMaker Unified Studio documentation. For implementation details, see the Atlan blog post, Collibra blog post, and Alation blog post.
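The bidirectional sync described for the Collibra integration can be illustrated with a small last-writer-wins sketch. The data shapes and function below are hypothetical, chosen only to show the reconciliation idea, and are not the integration's actual API.

```python
def sync_bidirectional(sagemaker_catalog, partner_catalog):
    """Illustrative two-way metadata sync (not the real integration).

    Each catalog maps asset name -> (description, version counter).
    After a sync pass, the newer entry for every asset is present in
    both catalogs, so an update made in either side is reflected in
    the other, with no manual reconciliation.
    """
    for asset in set(sagemaker_catalog) | set(partner_catalog):
        ours = sagemaker_catalog.get(asset)
        theirs = partner_catalog.get(asset)
        if ours is None:
            sagemaker_catalog[asset] = theirs   # new on the partner side
        elif theirs is None:
            partner_catalog[asset] = ours       # new on the SageMaker side
        elif ours[1] >= theirs[1]:
            partner_catalog[asset] = ours       # SageMaker copy is newer
        else:
            sagemaker_catalog[asset] = theirs   # partner copy is newer


# Hypothetical catalogs: 'orders' was updated on the partner side (v2).
smus = {"orders": ("raw orders table", 1), "users": ("user profiles", 1)}
partner = {"orders": ("curated orders table", 2)}
sync_bidirectional(smus, partner)
```

After the pass, both catalogs hold the v2 description of `orders` and the partner side gains the `users` asset it was missing.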

🆕 Amazon SageMaker Unified Studio syncs metadata with Atlan, Collibra, and Alation for consistent data and AI asset views. Key elements like projects and glossary terms sync, with Collibra offering bidirectional sync and data access requests. Integrate via Atlan, Alation, or…

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs

Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for Visual ETL, notebook, and code-based data processing jobs. With AWS Glue 5.1 in Amazon SageMaker Unified Studio, data engineers and data scientists can run jobs on Apache Spark 3.5.6 with Python 3.11 and Scala 2.12.18, and use updated open table format libraries including Apache Iceberg 1.10.0, Apache Hudi 1.0.2, and Delta Lake 3.3.2.

You can use AWS Glue 5.1 in Amazon SageMaker Unified Studio when creating data processing jobs by selecting Glue 5.1 from the version dropdown in job settings. This applies to Visual ETL jobs, notebook jobs, and code-based jobs, so you can take advantage of the latest Spark runtime and open table format libraries across all your data processing workflows.

AWS Glue 5.1 in Amazon SageMaker Unified Studio is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Stockholm), Europe (Frankfurt), Europe (Spain), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Malaysia), Asia Pacific (Thailand), Asia Pacific (Mumbai), and South America (Sao Paulo).

To learn more, visit the Amazon SageMaker Unified Studio documentation. For details on what's included in AWS Glue 5.1, including updated open table format support and access control capabilities, see the AWS Glue documentation.

🆕 Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs, enabling Visual ETL, notebooks, and code-based jobs with Spark 3.5.6 and updated libraries like Apache Iceberg, Hudi, and Delta Lake. Available in multiple regions.

#AWS #AmazonSagemaker #AwsGlue

Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE

Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro setup, including its spec-driven development, conversational coding, and automated feature generation capabilities, while accessing the scalable compute resources of Amazon SageMaker. By connecting Kiro to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Kiro setup, complete with specs, steering files, and hooks, while accessing your compute resources and data on Amazon SageMaker. Since Kiro is built on Code-OSS, authentication is handled securely via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects.

This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows, all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.

This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the SageMaker user guide.

🆕 AWS now connects Kiro IDE to Amazon SageMaker Unified Studio, letting data scientists use Kiro's tools with SageMaker's compute, all in one place for smooth analytics and AI/ML workflows.

#AWS #AmazonMachineLearning #AmazonSagemaker

Amazon SageMaker HyperPod now supports API-driven Slurm configuration

Amazon SageMaker HyperPod now supports API-driven Slurm configuration, enabling you to define Slurm topology and shared filesystem configurations directly in the cluster create and update APIs or through the AWS Console. SageMaker HyperPod helps you provision resilient clusters for running machine learning (ML) workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs).

With this new API-driven configuration, you can now specify Slurm node types (Controller, Login, and Compute) for cluster instance groups; instance-group-to-partition mappings; and FSx for Lustre and FSx for OpenZFS filesystem mounts per instance group, directly in the cluster API definition or through the advanced configuration section in the AWS Console.

When you modify partition-node mappings directly in Slurm's native configuration files to fine-tune cluster resource assignments, Slurm's partition-node configurations can drift from HyperPod's view. A new cluster-level SlurmConfigStrategy helps you manage drift with three options: Managed, Overwrite, and Merge. The Managed strategy lets you manage instance-group-to-partition mappings entirely via the API or Console, and automatically detects drift in partition-to-node mappings during scale-up or scale-down operations. When drift is detected, cluster updates are paused until you resolve it: switch to the Overwrite strategy to force API-defined mappings, switch to the Merge strategy to preserve manual customizations, or directly update the Slurm configuration to align with HyperPod.

API-driven Slurm configuration is available in all AWS Regions where SageMaker HyperPod is available. To get started, you can use the AWS Management Console, AWS CLI, AWS CloudFormation, or AWS SDKs. For more information, see the Amazon SageMaker HyperPod documentation for creating clusters using the Console or the CLI, and the API reference for CreateCluster and UpdateCluster.
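The behavior of the three SlurmConfigStrategy options can be sketched with a small illustration. The function, data shapes, and exception below are hypothetical stand-ins for the idea, not the actual HyperPod API or its field names.

```python
def resolve_partitions(strategy, api_mapping, live_mapping):
    """Illustrative sketch of Managed / Overwrite / Merge drift handling.

    api_mapping:  partition -> nodes as defined via the HyperPod API
    live_mapping: partition -> nodes as currently set in slurm.conf
    """
    drifted = api_mapping != live_mapping
    if strategy == "Managed":
        if drifted:
            # Cluster updates pause until the operator resolves the drift.
            raise RuntimeError("drift detected: resolve before updating")
        return api_mapping
    if strategy == "Overwrite":
        return dict(api_mapping)          # force API-defined mappings
    if strategy == "Merge":
        merged = dict(live_mapping)       # preserve manual customizations
        for partition, nodes in api_mapping.items():
            merged.setdefault(partition, nodes)
        return merged
    raise ValueError(f"unknown strategy: {strategy}")


# Hypothetical drift: an operator added a 'debug' partition by hand.
api = {"train": ["node-1", "node-2"]}
live = {"train": ["node-1", "node-2"], "debug": ["node-3"]}
forced = resolve_partitions("Overwrite", api, live)  # drops 'debug'
merged = resolve_partitions("Merge", api, live)      # keeps 'debug'
```

Under Managed, the same drift would raise instead of resolving, which mirrors the announcement's description of updates pausing until the operator picks Overwrite, Merge, or a manual fix.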

🆕 Amazon SageMaker HyperPod now supports API-driven Slurm setup, enabling direct cluster topology and shared filesystem configuration via cluster create/update APIs or AWS Console, managing Slurm partition-node mappings and drift. Available globally.

#AWS #AmazonSagemaker

AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026) Last week, my team met many developers at Developer Week in San Jose. My colleague Vinicius Senger delivered a great keynote about renascent software, a new way of building and evolving applications where humans and AI collaborate as co-developers using Kiro. Other colleagues spoke about building and deploying production-ready AI agents. Everyone stayed to ask and […]


#AWS #AmazonAurora #AmazonBedrock #AmazonEc2 #AmazonNova #AmazonSagemaker #Launch #News #WeekInReview


🚀 AWS launches Amazon SageMaker Inference for custom Nova models
• Supports Nova Micro, Lite & 2 Lite with reasoning
• Deploy on EC2 G5, G6, P5 with auto-scaling
#AWS #AmazonSageMaker
aws.amazon.com/blogs/aws/announcing-ama...


🚀 AWS launches Amazon SageMaker custom Nova models
• Deploy Nova 2 Lite models with auto-scaling
• Configure instance types, concurrency, and security
#AWS #AmazonSageMaker
aws.amazon.com/blogs/aws/announcing-ama...

AWS Launches SageMaker Inference for Custom Nova Models

AWS has launched SageMaker Inference for custom Nova models, completing a full fine-tuning-to-deployment pipeline for Nova Micro, Nova Lite, and Nova 2 Lite.

winbuzzer.com/2026/02/17/a...


#AI #Amazon #AmazonWebServicesAWS #CloudComputing #EnterpriseAI #AgenticAI #FoundationModels #AIInference #NovaMicro #NovaLite #AmazonNova #AmazonSagemaker #Nova2Lite

Cartesia Sonic 3 text-to-speech model is now available on Amazon SageMaker JumpStart

Cartesia's Sonic 3 model is now available in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. Sonic 3 is Cartesia's latest state space model (SSM) for streaming text-to-speech (TTS), delivering high naturalness, accurate transcript following, and industry-leading latency with fine-grained control over volume, speed, and emotion.

Sonic 3 supports 42 languages and provides advanced controllability through API parameters and SSML tags for volume, speed, and emotion adjustments. The model includes natural laughter support, stable voices optimized for voice agents, and emotive voices for expressive characters. With sub-100ms latency, Sonic 3 enables real-time conversational AI that captures human speech nuances, including emotions and tonal shifts.

With SageMaker JumpStart, customers can deploy Sonic 3 with just a few clicks to address their voice AI use cases. To get started with this model, navigate to the SageMaker JumpStart model catalog in SageMaker Studio or use the SageMaker Python SDK to deploy the model to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html
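The SSML-based controllability mentioned above can be sketched as a small markup builder. Note the caveat: the `<prosody>` tag and its attribute names below follow generic SSML conventions, and Cartesia Sonic 3's actual tag and attribute names may differ, so treat this purely as an illustration of the idea and consult Cartesia's documentation for the real syntax.

```python
def sonic_ssml(text, volume=None, speed=None, emotion=None):
    """Build a generic SSML-style prosody wrapper (illustrative only).

    Wraps `text` in <speak>, and in a <prosody> element when any of
    volume / speed / emotion is requested. Attribute names here are
    assumptions modeled on common SSML, not confirmed Cartesia syntax.
    """
    attrs = []
    if volume is not None:
        attrs.append(f'volume="{volume}"')
    if speed is not None:
        attrs.append(f'rate="{speed}"')
    if emotion is not None:
        attrs.append(f'emotion="{emotion}"')  # hypothetical attribute
    if not attrs:
        return f"<speak>{text}</speak>"
    return f"<speak><prosody {' '.join(attrs)}>{text}</prosody></speak>"


# Hypothetical usage: an upbeat, loud, fast greeting.
markup = sonic_ssml("Welcome back!", volume="loud", speed="fast", emotion="happy")
```

A string like `markup` would then be sent as the input text of a TTS request, letting the caller adjust delivery per utterance instead of per voice.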


#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

Build Production-Ready Drug Discovery and Robotics Pipelines with NVIDIA NIMs on SageMaker JumpStart

Amazon SageMaker JumpStart now enables one-click deployment of four NVIDIA NIM models purpose-built for biosciences and physical AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason. NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure. These models bring advanced capabilities spanning protein design, reasoning with configurable outputs, and physical-world understanding, enabling customers to accelerate biosciences research, drug discovery, and embodied AI applications on AWS infrastructure.

ProteinMPNN enables fast and efficient protein sequence optimization guided by structural data. This NIM generates high-quality sequences with enhanced binding affinity and stability, validated through experimental results. Designed for scalability and flexibility, ProteinMPNN integrates seamlessly into protein engineering workflows, transforming applications like enzyme design and therapeutic development.

MSA Search NIM supports GPU-accelerated Multiple Sequence Alignment (MSA) of a query amino acid sequence against a set of protein sequence databases. These databases are searched for sequences similar to the query, and the resulting collection of sequences is aligned to establish similar regions even when the proteins have different lengths and motifs.

Nemotron-3.5B-Instruct delivers high reasoning performance, native tool-calling support, and extended context processing with a 256k-token context window. The model employs an efficient hybrid Mixture-of-Experts (MoE) architecture to deliver higher throughput than its predecessors for agentic and coding workloads, while maintaining the reasoning depth of a larger model. It is ideal for building multi-agent workflows, developer productivity tools, and process automation, and for scientific and mathematical reasoning analysis, among others.

Cosmos Reason is an open, customizable reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. The model understands space, time, and fundamental physics, and can serve as a planning model to reason about what steps an embodied agent might take next.

With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html


#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

0 0 0 0
Preview
Build Production-Ready Drug Discovery and Robotics Pipelines with NVIDIA NIMs on SageMaker JumpStart Amazon SageMaker JumpStart now enables one-click deployment of four NVIDIA NIMs models purpose-built for biosciences and physical AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason. NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure. These models bring advanced capabilities spanning protein design, reasoning with configurable outputs, and physical world understanding, enabling customers to accelerate biosciences research, drug discovery, and embodied AI applications on AWS infrastructure. ProteinMPNN enables fast and efficient protein sequence optimization guided by structural data. This NIM generates high-quality sequences with enhanced binding affinity and stability, validated through experimental results. Designed for scalability and flexibility, ProteinMPNN integrates seamlessly into protein engineering workflows, transforming applications like enzyme design and therapeutic development. MSA Search NIM supports GPU-accelerated Multiple Sequence Alignment (MSA) of a query amino acid sequence against a set of protein sequence databases. These databases are searched for similar sequences to the query and then the collection of sequences are aligned to establish similar regions even when the proteins have different lengths and motifs. Nemotron-3.5B-Instruct delivers high reasoning performance, native tool calling support, and extended context processing with 256k token context window. This model employs an efficient hybrid Mixture-of-Experts (MoE) architecture to ensure higher throughput than its predecessors for agentic and coding workloads, while maintaining the reasoning depth of a larger model. 
It is ideal for building multi-agent workflows, developer productivity tools, and process automation, and for scientific and mathematical reasoning, among other uses. Cosmos Reason is an open, customizable reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. This model understands space, time, and fundamental physics, and can serve as a planning model to reason about what steps an embodied agent might take next. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
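The SageMaker Python SDK path mentioned above can be sketched roughly as follows. This is illustrative only: the model ID and instance type are placeholders (the announcement does not give the NIM model identifiers), and actually running the deploy requires AWS credentials and the `sagemaker` package.

```python
def deploy_jumpstart_model(model_id: str, instance_type: str = "ml.g5.2xlarge"):
    """Deploy a SageMaker JumpStart model to a real-time endpoint.

    model_id and instance_type here are placeholders, not confirmed
    identifiers for the NIM models above. Requires AWS credentials.
    """
    # Import deferred so the sketch can be read/loaded without an AWS setup.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=model_id)
    # accept_eula acknowledges the model's end-user license where required.
    return model.deploy(instance_type=instance_type, accept_eula=True)
```

Calling `deploy_jumpstart_model("<your-model-id>")` from an environment with SageMaker permissions returns a predictor bound to the new endpoint.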

🆕 Amazon SageMaker JumpStart provides four NVIDIA NIMs models for biosciences and AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason, for quick AI deployment in drug discovery, protein design, and robotics on AWS with o…

#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct models are now available on SageMaker JumpStart Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning document intelligence, multilingual coding, advanced multimodal reasoning, and vision-language understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure. These models address different enterprise AI challenges with specialized capabilities: DeepSeek OCR explores visual-text compression for document processing. It can extract structured information from forms, invoices, diagrams, and complex documents with dense text layouts. MiniMax M2.1 is optimized for coding, tool use, instruction following, and long-horizon planning. It automates multilingual software development and executes complex, multi-step office workflows, empowering developers to build autonomous applications. Qwen3-VL-8B-Instruct delivers superior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html


🆕 AWS now offers DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct on SageMaker JumpStart, enhancing foundation models for document intelligence, coding, and multimodal reasoning. Deploy with clicks to tackle diverse AI challenges.

#AWS #AmazonSagemaker #AmazonSagemakerJumpstart

Amazon SageMaker Unified Studio now supports AWS PrivateLink Today, Amazon SageMaker announced a new capability allowing you to establish connectivity between your Amazon Virtual Private Cloud (VPC) and Amazon SageMaker Unified Studio without customer data traffic going through the public internet. Customers needing to go beyond the standard data transfer protocol (HTTPS/TLS) can configure their VPC so data transfer stays within the AWS network. Through AWS PrivateLink (https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html), network administrators can now onboard the AWS service endpoints used by Amazon SageMaker Unified Studio to their VPC. Once the endpoints are onboarded, IAM policies used by Amazon SageMaker enforce that customer data stays within the AWS network. Amazon SageMaker private access using AWS PrivateLink is available in all AWS Regions (https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/) where Amazon SageMaker Unified Studio is supported, including: Asia Pacific (Tokyo), Europe (Ireland), US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), South America (São Paulo), Asia Pacific (Seoul), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Asia Pacific (Mumbai), Europe (Paris), Europe (Stockholm). To learn more, visit https://aws.amazon.com/sagemaker/ and get started with the network isolation guide: https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/network-isolation.html
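The IAM-level enforcement described here can be illustrated with the generic PrivateLink pattern of denying requests that do not arrive through an expected interface VPC endpoint. This is a standard `aws:sourceVpce` condition sketch, not the exact policy SageMaker applies, and the endpoint ID is a placeholder.

```python
import json


def vpce_only_policy(endpoint_id: str) -> dict:
    """Build an illustrative policy that denies traffic arriving from
    anywhere other than the given interface VPC endpoint.

    This is a generic PrivateLink pattern, not the specific policy
    Amazon SageMaker Unified Studio uses; endpoint_id is a placeholder.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsidePrivateLink",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "*",
                "Resource": "*",
                # Requests not traversing the named VPC endpoint are denied.
                "Condition": {"StringNotEquals": {"aws:sourceVpce": endpoint_id}},
            }
        ],
    }


policy = vpce_only_policy("vpce-0123456789abcdef0")  # placeholder endpoint ID
print(json.dumps(policy, indent=2))
```

In practice you would attach a condition like this to resource or endpoint policies so that only traffic over the onboarded endpoints is allowed.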


🆕 Amazon SageMaker Unified Studio now supports AWS PrivateLink for VPC connectivity, keeping data within the AWS network and avoiding public internet. Available in all supported regions, it uses AWS PrivateLink endpoints and IAM policies to enforce data isolation.

#AWS #AmazonSagemaker

Amazon SageMaker HyperPod introduces enhanced lifecycle scripts debugging Amazon SageMaker HyperPod now provides enhanced troubleshooting capabilities for lifecycle scripts, making it easier to identify and resolve issues during cluster node provisioning. SageMaker HyperPod helps you provision resilient clusters for running AI/ML workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). When lifecycle scripts encounter issues during cluster creation or node operations, you now receive detailed error messages that include the specific CloudWatch log group and log stream names where you can find execution logs for lifecycle scripts. You can view these error messages by running the DescribeCluster API or by viewing the cluster details page in the SageMaker console. The console also provides a "View lifecycle script logs" button that navigates directly to the relevant CloudWatch log stream, making it easier to locate logs. Additionally, CloudWatch logs for lifecycle scripts now include specific markers to help you track lifecycle script execution progress, including indicators for when the lifecycle script log begins, when scripts are being downloaded, when downloads complete, and when scripts succeed or fail. These markers help you quickly identify where issues occurred during the provisioning process. These enhancements reduce the time required to diagnose and fix lifecycle script failures, helping you get your HyperPod clusters up and running faster. This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more, see https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-cluster-management-slurm.html in the Amazon SageMaker Developer Guide.
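A toy scanner shows how the new progress markers can be used to pinpoint where provisioning stalled. The marker strings below are invented for illustration; the announcement describes the phases (log begins, download starts, download completes, success/failure) but not the exact format CloudWatch uses.

```python
# Hypothetical marker strings -- the real CloudWatch markers may differ.
MARKERS = {
    "[LCS-BEGIN]": "log started",
    "[LCS-DOWNLOAD-START]": "downloading scripts",
    "[LCS-DOWNLOAD-COMPLETE]": "download complete",
    "[LCS-SUCCESS]": "scripts succeeded",
    "[LCS-FAILURE]": "scripts failed",
}


def scan_lifecycle_log(lines):
    """Return the lifecycle-script progress phases seen in a log, in order."""
    phases = []
    for line in lines:
        for marker, phase in MARKERS.items():
            if marker in line:
                phases.append(phase)
    return phases


sample = [
    "[LCS-BEGIN] lifecycle script log",
    "[LCS-DOWNLOAD-START] fetching scripts from S3",
    "[LCS-DOWNLOAD-COMPLETE] 3 scripts fetched",
    "[LCS-FAILURE] on_create.sh exited with status 1",
]
print(scan_lifecycle_log(sample))
# -> ['log started', 'downloading scripts', 'download complete', 'scripts failed']
```

The last phase in the returned list tells you which stage the failure occurred in, which is the same triage the console's "View lifecycle script logs" button is meant to shortcut.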


🆕 Amazon SageMaker HyperPod now offers enhanced debugging for lifecycle scripts, providing detailed error messages and CloudWatch logs to quickly identify and resolve issues during cluster node provisioning, speeding up cluster setup for AI/ML workloads.

#AWS #AmazonSagemaker

Amazon SageMaker Studio now supports SOCI indexing for faster container startup times Today, AWS announces SOCI (Seekable Open Container Initiative) indexing support for Amazon SageMaker Studio, reducing container startup times by 30-50% when using custom images. Amazon SageMaker Studio is a fully integrated, browser-based environment for end-to-end machine learning development. SageMaker Studio provides pre-built container images for popular ML frameworks like TensorFlow, PyTorch, and Scikit-learn that enable quick environment setup. However, when data scientists need to tailor environments for specific use cases with additional libraries, dependencies, or configurations, they can build and register custom container images with pre-configured components to ensure consistency across projects. As ML workloads become increasingly complex, these custom container images have grown in size, leading to startup times of several minutes that create bottlenecks in iterative ML development where quick experimentation and rapid prototyping are essential. The SOCI snapshotter (https://github.com/awslabs/soci-snapshotter) addresses this challenge by enabling lazy loading of container images, downloading only the components needed to start the application and fetching additional files on demand. Instead of waiting several minutes for complete custom image downloads, users can begin productive work in seconds while the environment completes initialization in the background. To use SOCI indexing, create a SOCI index for your custom container image using tools like Finch CLI, nerdctl, or Docker with the SOCI CLI, push the indexed image to Amazon Elastic Container Registry (ECR), and reference the image index URI when creating SageMaker Image resources. SOCI indexing is available in all AWS Regions where Amazon SageMaker Studio is available.
To learn more about implementing SOCI indexing for your SageMaker Studio custom images, see the custom image documentation: https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi.html
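The indexing steps above boil down to two CLI invocations, sketched here as command lists. Treat this as illustrative: the `soci` subcommands shown are from the soci-snapshotter project and exact flags vary by version, and the ECR URI is a placeholder.

```python
def soci_workflow(image_uri: str):
    """Return the SOCI indexing steps as CLI invocations (illustrative).

    Assumes the soci CLI from awslabs/soci-snapshotter; exact flags may
    differ by version. image_uri is a placeholder ECR image reference.
    """
    return [
        ["soci", "create", image_uri],  # build the SOCI index for the image
        ["soci", "push", image_uri],    # push the index artifacts to ECR
    ]


cmds = soci_workflow(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-studio-image:latest"
)
for cmd in cmds:
    print(" ".join(cmd))
```

After pushing, you would reference the resulting image index URI when creating the SageMaker Image resource, as the post describes.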


#AWS #AmazonSagemaker

Amazon SageMaker HyperPod now validates service quotas before creating clusters on console Amazon SageMaker HyperPod console now validates service quotas for your AWS account before initiating cluster creation, enabling you to confirm sufficient quota availability before provisioning begins. SageMaker HyperPod helps you provision resilient clusters for running AI/ML workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). When creating large-scale AI/ML clusters, you need to ensure your account has sufficient quotas for instances, storage, and networking resources, but quota validation previously required manual checks across multiple AWS services, often resulting in failed cluster creation attempts and wasted time when a needed quota increase had not been requested. The new quota validation capability in the SageMaker HyperPod console automatically checks your account-level quotas against your cluster configuration, including instance type limits, EBS volume sizes, and VPC-related quotas when creating new resources. The validation displays a clear table showing expected utilization, applied quota values, and compliance status for each quota. When quotas may be exceeded, you receive a warning alert with direct links to the Service Quotas console to request increases. This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. For a complete list of service quota validation checks performed, refer to the HyperPod prerequisites documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-prerequisites.html#sagemaker-hyperpod-prerequisites-quotas
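The check behind the console's validation table can be sketched as a comparison of expected utilization against applied quota values. The quota names and numbers below are made up for illustration; the real check covers instance limits, EBS volume sizes, and VPC quotas as described above.

```python
def check_quotas(requested: dict, applied: dict):
    """Compare expected utilization against applied quota values.

    Returns (quota_name, expected, applied, status) rows, mirroring the
    console's validation table. Quota names and values are illustrative.
    """
    rows = []
    for name, expected in requested.items():
        limit = applied.get(name, 0)  # missing quota treated as zero
        status = "OK" if expected <= limit else "May exceed quota"
        rows.append((name, expected, limit, status))
    return rows


rows = check_quotas(
    {"ml.p5.48xlarge instances": 16, "EBS volume (GiB)": 8192},  # cluster config
    {"ml.p5.48xlarge instances": 8, "EBS volume (GiB)": 16384},  # account quotas
)
for row in rows:
    print(row)
```

A "May exceed quota" row is where the console would surface its warning alert with a link to the Service Quotas console.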


🆕 Amazon SageMaker HyperPod checks service quotas before cluster creation in the console, ensuring quota availability for instances, storage, and networking, reducing failed attempts and manual checks. Available globally.

#AWS #AmazonSagemaker

Accelerate AI development using Amazon SageMaker AI with serverless MLflow Simplify AI experimentation with zero-infrastructure MLflow that launches in minutes, scales automatically, and seamlessly integrates with SageMaker's model customization and pipeline capabilities.


#AWS #AmazonSagemaker #AmazonSagemakerUnifiedStudio #Announcements #Launch #News
