Announcing the new Predictions category in Amplify Framework
The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, AWS announces a new category called “Predictions” in the Amplify Framework.
Using this category, you can easily add and configure AI/ML use cases for your web and mobile applications with just a few lines of code. You can accomplish these use cases with the Amplify CLI and either the Amplify JavaScript library (with the new Predictions category) or the generated iOS and Android SDKs for Amazon AI/ML services. You do not need any prior experience with machine learning or AI services to use this category.
Using the Amplify CLI, you can set up your backend by answering simple questions in the CLI flow. In addition, you can orchestrate advanced use cases such as on-demand indexing of images to auto-update a collection in Amazon Rekognition (the actual image bytes are not stored by Amazon Rekognition). For example, you can securely upload new images using an Amplify storage object, which triggers an auto-update of the collection. You can then identify the new entities the next time you make inference calls using the Amplify library. You can also set up or import an Amazon SageMaker endpoint by using the “Infer” option in the CLI.
The Amplify JavaScript library with Predictions category includes support for the following use cases:
1. Translate text to a target language.
2. Generate speech from text.
3. Identify text from an image.
4. Identify entities from an image (for example, celebrity detection).
5. Label real-world entities within an image or document (for example, recognize a scene, objects, and activity in an image).
6. Interpret text to find insights and relationships in text.
7. Transcribe text from audio.
8. Index images with Amazon Rekognition.
The supported use cases leverage the following AI/ML services:
- Amazon Rekognition
- Amazon Translate
- Amazon Polly
- Amazon Transcribe
- Amazon Comprehend
- Amazon Textract
The iOS and Android SDKs now include support for the Amazon SageMaker runtime, which you can use to run inference on your custom models hosted on SageMaker. You can also extract text and data from scanned documents using the newly added support for Amazon Textract in the Android SDK. These services add to the list of existing AI services supported in the iOS and Android SDKs.
In this post, you build and host a React.js web application that takes English text as input and translates it to Spanish. In addition, you can convert the translated text to Spanish speech. This type of use case could be added to a travel application, for example, where you type text in English and play back the translation in a language of your choice. To build this app, you use two capabilities from the Predictions category: translating text and generating speech from text.
After that, we go through the flow of indexing images to update an Amazon Rekognition collection, both from the Amplify CLI and from an application.
Building the React.js Application
Prerequisites:
Install Node.js and npm if they are not already installed on your machine.
Steps
To create a new React.js app
Create a new React.js application using the following command:
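A minimal way to do this is with create-react-app (the application name mypredictions is just a placeholder):

npx create-react-app mypredictions
cd mypredictions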
To set up your backend
Install and configure the Amplify CLI using the following command:
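The CLI is distributed as an npm package, and `amplify configure` walks you through setting up an IAM user and local credentials:

npm install -g @aws-amplify/cli
amplify configure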
To create a new Amplify project
Run the following command from the root folder of your React.js application:
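The command to initialize a new Amplify project is:

amplify init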
Choose the following default options as shown below:
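The exact prompts vary by CLI version; for a React app created with create-react-app, the flow looks roughly like this (the project and environment names are placeholders):

? Enter a name for the project: mypredictions
? Enter a name for the environment: dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building: javascript
? What javascript framework are you using: react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run-script build
? Start Command: npm run-script start
? Do you want to use an AWS profile? Yes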
To add text translation
Add the new Predictions category to your Amplify project using the following command:
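The Predictions category is added with:

amplify add predictions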
The command line interface asks you simple questions to add AI/ML use cases. There are four options: Identify, Convert, Interpret, and Infer.
- Choose the “Convert” option.
- When prompted, add authentication if you have not already set it up.
- Select the following options in the CLI:
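For the translation use case in this post, the Convert flow looks roughly like the following (the resource name is a placeholder, and the prompts may differ slightly by CLI version):

? Please select from one of the categories: Convert
? What would you like to convert? Translate text into a different language
? Provide a friendly name for your resource: translateText
? What is the source language? US English
? What is the target language? Spanish
? Who should have access? Auth and Guest users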
To add text to speech
Run the following command to add the text-to-speech capability to your project:
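It is the same predictions command; this time, when prompted, choose Convert and then the option to generate speech audio from text. The language and voice you select here become the defaults in aws-exports.js (es-MX and Mia in this post):

amplify add predictions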
Push changes
Next, we push the configuration to the cloud using the following command from the root of your application folder:
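This provisions the backend resources defined so far:

amplify push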
To integrate the predictions library in a React.js application
Now that you have set up the backend, integrate the Predictions library in your React.js application.
The application UI shows “Text Translation” and “Text to Speech” with a separate button for each functionality. The output of the text translation is the translated text in JSON format. The output of Text to Speech is an audio file that can be played from the application.
First, install the Amplify and Amplify React dependencies using the following command:
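At the time of writing, the packages needed for this example are aws-amplify and @aws-amplify/predictions; aws-amplify-react provides optional React UI components:

npm install aws-amplify @aws-amplify/predictions aws-amplify-react

Then replace the contents of src/App.js with the following code: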
import React, { useState } from 'react';
import './App.css';
import Amplify from 'aws-amplify';
import Predictions, { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
import awsconfig from './aws-exports';

Amplify.addPluggable(new AmazonAIPredictionsProvider());
Amplify.configure(awsconfig);

function TextTranslation() {
  const [response, setResponse] = useState("Input text to translate");
  const [textToTranslate, setTextToTranslate] = useState("write to translate");

  function translate() {
    Predictions.convert({
      translateText: {
        source: {
          text: textToTranslate,
          language: "en" // defaults configured in aws-exports.js
        },
        targetLanguage: "es"
      }
    }).then(result => setResponse(JSON.stringify(result, null, 2)))
      .catch(err => setResponse(JSON.stringify(err, null, 2)));
  }

  function setText(event) {
    setTextToTranslate(event.target.value);
  }

  return (
    <div className="Text">
      <div>
        <h3>Text Translation</h3>
        <input value={textToTranslate} onChange={setText}></input>
        <button onClick={translate}>Translate</button>
        <p>{response}</p>
      </div>
    </div>
  );
}

function TextToSpeech() {
  const [response, setResponse] = useState("...");
  const [textToGenerateSpeech, setTextToGenerateSpeech] = useState("write to speech");
  const [audioStream, setAudioStream] = useState();

  function generateTextToSpeech() {
    setResponse('Generating audio...');
    Predictions.convert({
      textToSpeech: {
        source: {
          text: textToGenerateSpeech,
          language: "es-MX" // default configured in aws-exports.js
        },
        voiceId: "Mia"
      }
    }).then(result => {
      setAudioStream(result.speech.url);
      setResponse(`Generation completed, press play`);
    })
    .catch(err => setResponse(JSON.stringify(err, null, 2)));
  }

  function setText(event) {
    setTextToGenerateSpeech(event.target.value);
  }

  function play() {
    var audio = new Audio();
    audio.src = audioStream;
    audio.play();
  }

  return (
    <div className="Text">
      <div>
        <h3>Text To Speech</h3>
        <input value={textToGenerateSpeech} onChange={setText}></input>
        <button onClick={generateTextToSpeech}>Text to Speech</button>
        <h3>{response}</h3>
        <button onClick={play}>play</button>
      </div>
    </div>
  );
}

function App() {
  return (
    <div className="App">
      <TextTranslation />
      <hr />
      <TextToSpeech />
      <hr />
    </div>
  );
}

export default App;
In the previous code, the source language for translation is set by default in aws-exports.js. Similarly, the default language for text-to-speech is set in aws-exports.js. You can override these values in your application code.
To add hosting for your application
You can enable static web hosting for your React application on Amazon S3 by running the following command from the root of your application folder:
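When prompted, the S3-only development hosting setup is sufficient for this example:

amplify add hosting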
To publish the application run:
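This builds the app and pushes it to the hosting bucket:

amplify publish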
The application is now hosted on S3 and you can access it at a link that looks like http://my-appXXXXXXXXXXXX-hostingbucket-dev.s3-website-us-XXXXXX.amazonaws.com/
On-demand indexing of images
The “Identify entities” option in the Amplify CLI, which uses Amazon Rekognition, can detect entities like celebrities by default. However, you can use Amplify to index new entities and auto-update the collection in Amazon Rekognition. This enables advanced use cases such as uploading a new image and then having the entities in an input image recognized if they match an entry in the collection. Note that Amazon Rekognition does not store any image bytes.
At a high level, uploading an image to the designated S3 location triggers indexing of the entities in that image into the Amazon Rekognition collection. Note that if you delete the image from S3, the entity is removed from the collection.
You can easily set up the indexing feature from the Amplify CLI using the following flow:
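The flow starts from the same predictions command; the prompts below are abbreviated and may differ slightly by CLI version (the resource name is a placeholder):

amplify add predictions
? Please select from one of the categories: Identify
? What would you like to identify? Identify Entities
? Provide a friendly name for your resource: identifyEntities
? Would you like to use the default configuration? Advanced Configuration
? Would you like to enable celebrity detection? Yes
? Would you like to identify entities from a collection of images? Yes
? Would you like to allow users to add images to this collection? Yes
? Who should have access? Auth and Guest users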
If you have already set up storage from the Amplify CLI by running `amplify add storage`, the bucket that was created is reused. To upload images for indexing from the CLI, you can run `amplify predictions console` and select 'Identify'. This opens the S3 bucket location in the AWS Console for you to upload images for indexing.
After you have set up the backend through the CLI, you can use an Amplify storage object to add images to the S3 bucket, which triggers the auto-indexing of images and updates the collection in Amazon Rekognition.
In your src/App.js, add the following function, which uploads an image as test.jpg to Amazon S3:
// Requires Storage from the Amplify library: import { Storage } from 'aws-amplify';
function PredictionsUpload() {
  // Upload the selected file to the S3 prefix that triggers indexing
  function upload(event) {
    const { target: { files } } = event;
    const [file,] = files || [];
    Storage.put('test.jpg', file, {
      level: 'protected',
      customPrefix: {
        protected: 'protected/predictions/index-faces/',
      }
    });
  }

  return (
    <div className="Text">
      <div>
        <h3>Upload to predictions s3</h3>
        <input type="file" onChange={upload}></input>
      </div>
    </div>
  );
}
Next, call the Predictions.identify() function to identify entities in an input image using the following code. Note that you must set `collection: true` in the call to identify.
function EntityIdentification() {
  const [response, setResponse] = useState("Click upload for test ");
  const [src, setSrc] = useState("");

  function identifyFromFile(event) {
    setResponse('searching...');
    const { target: { files } } = event;
    const [file,] = files || [];
    if (!file) {
      return;
    }
    Predictions.identify({
      entities: {
        source: {
          file,
        },
        collection: true,
        celebrityDetection: true
      }
    }).then(result => {
      console.log(result);
      const entities = result.entities;
      let imageId = "";
      entities.forEach(({ boundingBox, metadata: { name, externalImageId } }) => {
        const {
          width, // ratio of overall image width
          height, // ratio of overall image height
          left, // left coordinate as a ratio of overall image width
          top // top coordinate as a ratio of overall image height
        } = boundingBox;
        imageId = externalImageId;
        console.log({ name });
      });
      if (imageId) {
        // Fetch the matched image from S3, using its externalImageId as the key
        Storage.get("", {
          customPrefix: {
            public: imageId
          },
          level: "public",
        }).then(setSrc);
      }
      console.log({ entities });
      setResponse(imageId);
    })
    .catch(err => console.log(err));
  }

  return (
    <div className="Text">
      <div>
        <h3>Entity identification</h3>
        <input type="file" onChange={identifyFromFile}></input>
        <p>{response}</p>
        { src && <img src={src}></img>}
      </div>
    </div>
  );
}
To learn more about the Predictions category, visit our documentation.
Feedback
We hope you like these new features! Let us know how we are doing, and submit any feedback in the Amplify Framework GitHub repository. You can read more about AWS Amplify on the AWS Amplify website.