Thursday, November 2, 2017

IBM Watson Conversation Service – Simple greeting



Objective:
The objective of this tutorial is to develop a simple conversation app that greets users in response to specific keywords such as hello and bye, as shown in the screenshot below.

Overview:
The Conversation service is one of the modules in IBM Watson cognitive services that helps you quickly build and deploy chatbots. You can build chatbots that understand natural language and deploy them on messaging platforms and websites, on any device. The features of this service include the following:
Developer friendly
Easy to begin, easy to use. Get faster time to value, and integrate across channels, networks and environments.

Enterprise grade
Conversation features a reliable infrastructure that scales with individual use cases. Platform support from IBM gives you the backing you need.

Robust and secure
Own your data. IBM protects your privacy, allowing you to opt out of data sharing. Built on IBM Cloud and featuring reliable tooling with industry-leading security.

Steps:
1. Go to the Conversation service (https://console.eu-gb.bluemix.net/catalog/services/conversation/) and either sign up for a free Bluemix account or log in.
2. After you log in, type conversation-tutorial in the Service name field of the Conversation page and click Create.

Step 1: Launch the tool:
3. After you create the service instance, you'll land on the dashboard for the instance. Launch the Conversation tool from here. Click Manage, then Launch tool.

You might be prompted to log in to the tool separately. If so, provide your IBM Bluemix credentials to log in.
Step 2: Create a workspace:
4. Now create a workspace. A workspace is a container for the artifacts that define the conversation flow.
5. In the Conversation tool, click Create.

6. Give your workspace the name Conversation tutorial and click Create. You'll land on the Intents tab of your new workspace.

Step 3: Create intents
7. An intent represents the purpose of a user's input. You can think of intents as the actions your users might want to perform with your application. For this example, we're going to keep things simple and define only two intents: one for saying hello, and one for saying goodbye.
8. Make sure you're on the Intents tab. (You should already be there, if you just created the workspace.)
9. Click Create new. Name the intent "hello".
10. Type hello as a User example and press Enter.
Examples tell the Conversation service what kinds of user input you want to match to the intent. The more examples you provide, the more accurate the service can be at recognizing user intents.

When you press Enter, you will see the confirmation message shown below.

11. Add four more examples and click Done to finish creating the #hello intent:
good morning
greetings
hi
howdy


12. Create another intent named #goodbye with these five examples:
bye
farewell
goodbye
I'm done
see you later

13. You've created two intents, #hello and #goodbye, and provided example user input to train Watson to recognize these intents in your users' input.
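Conceptually, the training step above maps example utterances to intent names. The Conversation service trains a statistical classifier under the hood; the toy word-overlap matcher below is only a sketch of the idea, and all class and method names in it are hypothetical, not part of any Watson SDK.

```java
import java.util.*;

// Toy illustration of intent matching: stores example utterances per intent
// and picks the intent whose examples share the most words with the input.
// The real Conversation service trains a machine-learning classifier; this
// naive matcher only illustrates the example-to-intent mapping.
public class IntentMatcher {
    private final Map<String, List<String>> examples = new HashMap<>();

    public void addExample(String intent, String example) {
        examples.computeIfAbsent(intent, k -> new ArrayList<>()).add(example.toLowerCase());
    }

    // Returns the intent whose examples overlap the input the most (null if none).
    public String classify(String input) {
        Set<String> words = new HashSet<>(Arrays.asList(input.toLowerCase().split("\\s+")));
        String best = null;
        int bestScore = 0;
        for (Map.Entry<String, List<String>> e : examples.entrySet()) {
            int score = 0;
            for (String ex : e.getValue())
                for (String w : ex.split("\\s+"))
                    if (words.contains(w)) score++;
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        IntentMatcher m = new IntentMatcher();
        for (String ex : new String[]{"hello", "good morning", "greetings", "hi", "howdy"})
            m.addExample("hello", ex);
        for (String ex : new String[]{"bye", "farewell", "goodbye", "I'm done", "see you later"})
            m.addExample("goodbye", ex);
        System.out.println(m.classify("good morning to you")); // hello
        System.out.println(m.classify("ok bye now"));          // goodbye
    }
}
```

The real service generalizes far beyond word overlap, which is why more examples per intent improve accuracy.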


Step 4: Build a dialog
14. A dialog defines the flow of your conversation in the form of a logic tree. Each node of the tree has a condition that triggers it, based on user input. We'll create a simple dialog that handles our #hello and #goodbye intents, each with a single node.
Adding a start node
15. In the Conversation tool, click the Dialog tab, then click Create.

You'll see two nodes:
Welcome: Contains a greeting that is displayed to your users when they first engage with the bot.
Anything else: Contains phrases that are used to reply to users when their input is not recognized.

16. Click the Welcome node to open it in the edit view.
17. Replace the default response with the text, Welcome to the Conversation tutorial!.

18. Click the close button to close the edit view.
You created a dialog node that is triggered by the welcome condition, which is a special condition that indicates that the user has started a new conversation. Your node specifies that when a new conversation starts, the system should respond with the welcome message.
Testing the start node
19. You can test your dialog at any time to verify how it responds. Let's test it now.
20. Click the "Try it out" icon to open the "Try it out" pane. You should see your welcome message.

Adding nodes to handle intents
21. Now let's add nodes to handle our intents between the Welcome node and the Anything else node.
22. Click the More icon on the Welcome node, and then select Add node below.

23. Type #hello in the Enter a condition field of this node. Then select the #hello option.
24. Add the response, Good day to you.
25. Click the close button to close the edit view.

26. Click the More icon  on this node and then select Add node below to create a peer node. In the peer node, specify #goodbye as the condition, and OK. See you later! as the response.
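The dialog built in the steps above is a small logic tree: nodes are evaluated in order, the first node whose condition matches the recognized intent supplies the response, and the Anything else node is the fallback. A minimal sketch of that evaluation loop (hypothetical classes, not the Watson runtime):

```java
import java.util.*;

// Minimal sketch of dialog evaluation: nodes are checked top to bottom, and
// the first node whose condition matches the recognized intent supplies the
// response, mirroring the Welcome / #hello / #goodbye / Anything else tree
// built in the tool. These classes are illustrative, not Watson code.
public class DialogTree {
    static class Node {
        final String condition; // intent name, or "anything_else" as fallback
        final String response;
        Node(String condition, String response) {
            this.condition = condition;
            this.response = response;
        }
    }

    private final List<Node> nodes = new ArrayList<>();

    public void addNode(String condition, String response) {
        nodes.add(new Node(condition, response));
    }

    // Returns the response of the first node matching the intent,
    // falling back to the "anything_else" node if nothing matches.
    public String respond(String intent) {
        for (Node n : nodes)
            if (n.condition.equals(intent)) return n.response;
        for (Node n : nodes)
            if (n.condition.equals("anything_else")) return n.response;
        return "";
    }

    public static void main(String[] args) {
        DialogTree tree = new DialogTree();
        tree.addNode("#hello", "Good day to you.");
        tree.addNode("#goodbye", "OK. See you later!");
        tree.addNode("anything_else", "I didn't understand. You can try rephrasing.");
        System.out.println(tree.respond("#hello"));   // Good day to you.
        System.out.println(tree.respond("#weather")); // falls through to anything_else
    }
}
```

Node order matters in the real tool for the same reason it matters in this sketch: conditions are tried from the top down.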

Testing intent recognition
27. You built a simple dialog to recognize and respond to both hello and goodbye inputs. Let's see how well it works.
28. Click the "Try it out" icon to open the "Try it out" pane. There's that reassuring welcome message.
29. At the bottom of the pane, type Hello and press Enter. The output indicates that the #hello intent was recognized, and the appropriate response (Good day to you.) appears.
30. Try the following input:
bye
howdy
see ya
good morning
sayonara


31. Watson can recognize your intents even when your input doesn't exactly match the examples you included. The dialog uses intents to identify the purpose of the user's input regardless of the precise wording used, and then responds in the way you specify.

Result of building a dialog
32. That's it. You created a simple conversation with two intents and a dialog to recognize them.
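Once the workspace works in the tool, an application can send user input to it over the Conversation REST API's v1 message endpoint. The sketch below only builds the request URL and JSON body; the endpoint layout and version date shown are assumptions based on the service as it existed at the time of writing, so take the real host, workspace ID, and credentials from your instance's service credentials page.

```java
// Sketch of preparing a call to the Conversation v1 message endpoint.
// The host, path layout, and version date below are assumptions; the
// actual HTTP call (e.g. HttpURLConnection with Basic auth using your
// service credentials) is left out of this sketch.
public class MessageRequest {
    // Assumed layout: POST {host}/v1/workspaces/{id}/message?version={date}
    public static String buildUrl(String host, String workspaceId, String versionDate) {
        return host + "/v1/workspaces/" + workspaceId + "/message?version=" + versionDate;
    }

    // Builds the minimal JSON body the endpoint expects: {"input":{"text":"..."}}
    public static String buildPayload(String userText) {
        return "{\"input\":{\"text\":\"" + userText.replace("\"", "\\\"") + "\"}}";
    }

    public static void main(String[] args) {
        // "YOUR_WORKSPACE_ID" is a placeholder; copy the real ID from the tool.
        String url = buildUrl("https://gateway.watsonplatform.net/conversation/api",
                              "YOUR_WORKSPACE_ID", "2017-05-26");
        System.out.println(url);
        System.out.println(buildPayload("Hello"));
    }
}
```

The JSON response from this endpoint includes the recognized intents with confidence scores and the dialog's output text, i.e. the same information the "Try it out" pane displays.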


Wednesday, November 1, 2017

IBM Watson Overview



What is IBM Watson?
IBM Watson is a system based on cognitive computing. Put simply, IBM Watson is a system that provides answers to your questions.
It is not explicitly programmed; like humans (though not, of course, emotionally), such systems learn from experts and from every interaction, and they are uniquely able to find patterns in big data. They learn by using advanced algorithms to sense, predict, and infer. In doing so, they augment human intelligence, allowing individuals to make faster and more informed decisions.

What is Cognitive computing?
Cognitive computing describes technology platforms that encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.

What are the features of Cognitive Computing?
Here are some of the features of Cognitive Computing:
Adaptive: They may learn as information changes, and as goals and requirements evolve. They may resolve ambiguity and tolerate unpredictability. They may be engineered to feed on dynamic data in real time, or near real time.
Interactive: They may interact easily with users so that those users can define their needs comfortably. They may also interact with other processors, devices, and Cloud services, as well as with people.
Iterative and stateful: They may aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They may "remember" previous interactions in a process and return information that is suitable for the specific application at that point in time.
Contextual: They may understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

What are IBM Watson Services?
IBM’s Watson services are classified under the following categories:

  • Conversation
  • Knowledge
  • Vision
  • Speech
  • Language
  • Empathy

Conversation:
This module helps to quickly build and deploy chatbots and virtual agents across a variety of channels, including mobile devices, messaging platforms, and even robots.

Conversation
Build chatbots that understand natural language and deploy them on messaging platforms and websites, on any device, with the following features:

Developer friendly
Easy to begin, easy to use. Get faster time to value, and integrate across channels, networks and environments.

Enterprise grade
Conversation features a reliable infrastructure that scales with individual use cases. Platform support from IBM gives you the backing you need.

Robust and secure
Own your data. IBM protects your privacy, allowing you to opt out of data sharing. Built on IBM Cloud and featuring reliable tooling with industry-leading security.

Virtual Agent
This is a bot for customer service with the following features:
  • Pre-trained industry & domain knowledge
  • Personalized configuration
  • Engagement metrics dashboard
  • User friendly tooling
  • Self-service delivered by bots
  • Deep analytic capabilities
  • Up and running in no time


Knowledge:
This module gets insights through accelerated data optimization capabilities.
Discovery
Unlock hidden value in data to find answers, monitor trends and surface patterns with the world’s most advanced cloud-native insight engine with the following features:

Rapid results
Spend less time struggling with your data. Automated ingestion and integrated natural language processing in a fully managed cloud service remove the complexity from dealing with natural language content.

Domain intelligence
Easily adapt Discovery’s understanding of your corpus with integrated machine learning to surface the most relevant answers. Teach Discovery to apply the knowledge of unique entities and relations in your industry or organization with Watson Knowledge Studio.

AI ready for business
Uncover deep connections throughout your data by using advanced AI functions out of the box, such as natural language queries, passage retrieval, relevancy training, relationship graphs and anomaly detection.
Discovery News
Access pre-enriched news content in real-time with the following features:

Intelligence infused news:
Explore news and blogs with smarter news from Watson that includes concepts, sentiment, relationships and categories. Watson also identifies important meta-information – like authors, publication dates, and relevant keywords.

See the big picture:
By discovering trends and patterns in sentiment with aggregate analysis, you’ll see new perspectives on how news unfolds across the globe. You can also track recent historical trends across millions of articles and stories.

Stay alert:
Surface anomalies, key events and embed news alerts into your application and workflows. Stay abreast of the latest information about key competitors, product and brand perception, events, industry experts and more.
Natural Language Understanding
Natural language processing for advanced text analytics with the following features:

Uncover insights from structured and unstructured data:
Analyze text to extract meta-data from content such as concepts, entities, keywords, categories, relations and semantic roles.

Understand sentiment and emotion:
Returns both overall sentiment and emotion for a document, and targeted sentiment and emotion towards keywords in the text for deeper analysis.

Grasp multiple languages:
NLU understands text in nine languages and can be adapted to domain-specific content through customization with Watson Knowledge Studio.
Knowledge studio
Teach Watson to discover meaningful insights in unstructured text.

It is a cloud-based application that enables developers and domain experts to collaborate and create custom annotator components for unique industries.

These annotators can identify mentions and relationships in unstructured data and be easily administered throughout their lifecycle using one common tool.
Document Conversion
Document Conversion features are now available in Watson Discovery. Its features include:

Convert to different file types:
Convert a single HTML, PDF, or Microsoft Word™ document into HTML, plain text, or a set of JSON-formatted Answer units that can be used with other Watson services.

Convert in multiple languages:
Document Conversion supports content in English, French, German, Japanese, Italian, Brazilian Portuguese and Spanish.

Vision:
Identify, tag content then analyze and extract detailed information found in an image.
Visual Recognition
Tag, classify and search visual content using Machine Learning with the following features:

Classify virtually any visual content:
Visual Recognition understands the contents of images. Analyze images for scenes, objects, faces, colors, food, and other subjects that can give you insights into your visual content.

Create your own classifiers:
Create and train your custom image classifiers using your own image collections.

Speech:
Convert text and speech with the ability to customize models.
Speech to Text
Easily convert the audio and voice into written text with the following features:

Powerful real-time speech recognition:
Automatically transcribe audio from seven languages in real time. Rapidly identify and transcribe what is being discussed, even from lower-quality audio, across a variety of audio formats and programming interfaces (HTTP REST, WebSocket, asynchronous HTTP).

Highly accurate speech engine:
Customize your model to improve accuracy for the language and content you care most about, such as product names, sensitive subjects, or names of individuals. The engine recognizes different speakers in your audio and can spot specified keywords in real time with high accuracy and confidence.

Built to support various use cases:
Transcribe audio for use cases ranging from real-time transcription of microphone audio to analyzing thousands of audio recordings from your call center to provide meaningful analytics.
Text to Speech
Convert written text into natural-sounding audio in a variety of languages and voices with the following features:

Enable your systems to “speak” like humans:
Develop interactive toys for children, automate call center interactions, communicate directions hands-free, and beyond

Customize and control pronunciation:
Deliver a seamless voice interaction that caters to your audience with control over every word

Synthesizes across languages and voices:
Convert in English, French, German, Italian, Japanese, Spanish and Brazilian Portuguese. Detects different dialects, such as U.S. and UK English and Castilian, Latin American, and North American Spanish.

Language:
Analyze text and extract meta-data from unstructured content.
Language Translator
Translate text from one language to another. Take news from across the globe and present it in your language, communicate with your customers in their own language, and more with the following features:

Translate your content
Language Translator translates text from one language to another. The service offers multiple domain-specific models: Conversational (English, Arabic, French, Portuguese and Spanish), News (English, Arabic, French, German, Italian, Japanese, Portuguese, Korean and Spanish) and Patent (English, Chinese, Spanish, Korean and Portuguese).

Customize your translations
Customize the translations based on your unique terminology and language. Language Translator supports three types of customization: forced glossary, parallel phrases and corpus-level customization.

It’s your data
Unlike other translation services, IBM protects your privacy, you own your data.
Natural Language Classifier
Interpret and classify natural language with confidence with the following features:

Classify text passages
Understands the intent behind text and returns a corresponding classification, complete with a confidence score.

Evaluate results
Update training data based on classification results and create and train a classifier using updated training data.

Build conversational applications
Answer questions in a contact center, create chatbots, and categorize volumes of written content and more.
Retrieve and Rank
Retrieve and Rank features are now available in Watson Discovery; Retrieve and Rank is deprecated as a stand-alone service.

Empathy:
Understand tone, personality, and emotional state.
Personality Insights
Predict personality characteristics, needs and values through written text. Understand your customers’ habits and preferences on an individual level, and at scale. The following are a few of its features:

Get detailed personality portraits:
Use linguistic analytics to infer individuals' personality characteristics, including Big Five, Needs, and Values, from digital communications such as email, blogs, tweets, and forum posts.

Understand consumption preferences:
Look at a user’s inclination to pursue different products, services, and activities, including shopping, music, movies, and more.

Tailor the customer experience:
Understand individual customers for segmentation, personalized product recommendations, and highly targeted messaging.
Tone Analyzer
Understand emotions and communication style in text with the following features:

Conduct social listening:
Analyze emotions and tones in what people write online, like tweets or reviews. Predict whether they are happy, sad, confident, and more.

Enhance customer service:
Monitor customer service and support conversations so you can respond to your customers appropriately and at scale. See if customers are satisfied or frustrated, and if agents are polite and sympathetic.

Integrate with chatbots:
Enable your chatbot to detect customer tones so you can build dialog strategies to adjust the conversation accordingly.

Monday, October 16, 2017

Android: WebView Load Webpages from URL And Assets



Objective: 
The objective of this Android tutorial is to use WebView to load web pages from a URL and from the assets folder. Refer to the screenshots below.

Screenshots:
Part 1: Opening an online website (like google.com)
1. The home page loads the URL http://www.google.com in the WebView


2. Search for any keyword (like gmail) in Google, which displays the search results as follows:


Part 2: Opening a custom website (html) from local assets folder
1. Let’s imagine we have local HTML files and images under the \src\main\assets folder of the project


2. index.html displays the image file one.jpg and has a hyperlink ("here") to index2.html.


3. On clicking the hyperlink “here” in index.html, it takes you to index2.html, as shown below

4. Now if you open it on a mobile device, you will see the following output:

5. On clicking “here”, it opens index2.html, as shown below.


6. On clicking the Back button, it brings you back to the previous page. On clicking the Back button again, it closes the application.

Instructions to setup:
1. Git clone the project and you should see the project \5_Seekbar_N_Progressbar downloaded.
2. Now open the project in Android Studio
3. Press the keyboard – Shift + F10 to run the App, select the configured virtual device and click OK.

4. You can now see the output in the Android Emulator.

Source Code Explanation:
1. activity_main.xml (under \src\res\layout\):
1. The overall layout is set as "RelativeLayout"
2. Set the WebView component with:
id as "webView"
layout_width as "match_parent"

2. MainActivity.java:
1. Instantiate the WebView by using findViewById(), passing the id defined in activity_main.xml.
2. Set localURL to point to index.html under the assets folder using the file:/// scheme (i.e. file:///android_asset/index.html).
3. Load the URL using the webView.loadUrl() method and enable JavaScript with webView.getSettings().setJavaScriptEnabled(true).
4. Set the WebViewClient to an instance of new WebViewClient() as shown in the code below.
5. The above logic will load the index.html file.
6. In case you want to load google.com instead, just uncomment webView.loadUrl("http://www.google.com") and comment out the localURL call.
webView.loadUrl("http://www.google.com");
//webView.loadUrl(localURL);



3. strings.xml (under \src\res\values\)
All string values defined


4. colors.xml (under \src\res\values\):
Default standard colors defined


5. styles.xml (under \src\res\values\)
All the default styles defined.


6. AndroidManifest.xml (under \src\main):
Add the following line to enable the Internet permission:
<uses-permission android:name="android.permission.INTERNET" />


7. index.html (under src\main\assets\)
The word “here” has a hyperlink (a href) to index2.html.


8. index2.html (under src\main\assets\):


Android: Seekbar and Progressbar



Objective: 
The objective of this Android tutorial is to use SeekBar and ProgressBar. Refer to the screenshots below.

Screenshots:
1. On swiping the SeekBar to the right, the ProgressBar below moves along with it.


Instructions to setup:
1. Git clone the project and you should see the project \5_Seekbar_N_Progressbar downloaded.
2. Now open the project in Android Studio
3. Press the keyboard – Shift + F10 to run the App, select the configured virtual device and click OK.

4. You can now see the output in the Android Emulator.

Source Code Explanation:
1. activity_main.xml (under \src\res\layout\):
1. The overall layout is set as "RelativeLayout"
2. Set the SeekBar component with:
id as "seekBar"
layout_width as "match_parent"
3. Set the ProgressBar component with:
id as "progressBar"
layout_width as "match_parent"

2. MainActivity.java:
1. Instantiate the ProgressBar, SeekBar, and Button by using findViewById(), passing their respective ids as defined in activity_main.xml.
2. Set the max value of the SeekBar to 100.
3. Set an OnSeekBarChangeListener on the SeekBar, and inside its onProgressChanged() callback set the ProgressBar value (using the setProgress() method) to the incoming progress value.
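The listener wiring described above can be sketched outside Android with plain interfaces. The FakeSeekBar and FakeProgressBar classes below are stand-ins for the real Android framework types, used only to show the callback pattern: the listener copies the incoming progress value straight into the progress bar.

```java
// Plain-Java sketch of the SeekBar -> ProgressBar wiring. Android's
// SeekBar.OnSeekBarChangeListener is modeled by a simple callback interface
// so the pattern can run outside Android; the Fake* classes are stand-ins,
// not real framework types.
public class SeekBarSync {
    interface OnChangeListener {
        void onProgressChanged(int progress);
    }

    static class FakeSeekBar {
        private OnChangeListener listener;
        void setOnChangeListener(OnChangeListener l) { this.listener = l; }
        // Simulates the user dragging the thumb to a new position.
        void drag(int progress) {
            if (listener != null) listener.onProgressChanged(progress);
        }
    }

    static class FakeProgressBar {
        private int progress;
        void setProgress(int p) { this.progress = p; }
        int getProgress() { return progress; }
    }

    public static void main(String[] args) {
        FakeSeekBar seekBar = new FakeSeekBar();
        FakeProgressBar progressBar = new FakeProgressBar();
        // As in MainActivity: the listener forwards the incoming progress
        // value to the progress bar.
        seekBar.setOnChangeListener(progressBar::setProgress);
        seekBar.drag(42);
        System.out.println(progressBar.getProgress()); // 42
    }
}
```

In the real app the same forwarding happens inside onProgressChanged(SeekBar seekBar, int progress, boolean fromUser), which Android invokes as the thumb moves.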

3. strings.xml (under \src\res\values\)
All string values defined

4. colors.xml (under \src\res\values\):
Default standard colors defined


5. styles.xml (under \src\res\values\)
All the default styles defined.