28th Dec 2023

Unlocking the Multimodal: Your Guide to Using Gemini AI

gemini-model

Google's next-generation language model, Gemini AI, is not your average chatbot. It's a multimodal phenomenon that can comprehend and work with code in addition to text, images, and audio. But how can its full potential be realised? This blog serves as your companion to understanding Gemini AI, from navigating its interface to discovering its varied features.

What is Gemini AI?

Simply put, Gemini is a family of cutting-edge multimodal large language models (LLMs). Unlike its forerunners, LaMDA and PaLM 2, Gemini is not just adept at processing text. It's a sensory genius, capable of handling and comprehending a diverse range of inputs, including:

  • Text: Like any good LLM, Gemini devours and understands written language with impressive fluency.
  • Images: From photographs to paintings, Gemini can analyze visual information, extracting meaning and context.
  • Audio: Whether it's spoken language, music, or even environmental sounds, Gemini listens intently and deciphers the aural landscape.

This multimodal mastery sets Gemini apart, allowing it to grasp the world in a way that previous AI models could only dream of. Imagine having a conversation with a machine that not only understands your words but also picks up on your tone, facial expressions, and the background music playing in the room. That's the kind of nuanced interaction Gemini promises.

The Gemini Family Tree:

Gemini comes in three flavors, each catering to specific needs:

  • Gemini Ultra: The big kahuna, Ultra tackles the most complex tasks, flexing its muscles in areas like scientific exploration and advanced reasoning.
  • Gemini Pro: The versatile all-rounder, Pro shines in a wide range of applications, from creative writing and code generation to data analysis and education.
  • Gemini Nano: The pocket-sized powerhouse, Nano brings AI smarts to resource-constrained devices, making it ideal for on-device tasks like voice assistants and personalized recommendations.

Why is Gemini a Game Changer?

Gemini's potential applications are vast and exciting. Here are just a few glimpses into its future:

  • Revolutionizing Education: Imagine textbooks that come alive, explaining complex concepts with interactive visuals and personalized learning paths. Gemini could be the key to unlocking a new era of engaging and effective education.
  • Boosting Scientific Discovery: By analyzing vast quantities of data from different sources, Gemini could accelerate scientific breakthroughs in fields like medicine, materials science, and astronomy.
  • Enhancing Creativity: Need a fresh story idea or a catchy tune? Gemini can be your AI muse, sparking creative inspiration and helping you bring your ideas to life.
  • Building Better Machines: From self-driving cars that understand the world around them to robots that can collaborate with humans seamlessly, Gemini paves the way for a future where AI is our intelligent partner, not just a tool.

Step 1: Bard as Your Launchpad

Bard is powered by Gemini! Just go to https://bard.google.com/ and log in with your Google account. Let's dig right in: this opens up a world of possibilities!

Step 2: Navigate the Interface

Navigating Bard is a seamless experience tailored to your needs. Jumpstart your creativity with a library of pre-written prompts, then refine Bard's responses through adjustable settings and writing styles. Unleash its full potential with powerful add-ons like YouTube, Maps, and Google Workspace, opening a world of possibilities for exploration and discovery. So dive in, personalize, and let Bard guide you on a journey of boundless creation.

Step 3: Learn the Magic of Multimodality

Unleash the full power of Gemini and unlock hidden dimensions of communication through its vibrant multimodalities. Step into an enriching world of text: engage in profound dialogues, shatter language barriers with seamless translations, spark your creativity with original content, and delve deeper through insightful questions. Beyond words, let visuals come alive: upload or link photos, and watch as Bard dives into their depths, weaving captivating descriptions and crafting text formats that capture the essence of the image. Let sounds tell their stories: play any audio, from symphonies to simple snippets, and witness Bard's keen interpretations and responsive reactions. Even code finds its place in Gemini's grasp: write lines of logic and receive invaluable support, from debugging fixes to optimization suggestions and even multilingual code translations.

Step 4: Look Past the Fundamentals

Shape your own AI companions through bespoke fine-tuning, crafted with specific tasks and datasets in mind. Weave Gemini into the tapestry of your workflow: leverage powerful APIs to unlock automation on a grand scale, integrating its prowess seamlessly into your applications. Embrace the vibrant Gemini community: join a hive mind of enthusiastic users, sharing tips, gaining knowledge, and co-creating the future of this dynamic platform. With each step forward, you unveil new angles of Gemini's brilliance, pushing the boundaries of communication and forging connections across realms. So dive deeper, experiment boldly, and let Gemini guide you on an extraordinary adventure of discovery.

Get the API Key for Gemini AI

Before we start running the application, remember that these application templates are built on the Gemini AI API, which you can access through Google AI Studio. To obtain an API key, visit Google AI Studio (https://ai.google.dev/tutorials/setup).
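Rather than hard-coding the key into your source files, you can load it from an environment variable. A minimal sketch (the variable name GOOGLE_API_KEY is a common convention, chosen here for illustration):

```python
import os
import google.generativeai as genai

# Read the API key from an environment variable instead of hard-coding it
genai.configure(api_key=os.environ.get("GOOGLE_API_KEY"))
```

This keeps the key out of version control; the rest of the examples below hard-code a placeholder for brevity.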

Using the Gemini API in Python apps:

Prerequisites

  • Python 3.9+

Setup

Install the Python SDK

The google-generativeai package includes the Python SDK for the Gemini API. Use pip to install the dependency:

  pip install -q -U google-generativeai
1. Make a Python file (such as app.py) and import the required modules into it.
                                
                                    
  import google.generativeai as genai
  from flask import Flask, request, jsonify

  # Configure the SDK with your API key
  genai.configure(api_key="Your_api_key")

  app = Flask(__name__)

  @app.route("/create_text", methods=["POST"])
  def create_text():
      prompt = request.json.get("prompt")
      if not prompt:
          return jsonify({"error": "No prompt provided"}), 400
      try:
          model = genai.GenerativeModel('gemini-pro')
          response = model.generate_content(prompt)
          return jsonify({"response": response.text})
      except Exception as e:
          return jsonify({"error": str(e)}), 500

  if __name__ == "__main__":
      app.run(debug=True)
2. Run the application

  python app.py
3. Test in Postman
  • Create a new POST request to http://127.0.0.1:5000/create_text
  • Click the "raw" and "JSON" format options in the Body tab.
  • In the body of the request, provide the prompt.
payload-image
  • Send the request. You should receive a JSON response with the generated text.
python-desc-image
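If you prefer testing from code rather than Postman, a small client can exercise the same endpoint. Below is a sketch using only the Python standard library; it assumes the Flask app above is running locally on port 5000:

```python
import json
import urllib.request

def create_text(prompt, url="http://127.0.0.1:5000/create_text"):
    """POST a prompt to the local /create_text endpoint and return the generated text."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["response"]
```

With the server running, `print(create_text("Write a haiku about the ocean"))` should print the model's reply.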
Multimodal and Gemini Chat

So far we have only used text-based prompts and questions to test the Gemini model. But according to Google, the Gemini Pro model is trained to be multimodal from the beginning. As a result, Gemini includes a model known as gemini-pro-vision that can generate text from images and other inputs. Consider the picture below.

bird-image
                                
                                    
  import os
  import google.generativeai as genai
  from flask import Flask, request, jsonify
  from werkzeug.utils import secure_filename
  import PIL.Image

  # Configure the SDK with your API key
  genai.configure(api_key="Your_api_key")

  app = Flask(__name__)

  UPLOAD_FOLDER = "uploads"  # Folder to store uploaded images
  ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg"}
  os.makedirs(UPLOAD_FOLDER, exist_ok=True)  # Make sure the folder exists

  def allowed_file(filename):
      return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

  @app.route("/generate_text", methods=["POST"])
  def generate_text():
      if "image" not in request.files:
          return jsonify({"error": "No image file uploaded"}), 400

      file = request.files["image"]
      if file.filename == "":
          return jsonify({"error": "No image selected"}), 400

      if not allowed_file(file.filename):
          return jsonify({"error": "Unsupported file type"}), 400

      filename = secure_filename(file.filename)
      filepath = os.path.join(UPLOAD_FOLDER, filename)
      file.save(filepath)

      try:
          image = PIL.Image.open(filepath)
          vision_model = genai.GenerativeModel('gemini-pro-vision')
          response = vision_model.generate_content(["Provide details about the picture", image])
          return jsonify({"response": response.text})

      except Exception as e:
          return jsonify({"error": str(e)}), 500

  if __name__ == "__main__":
      app.run(debug=True)
Response:
python-text
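This endpoint can also be exercised without Postman. The sketch below hand-builds the multipart/form-data body that the Flask route expects, using only the standard library; it assumes the app above is running locally and that the image path you pass exists:

```python
import uuid
import urllib.request

def encode_multipart(field_name, filename, data, content_type="image/jpeg"):
    """Build a multipart/form-data body containing a single file field."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode("utf-8") + data + f"\r\n--{boundary}--\r\n".encode("utf-8")
    return body, f"multipart/form-data; boundary={boundary}"

def describe_image(path, url="http://127.0.0.1:5000/generate_text"):
    """Upload an image to the local /generate_text endpoint and return the raw JSON reply."""
    with open(path, "rb") as f:
        body, content_type = encode_multipart("image", path, f.read())
    req = urllib.request.Request(url, data=body, headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

With the server running, `print(describe_image("bird.jpg"))` should print the JSON response containing the description.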

Using the Gemini API in Node.js apps:

Prerequisites

  • This quickstart assumes you are familiar with Node.js application development. Make sure your development environment meets the following prerequisites:
  • Node.js version 18+
  • npm

Setup

Before you can call the Gemini API, you must first set up your project: install the SDK package, set up your API key, and initialise the model.

Install the SDK package

To use the Gemini API in your own application, you need to install the GoogleGenerativeAI package for Node.js:

  npm install @google/generative-ai
Generate text from text-only input

To generate text output when the prompt input consists solely of text, use the gemini-pro model and the generateContent function:

                                
                                     	
  const express = require('express');
  const { GoogleGenerativeAI } = require('@google/generative-ai');
  
  const app = express();
  app.use(express.json());
  
  const genAI = new GoogleGenerativeAI("Your_api_key"); // Replace with your API key
  
  app.post('/generate-text', async (req, res) => {
      try {
          const { prompt } = req.body;
          const model = genAI.getGenerativeModel({ model: "gemini-pro" });
          const result = await model.generateContent(prompt);
          const response = await result.response;
          const text = response.text();
          res.json({ "response": text });
  
      } catch (error) {
          console.error(error);
          res.status(500).json({ error: 'Failed to generate text' });
      }
  });
  
  app.listen(3000, () => {
      console.log('Server listening on port 3000');
  });
                                
                            

To test in Postman:

1. Start your Node.js server:

  node index.js
2. Create a POST request in Postman:
  • Set URL to http://localhost:3000/generate-text
  • Set body to raw and JSON format
  • Add a JSON object with a prompt property, e.g., {"prompt": "Write a one-line quote about Node.js"}.
3. Send the request and view the response:
  • The response should contain the generated text.
generate-text

Generate text from text-and-image input (multimodal)

Gemini provides a multimodal model (gemini-pro-vision), so you can input both text and images.

Use the gemini-pro-vision model and the generateContent function to produce text output when the prompt input contains both images and text:

                                
                                    
  const express = require('express');
  const multer = require('multer');
  const { GoogleGenerativeAI } = require('@google/generative-ai');
  const fs = require("fs");
  const app = express();
  const upload = multer({ dest: 'uploads/' }); // Set storage path
  const genAI = new GoogleGenerativeAI("Your_api_key");
  
  app.post('/process-image', upload.single('image'), async (req, res) => {
      try {
          const imagePath = req.file.path;
          const imagePart = fileToGenerativePart(imagePath, "image/jpeg");
          const model = genAI.getGenerativeModel({ model: "gemini-pro-vision" });
          const prompt = "Provide details about the picture";
          const result = await model.generateContent([prompt, imagePart]);
          const response = await result.response;
          const text = response.text();
          res.json({ "response": text });
  
      } catch (error) {
          console.error(error);
          res.status(500).json({ error: 'Failed to process image' });
      }
  });
  
  function fileToGenerativePart(path, mimeType) {
      return {
          inlineData: {
              data: Buffer.from(fs.readFileSync(path)).toString("base64"),
              mimeType
          },
      };
  }
  
  app.listen(3000, () => {
      console.log('Server listening on port 3000');
  });
  
                                
                            

To test in Postman:

1. Start your Node.js server:

  node index.js
2. Create a POST request in Postman:
  • Set URL to http://localhost:3000/process-image
  • Go to the "Body" tab, select "form-data".
  • To upload an image file, add a key called image, set its type to File, and choose the file.
3. Send the request and view the response:
  • The response should contain the generated description of the uploaded image.
process-image

Conclusion

With its boundless capacity to promote innovation, boost output, and create new avenues for exploration, Gemini AI is an incredibly formidable instrument. By going beyond the fundamentals and embracing its multimodal potential, you can become a true Gemini specialist. Remember that the journey matters as much as the destination. So let's push the limits of what AI is capable of together by continuing to explore, asking questions, and getting involved in the community. Gemini AI has a promising future, and you have the power to shape it.

Let's turn your ideas into reality.