
9 posts tagged with "Arakoo"


Aditya Pandey · 11 min read

Generate an article using the OpenAI API in Java and EdgeChains.

Suppose you have to write an article about global warming. Normally you would hunt for information across various websites; this model generates the article for you from nothing more than its title. EdgeChains is a streamlined solution for developing GenAI applications, offering simplicity through a single script file and jsonnet file setup. It emphasizes versioning for prompts, automatic parallelism across various processors, fault tolerance, and scalability, making it a robust choice for chain-of-thought applications with extensive API integration and data sets. While LangChain primarily focuses on a specific set of principles, EdgeChains takes a unique stance, emphasizing declarative prompt and chain orchestration as pivotal components of its architecture. To delve deeper into EdgeChains and explore its capabilities, refer to the GitHub repository: https://github.com/arakoodev/edgechains#why-do-you-need-declarative-prompt--chain-orchestration-. It offers a comprehensive view of EdgeChains' vision and how it differentiates itself from LangChain.

Prerequisites

  1. Create accounts with OpenAI and Postgres (e.g., via Supabase) so you can retrieve the auth key, organization ID, and the other credentials the code needs.
  2. Download the EdgeChains JAR file from https://github.com/arakoodev/EdgeChains/releases.
  3. Download the .java and .jsonnet files and put them in the same folder.
  4. Adjust the file paths in the code to match your folder structure.

Configuration of the Database

  1. Go to the Supabase website (https://supabase.io) and sign up for an account.
  2. Create a new Project by clicking the “New Project” button.
  3. Configure your project settings including the project name, region, and the plan.
  4. Once your project is created, you’ll be directed to the project dashboard.
  5. Click the “Create Database” button to create a new PostgreSQL database.
  6. After the database is created, you can access its credentials, including the database URL, API URL, and service role key.

Explanation of the Code

  • Load the edgechains package.

  • Import the OPENAI_CHAT_COMPLETION_API constant, along with the other static constants defined in the OpenAI-related classes.

  • Import the Spring Framework classes and annotations.

  • The code relies on external libraries and dependencies, such as com.edgechain.lib and io.reactivex.rxjava3. These dependencies provide additional functionality and utilities for the code.

  • Classes such as OpenAiEndpoint, WikiEndpoint, ArkRequest, and CompletionRequest are used to interact with specific endpoints or APIs, such as OpenAI and Wikipedia.

  • RxJava and retry logic: the code uses RxJava and includes classes like ExponentialDelay and EdgeChain. These are used for implementing retry logic and handling asynchronous operations.

  • The code includes a constant, OPENAI_CHAT_COMPLETION_API, which represents the endpoint for OpenAI chat completion.

  • A class named WikiExample is present that includes several static variables and a JsonnetLoader instance. Here's an explanation of it:

  • Static Variables:

    • OPENAI_AUTH_KEY: This variable represents the OpenAI authentication key. It is a string that should be replaced with your actual OpenAI authentication key.
    • OPENAI_ORG_ID: This variable represents the OpenAI organization ID. It is a string that should be replaced with your actual OpenAI organization ID.
    • gpt3Endpoint: This variable is an instance of the OpenAiEndpoint class, which is used to communicate with OpenAI services.
    • gpt3StreamEndpoint: This variable is another instance of the OpenAiEndpoint class, which is likely used for streaming communication with OpenAI services.
    • wikiEndpoint: This variable is an instance of the WikiEndpoint class, which is used to communicate with the Wikipedia API.
  • JsonnetLoader

    • loader: This variable is an instance of the JsonnetLoader class, which is used to load and process Jsonnet files.
    • FileJsonnetLoader: This class is an implementation of the JsonnetLoader interface that loads Jsonnet files from the file system. Its constructor takes three arguments:
      • The first argument is the probability (in percent) of executing the first file. In this case, there is a 70% chance of executing ./wiki1.jsonnet.
      • The second argument is the path to the first Jsonnet file (./wiki1.jsonnet).
      • The third argument is the path to the second Jsonnet file (./wiki2.jsonnet).

The purpose of this code is to create a FileJsonnetLoader that picks between two Jsonnet files with a given probability: ./wiki1.jsonnet runs 70% of the time and ./wiki2.jsonnet the remaining 30%.
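Concretely, the loader is constructed like this (an excerpt from the full listing below):

// 70% chance wiki1.jsonnet is selected; 30% chance wiki2.jsonnet
private final JsonnetLoader loader =
    new FileJsonnetLoader(70, "./wiki1.jsonnet", "./wiki2.jsonnet");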

  • The main method is the entry point of the application.

  • Setting Server Port:
    • System.setProperty("server.port", "8080"): This line sets the server port to 8080. It configures the application to listen on port 8080 for incoming requests.
  • Configuring Properties:
    • Properties properties = new Properties(): This line creates a new instance of the Properties class, which is used to store key-value pairs of configuration properties.
    • properties.setProperty("cors.origins", "http://localhost:4200"): This line sets the CORS (Cross-Origin Resource Sharing) origins property to allow requests from http://localhost:4200. CORS is used to control access to resources from different origins.
  • Configuring JPA and Hibernate Properties:
    • properties.setProperty("spring.jpa.show-sql", "true"): This line sets the property to show SQL queries executed by JPA (Java Persistence API).
    • properties.setProperty("spring.jpa.properties.hibernate.format_sql", "true"): This line sets the property to format the SQL queries executed by Hibernate.
  • Configuring PostgreSQL Database Properties:
    • properties.setProperty("postgres.db.host", "jdbc:postgresql://db.rkkbllhnexkzjyxhgexm.supabase.co:5432/postgres"): This line sets the PostgreSQL database host URL.
    • properties.setProperty("postgres.db.username", "postgres"): This line sets the username for the PostgreSQL database.
    • properties.setProperty("postgres.db.password", "jtGhg7?JLhUF$fK"): This line sets the password for the PostgreSQL database.
  • Starting the Spring Boot Application:
    • new SpringApplicationBuilder(WikiExample.class).properties(properties).run(args): This line creates a new instance of SpringApplicationBuilder with the WikiExample class as the main application class. It sets the configured properties and runs the Spring Boot application.
  • Initializing Endpoints:
    • wikiEndpoint = new WikiEndpoint(): This line creates an instance of the WikiEndpoint class, which is used to communicate with the Wikipedia API.
    • gpt3Endpoint = new OpenAiEndpoint(...): This line creates an instance of the OpenAiEndpoint class, which is used to communicate with OpenAI services. It sets various parameters such as the OpenAI chat completion API, authentication key, organization ID, model, temperature, and retry delay.
    • gpt3StreamEndpoint = new OpenAiEndpoint(...): This line creates another instance of the OpenAiEndpoint class, used for streaming communication with OpenAI services. It sets the same parameters as gpt3Endpoint, plus an additional flag that enables streaming.


Article Writer Controller

  • A RestController class named ArticleController handles HTTP GET requests for the /article endpoint. Here's an explanation of the code within the class:
  1. @RestController Annotation:

    • This annotation is used to indicate that the class is a REST controller, which means it handles HTTP requests and returns the response in a RESTful manner.
  2. @GetMapping("/article") Annotation:

    • This annotation is used to map the HTTP GET requests with the /article endpoint to the generateArticle method.
  3. generateArticle Method:

    • This method is responsible for generating an article based on the provided query parameter.
    • It takes an ArkRequest object as a parameter, which is likely a custom request object that contains query parameters.
    • The method throws an Exception if any error occurs during the generation process.
  4. Generating the Prompt:

    • The method prepares a prompt for the article generation by concatenating the string "Write an article about " with the value of the title query parameter from the arkRequest object.
  5. Sending a Request to the OpenAI API:

    • The method uses the gpt3Endpoint instance (which is an instance of the OpenAiEndpoint class) to send a request to the OpenAI API for generating the article.
    • It uses the chatCompletion method of the gpt3Endpoint to perform the chat completion.
    • The chatCompletion method takes the prompt, a chain name ("React-Chain"), and the arkRequest object as parameters.
    • The generated article is stored in the gptre variable.
  6. Returning the Generated Article:

    • The method returns the generated article as a response to the HTTP GET request.

Postman Testing

After all this, we will use Postman to test the endpoint by sending requests in the following manner:
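A sample request looks like this (assuming the application is running locally on port 8080, as configured in the code below):

GET http://localhost:8080/article?title=Global%20Warming

The response body is the generated article text returned by the generateArticle method.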

Full Working Code

package com.edgechain;

import com.edgechain.lib.endpoint.impl.llm.OpenAiChatEndpoint;
import com.edgechain.lib.endpoint.impl.wiki.WikiEndpoint;
import com.edgechain.lib.jsonnet.JsonnetLoader;
import com.edgechain.lib.jsonnet.impl.FileJsonnetLoader;
import com.edgechain.lib.request.ArkRequest;
import com.edgechain.lib.rxjava.retry.impl.ExponentialDelay;
import com.edgechain.lib.rxjava.transformer.observable.EdgeChain;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import static com.edgechain.lib.constants.EndpointConstants.OPENAI_CHAT_COMPLETION_API;

@SpringBootApplication
public class WikiExample {

  private static final String OPENAI_AUTH_KEY = ""; // YOUR OPENAI AUTH KEY
  private static final String OPENAI_ORG_ID = ""; // YOUR OPENAI ORG ID

  /* Create OpenAiEndpoints to communicate with OpenAI services */
  private static OpenAiChatEndpoint gpt3Endpoint;

  private static OpenAiChatEndpoint gpt3StreamEndpoint;

  private static WikiEndpoint wikiEndpoint;

  // There is a 70% chance that wiki1.jsonnet is executed; a 30% chance that wiki2.jsonnet is executed
  private final JsonnetLoader loader =
      new FileJsonnetLoader(70, "./wiki1.jsonnet", "./wiki2.jsonnet");

  public static void main(String[] args) {
    System.setProperty("server.port", "8080");

    Properties properties = new Properties();

    // Adding CORS ==> You can configure multiple origins w.r.t. your URLs
    properties.setProperty("cors.origins", "http://localhost:4200");

    // Optional, for logging SQL queries (shouldn't be used in prod)
    properties.setProperty("spring.jpa.show-sql", "true");
    properties.setProperty("spring.jpa.properties.hibernate.format_sql", "true");

    properties.setProperty("postgres.db.host", "");
    properties.setProperty("postgres.db.username", "");
    properties.setProperty("postgres.db.password", "");

    new SpringApplicationBuilder(WikiExample.class).properties(properties).run(args);

    wikiEndpoint = new WikiEndpoint();

    gpt3Endpoint =
        new OpenAiChatEndpoint(
            OPENAI_CHAT_COMPLETION_API,
            OPENAI_AUTH_KEY,
            OPENAI_ORG_ID,
            "gpt-3.5-turbo",
            "user",
            0.7,
            new ExponentialDelay(3, 5, 2, TimeUnit.SECONDS));

    gpt3StreamEndpoint =
        new OpenAiChatEndpoint(
            OPENAI_CHAT_COMPLETION_API,
            OPENAI_AUTH_KEY,
            OPENAI_ORG_ID,
            "gpt-3.5-turbo",
            "user",
            0.7,
            true, // stream responses
            new ExponentialDelay(3, 5, 2, TimeUnit.SECONDS));
  }

  @RestController
  public class ArticleController {

    @GetMapping("/article")
    public String generateArticle(ArkRequest arkRequest) throws Exception {
      // Prepare the prompt from the "title" query parameter
      String prompt = "Write an article about " + arkRequest.getQueryParam("title") + ".";

      // Send the prompt to the OpenAI chat-completion endpoint and extract
      // the generated article from the first choice of the response
      String gptre =
          new EdgeChain<>(gpt3Endpoint.chatCompletion(prompt, "React-Chain", arkRequest))
              .get()
              .getChoices()
              .get(0)
              .getMessage()
              .getContent();

      // Return the generated article
      return gptre;
    }
  }
}

JSONnet for the Code

Data is at the heart of nearly every aspect of technology. Whether you're configuring software, managing infrastructure, or exchanging information between systems, having a clean and efficient way to structure and manipulate data is essential. This is where JSONnet steps in as a valuable tool.

JSONnet is a versatile and human-friendly programming language designed for one primary purpose: simplifying the way we work with structured data. At its core, JSONnet takes the familiar concept of JSON (JavaScript Object Notation), a widely used format for data interchange, and elevates it to a whole new level of flexibility and expressiveness. It offers a declarative way of defining and describing prompts and chains. Here is the Jsonnet for the code above:

local keepMaxTokens = payload.keepMaxTokens;
local maxTokens = if keepMaxTokens == "true" then payload.maxTokens else 5120;

local preset = |||
  You are a Summary Generator Bot. For any question other than summarizing the data, you should tell that you cannot answer it.
  You should detect the language and the characters the user is writing in, and reply in the same character set and language.

  You should follow the following template while answering the user:

  ```
  1. <POINT_1> - <DESCRIPTION_1>
  2. <POINT_2> - <DESCRIPTION_2>
  ...
  ```
  Now, given the data, create a 5-bullet point summary of:
|||;

local keepContext = payload.keepContext;
local context = if keepContext == "true" then payload.context else "";
local prompt = std.join("\n", [preset, context]);

{
  "maxTokens": maxTokens,
  "typeOfKeepContext": xtr.type(keepContext),
  "preset": preset,
  "context": context,
  "prompt": if (std.length(prompt) > xtr.parseNum(maxTokens)) then std.substr(prompt, 0, xtr.parseNum(maxTokens)) else prompt
}
  • keepMaxTokens and maxTokens - These determine the maximum number of tokens that will be considered while generating the article.
  • preset - This string contains the instructions. It tells the bot to answer only questions about summarizing the data, to detect the language and character set of the user's input and respond in the same language and character set, and it fixes the structure of the bot's responses.
  • keepContext and context - These determine whether the bot should consider the context from the payload when generating the article.
  • prompt - Here the preset and context are combined to create the final prompt for the bot. If the length of the prompt exceeds the maximum number of tokens, the prompt is truncated to fit within the limit.
  • The final object - This is the output of the script. It bundles maxTokens, the type of the context flag, the preset, the context, and the final (possibly truncated) prompt.
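On the Java side, these payload fields are supplied to the loader before the Jsonnet is evaluated. A hedged sketch, using the JsonnetArgs/DataType style that appears in the Pinecone example later on this page (the specific values here are illustrative assumptions):

// Populate the Jsonnet payload, then evaluate the file
loader
    .put("keepMaxTokens", new JsonnetArgs(DataType.BOOLEAN, "true"))
    .put("maxTokens", new JsonnetArgs(DataType.INTEGER, "5120"))
    .put("keepContext", new JsonnetArgs(DataType.BOOLEAN, "true"))
    .put("context", new JsonnetArgs(DataType.STRING, "<data to summarize>"))
    .loadOrReload();
String prompt = loader.get("prompt"); // truncated to maxTokens if it is too long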

Aditya Pandey · 12 min read

Find the document you need in a large collection of existing documents, in the most effective manner.

Suppose you have a notice about an institute opening a new campus, and you want to find campuses similar to certain campuses in Kolkata. The most effective way to do so is semantic search.

Semantic search is the task of retrieving documents from a collection in response to a query. It surfaces the best matches from your document collection within seconds, and, best of all, it matches on meaning rather than on keywords. In this post, a semantic search model is built in Java.
EdgeChains is a streamlined solution for developing GenAI applications, offering simplicity through a single script file and jsonnet file setup. It emphasizes versioning for prompts, automatic parallelism across various processors, fault tolerance, and scalability, making it a robust choice for chain-of-thought applications with extensive API integration and data sets. While LangChain primarily focuses on a specific set of principles, EdgeChains takes a unique stance, emphasizing declarative prompt and chain orchestration as pivotal components of its architecture. To delve deeper into EdgeChains and explore its capabilities, you can refer to our GitHub repository.

Prerequisites

  1. Create accounts with OpenAI, Pinecone, and Postgres so you can retrieve the auth key, organization ID, and the other credentials the code needs.
  2. Download the EdgeChains JAR file from https://github.com/arakoodev/EdgeChains/releases.
  3. Download the .java and .jsonnet files and put them in the same folder.
  4. Adjust the file paths in the code to match your folder structure.

Configuration of the Database

  1. Go to the Supabase website (https://supabase.io) and sign up for an account.
  2. Create a new Project by clicking the “New Project” button.
  3. Configure your project settings including the project name, region, and the plan.
  4. Once your project is created, you’ll be directed to the project dashboard.
  5. Click the “Create Database” button to create a new PostgreSQL database.
  6. After the database is created, you can access its credentials, including the database URL, API URL and service role key.

Explanation of the Code

  1. Load the edgechains package.

  2. Import the OPENAI_CHAT_COMPLETION_API constant. Here we have to import the static constants from other classes; these classes relate to Pinecone, OpenAI, and PDF reading.

  3. Import the Spring Framework related classes and annotations.

  4. Fill in the details of the auth key, org_id and others of OpenAI and Pinecone.

  5. Create the variables used to interact with the OpenAI services; they store authentication keys and API endpoints.

  6. Load the jsonnet file into the variable, and then load the data of that file into the variable.

  7. In the main method, set the server port to the desired port (8080 by default). Properties properties = new Properties(); creates the object that Java uses to manage configuration key-value pairs.

  8. Then configure Hibernate to format the SQL queries for better readability. The lines below enable SQL query logging and formatting in Spring JPA (Java Persistence API); they are used when accessing the database.

  9. Set the CORS (Cross-Origin Resource Sharing) by specifying the allowed origins.

  10. Then you have to initialize several endpoints and related configurations.

  • ada002Embedding: This variable is an instance of the OpenAiEndpoint class used to interact with OpenAI for text embeddings. It is a configuration object that lets your Java code call OpenAI's text embedding service using the "Ada 002" model, handling authentication and retry logic. It is how OpenAI's NLP capabilities are integrated into your application.
  • gpt3Endpoint: This variable is another instance of the OpenAiEndpoint class, configured for GPT-3.5 Turbo, a language model for chat completions. It takes similar configuration parameters to ada002Embedding, with additional settings for chat completions.
  • upsertPineconeEndpoint: This variable is an instance of the PineconeEndpoint class, used to interact with Pinecone for upserting vectors, i.e., adding or updating vectors in a vector index. It is what enables similarity-based search and retrieval in your application.
    These variables are essential for communicating with external services such as OpenAI and Pinecone.
  11. After this, define a Spring controller class that is responsible for handling the HTTP requests; it contains various methods for interacting with Pinecone and OpenAI services.
  12. Inject a PdfReader instance into the controller so that it can be used to read PDF files in the code.

Upsert Controller

Create a method upsertPinecone that handles HTTP POST requests to the /pinecone/upsert endpoint. It takes an ArkRequest object as a parameter, which contains the data required for the operation. Here pdfReader reads the uploaded PDF's input stream in chunks of 512 and stores the result in a string array. A PineconeRetrieval instance is created from that string array, the Pinecone endpoint (configured with the ada002 embedding endpoint), the namespace, and the arkRequest. The upsert method of the PineconeRetrieval instance is then called to upsert the data into Pinecone: the PDF you uploaded is divided into chunks, and each chunk is sent to the embedding endpoint so that similarity search can later be performed against your queries. NOTE: Upserting a document is done only once; you can upload as many PDF files as you like, one at a time, and after upserting, the bulk of your work is querying.

Query Controller

Define a method query to handle HTTP POST requests to the /pinecone/query endpoint. It extracts parameters such as the namespace and the query from the arkRequest object. Here EdgeChain performs a Pinecone query using the Pinecone endpoint and retrieves the matching word embeddings, then transforms the results into a list of chat-completion responses. It returns the response containing the result of the query. topK controls how many of the most similar chunks are retrieved.
Overall, this model manages the interaction with Pinecone and OpenAI services, including upserting data and querying, based on the received HTTP requests.

Postman Testing

After all this, we will use Postman to test the endpoints by sending requests in the following manner:

  1. Upsert

An important point to consider while uploading files: only PDF format is allowed (JSON is not), and you can upload only one PDF file at a time.
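A sample upsert request (assuming the server runs locally on port 8080, as configured in the code):

    • Method: POST
    • URL: http://localhost:8080/pinecone/upsert?namespace=machine-learning
    • Body: form-data with a key named file whose value is the PDF to upload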

  2. Query

    • Description: Perform a query to retrieve results from OpenAI in the Pinecone namespace.
    • Method: POST
    • URL: http://localhost:8080/pinecone/query?topK=6&namespace=machine-learning
    • Headers: Content-Type: application/json
    • Body:
      • Mode: raw
      • Data: {"query": "When is the Kolkata campus opening?" }


Semantic search can change the way searching, ranking, and retrieval systems work, thanks to its ability to index and store embedding vectors. Here EdgeChains inserts each chunk and computes its embedding vector. It provides methods that let customers write only their business logic; EdgeChains does the rest. Results are ranked by score, where the score reflects how similar each chunk is to the query, and the model returns the answer to the query according to that ranking.

The full working code of the model:

package com.edgechain;

import static com.edgechain.lib.constants.EndpointConstants.OPENAI_CHAT_COMPLETION_API;
import static com.edgechain.lib.constants.EndpointConstants.OPENAI_EMBEDDINGS_API;

import com.edgechain.lib.chains.PineconeRetrieval;
import com.edgechain.lib.context.domain.HistoryContext;
import com.edgechain.lib.embeddings.WordEmbeddings;

import com.edgechain.lib.endpoint.impl.context.RedisHistoryContextEndpoint;
import com.edgechain.lib.endpoint.impl.embeddings.OpenAiEmbeddingEndpoint;
import com.edgechain.lib.endpoint.impl.index.PineconeEndpoint;
import com.edgechain.lib.endpoint.impl.llm.OpenAiChatEndpoint;
import com.edgechain.lib.jsonnet.JsonnetArgs;
import com.edgechain.lib.jsonnet.JsonnetLoader;
import com.edgechain.lib.jsonnet.enums.DataType;
import com.edgechain.lib.jsonnet.impl.FileJsonnetLoader;
import com.edgechain.lib.openai.response.ChatCompletionResponse;
import com.edgechain.lib.reader.impl.PdfReader;
import com.edgechain.lib.request.ArkRequest;
import com.edgechain.lib.response.ArkResponse;
import com.edgechain.lib.rxjava.retry.impl.ExponentialDelay;
import com.edgechain.lib.rxjava.transformer.observable.EdgeChain;
import java.io.IOException;
import java.io.InputStream;
import java.util.*;
import java.util.concurrent.TimeUnit;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
public class PineconeExample {

  private static final String OPENAI_AUTH_KEY = ""; // YOUR OPENAI AUTH KEY
  private static final String OPENAI_ORG_ID = ""; // YOUR OPENAI ORG ID

  private static final String PINECONE_AUTH_KEY = "";
  private static final String PINECONE_API = ""; // Only API

  private static OpenAiChatEndpoint gpt3Endpoint;
  private static OpenAiChatEndpoint gpt3StreamEndpoint;

  private static PineconeEndpoint pineconeEndpoint;

  private static RedisHistoryContextEndpoint contextEndpoint;

  // It's recommended to perform localized instantiation for a thread-safe approach.
  private JsonnetLoader queryLoader = new FileJsonnetLoader("./pinecone/pinecone-query.jsonnet");
  private JsonnetLoader chatLoader = new FileJsonnetLoader("./pinecone/pinecone-chat.jsonnet");

  public static void main(String[] args) {
    System.setProperty("server.port", "8080");
    Properties properties = new Properties();

    properties.setProperty("spring.jpa.show-sql", "true");
    properties.setProperty("spring.jpa.properties.hibernate.format_sql", "true");

    // Adding CORS ==> You can configure multiple origins w.r.t. your URLs
    properties.setProperty("cors.origins", "http://localhost:4200");

    // Redis configuration
    properties.setProperty("redis.url", "");
    properties.setProperty("redis.port", "");
    properties.setProperty("redis.username", "default");
    properties.setProperty("redis.password", "");
    properties.setProperty("redis.ttl", "3600");

    // If you want to use PostgreSQL only, just provide dbHost, dbUsername & dbPassword.
    // If you haven't specified PostgreSQL, then logs won't be stored.
    properties.setProperty("postgres.db.host", "");
    properties.setProperty("postgres.db.username", "postgres");
    properties.setProperty("postgres.db.password", "");

    new SpringApplicationBuilder(PineconeExample.class).properties(properties).run(args);

    gpt3Endpoint =
        new OpenAiChatEndpoint(
            OPENAI_CHAT_COMPLETION_API,
            OPENAI_AUTH_KEY,
            OPENAI_ORG_ID,
            "gpt-3.5-turbo",
            "user",
            0.85,
            new ExponentialDelay(3, 5, 2, TimeUnit.SECONDS));

    gpt3StreamEndpoint =
        new OpenAiChatEndpoint(
            OPENAI_CHAT_COMPLETION_API,
            OPENAI_AUTH_KEY,
            OPENAI_ORG_ID,
            "gpt-3.5-turbo",
            "user",
            0.7,
            true, // stream responses
            new ExponentialDelay(3, 5, 2, TimeUnit.SECONDS));

    OpenAiEmbeddingEndpoint ada002 =
        new OpenAiEmbeddingEndpoint(
            OPENAI_EMBEDDINGS_API,
            OPENAI_AUTH_KEY,
            OPENAI_ORG_ID,
            "text-embedding-ada-002",
            new ExponentialDelay(3, 3, 2, TimeUnit.SECONDS));

    pineconeEndpoint =
        new PineconeEndpoint(
            PINECONE_API,
            PINECONE_AUTH_KEY,
            ada002,
            new ExponentialDelay(3, 3, 2, TimeUnit.SECONDS));

    contextEndpoint =
        new RedisHistoryContextEndpoint(new ExponentialDelay(2, 2, 2, TimeUnit.SECONDS));
  }

  /**
   * By default, every API is unauthenticated & exposed without any sort of authentication. To
   * authenticate your custom APIs in the controller, you would need @PreAuthorize(hasAuthority(""));
   * this authenticates via a JWT having two fields: a) email, b) role:"authenticated,user_create".
   * To authenticate the internal APIs related to historyContext & logging (delete Redis/Postgres),
   * we need to create a bean of AuthFilter; you can uncomment the code below. Note, you need to
   * define the "jwt.secret" property as well to decode the accessToken.
   */
  // @Bean
  // @Primary
  // public AuthFilter authFilter() {
  //   AuthFilter filter = new AuthFilter();
  //   // ======== new MethodAuthentication(List.of(APIs), authorities) =============
  //   filter.setRequestPost(
  //       new MethodAuthentication(List.of("/v1/postgresql/historycontext"), "authenticated"));
  //   // define multiple roles by comma
  //   filter.setRequestGet(new MethodAuthentication(List.of(""), ""));
  //   filter.setRequestDelete(new MethodAuthentication(List.of(""), ""));
  //   filter.setRequestPatch(new MethodAuthentication(List.of(""), ""));
  //   filter.setRequestPut(new MethodAuthentication(List.of(""), ""));
  //   return filter;
  // }

  @RestController
  public class PineconeController {

    @Autowired private PdfReader pdfReader;

    /********************** PINECONE WITH OPENAI ****************************/

    /**
     * Namespace: VectorDb allows you to partition the vectors in an index into namespaces. Queries
     * and other operations are then limited to one namespace, so different requests can search
     * different subsets of your index. If the namespace is null or empty, in Pinecone it will be
     * prefixed as "" (empty string) & in Redis it will be prefixed as "knowledge". For example,
     * you might want to define a namespace for indexing books by finance, law, medicine, etc.
     * This can be used in multiple use-cases, such as a user uploading a book, generating a
     * unique namespace & then querying/chatting with it.
     *
     * @param arkRequest
     */
    // Namespace is optional (if not provided, it will be using Empty String "")
    @PostMapping("/pinecone/upsert") // /v1/examples/openai/upsert?namespace=machine-learning
    public void upsertPinecone(ArkRequest arkRequest) throws IOException {
      String namespace = arkRequest.getQueryParam("namespace");
      InputStream file = arkRequest.getMultiPart("file").getInputStream();

      // Split the PDF into chunks of 512 and upsert them into Pinecone
      String[] arr = pdfReader.readByChunkSize(file, 512);
      PineconeRetrieval retrieval =
          new PineconeRetrieval(arr, pineconeEndpoint, namespace, arkRequest);

      retrieval.upsert();
    }

    @PostMapping(value = "/pinecone/query")
    public ArkResponse query(ArkRequest arkRequest) {

      String query = arkRequest.getBody().getString("query");
      int topK = arkRequest.getIntQueryParam("topK");
      String namespace = arkRequest.getQueryParam("namespace");

      // Chain 1 ==> Query embeddings from Pinecone
      EdgeChain<List<WordEmbeddings>> queryChain =
          new EdgeChain<>(pineconeEndpoint.query(query, namespace, topK, arkRequest));

      // Chain 2 ==> queryFn takes the retrieved embeddings and passes each one, together with a
      // base prompt, to the chat model (helper not shown in this excerpt; see the sketch below)
      EdgeChain<List<ChatCompletionResponse>> gpt3Chain =
          queryChain.transform(wordEmbeddings -> queryFn(wordEmbeddings, arkRequest));

      return gpt3Chain.getArkResponse();
    }
  }
}
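The listing above references a queryFn helper that is not included in the excerpt. Below is a hedged sketch of what it might look like inside PineconeController, inferred from the JsonnetArgs/DataType imports, the chat-completion pattern used in the article-writer example, and the Jsonnet parameters shown below; the loader methods and the "Pinecone-Query" chain name are assumptions, so consult the EdgeChains repository for the exact helper.

public List<ChatCompletionResponse> queryFn(
    List<WordEmbeddings> wordEmbeddings, ArkRequest arkRequest) {
  List<ChatCompletionResponse> responses = new ArrayList<>();

  for (WordEmbeddings embedding : wordEmbeddings) {
    // Feed the retrieved chunk into the query Jsonnet as "context";
    // the Jsonnet assembles the final prompt (preset + context)
    queryLoader
        .put("keepMaxTokens", new JsonnetArgs(DataType.BOOLEAN, "true"))
        .put("maxTokens", new JsonnetArgs(DataType.INTEGER, "4096"))
        .put("keepContext", new JsonnetArgs(DataType.BOOLEAN, "true"))
        .put("context", new JsonnetArgs(DataType.STRING, embedding.getId()))
        .loadOrReload();

    String prompt = queryLoader.get("prompt");

    // Same chat-completion call pattern as the article-writer example above
    responses.add(
        new EdgeChain<>(gpt3Endpoint.chatCompletion(prompt, "Pinecone-Query", arkRequest)).get());
  }

  return responses;
}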

JSONnet for the Code

Data is at the heart of nearly every aspect of technology. Whether you're configuring software, managing infrastructure, or exchanging information between systems, having a clean and efficient way to structure and manipulate data is essential. This is where JSONnet steps in as a valuable tool.

JSONnet is a versatile and human-friendly programming language designed for one primary purpose: simplifying the way we work with structured data. At its core, JSONnet takes the familiar concept of JSON (JavaScript Object Notation), a widely-used format for data interchange, and elevates it to a whole new level of flexibility and expressiveness. It has a declarative way of defining and describing the prompts and chains.

The JSONnet for the query:

local maxTokens = if (payload.keepMaxTokens == "true") then payload.maxTokens else 10000;

local preset = |||
  Use the following pieces of context to answer the question at the end. If
  you don't know the answer, just say that you don't know, don't try to make up an answer.
|||;

local context = if (payload.keepContext == "true") then payload.context else "";
local prompt = std.join("\n", [preset, context]);

{
  "maxTokens": maxTokens,
  "preset": preset,
  "context": context,
  "prompt": if (std.length(prompt) > xtr.parseNum(maxTokens)) then std.substr(prompt, 0, xtr.parseNum(maxTokens)) else prompt
}
  1. maxTokens: This line of code is used to determine the maximum number of tokens that the bot should consider when generating a response. If keepMaxTokens in the payload is set to "true", then the maxTokens value from the payload is used. Otherwise, it defaults to 10000.

  2. preset: This is a string that contains the instructions for the bot. It tells the bot to use the provided context to answer the question at the end. If the bot doesn't know the answer, it should admit that it doesn't know instead of making up an answer.

  3. context: This line of code is used to determine whether the bot should consider the context from the payload when generating a response. If keepContext in the payload is set to "true", then the context value from the payload is used. Otherwise, it defaults to an empty string.

  4. prompt: This is where the preset and context are combined to create the final prompt for the bot. The std.join function is used to join the preset and context with a newline character in between.

  5. The final object: This is the output of the script. It bundles maxTokens, the preset, the context, and the final (possibly truncated) prompt.

Arakoo · 3 min read

Introduction

Supabase is a powerful, open-source platform that simplifies the creation of secure and high-performance Postgres backends, offering functionalities similar to Firebase, such as authentication and real-time database. When used with EdgeChains, Supabase enhances the backend capabilities of applications built with the framework, enabling developers to create advanced and interactive applications powered by large language models.

Supabase Integration with EdgeChains

In the EdgeChains configuration, the following parameters need to be configured for Supabase integration:

  • SupabaseURL: The URL of the Supabase backend, which allows EdgeChains to communicate with the Supabase service.

  • Supabase AnonKey: The anonymous key used for authentication when interacting with the Supabase backend.

  • Supabase JWTSecret: The JSON Web Token (JWT) secret for secure communication and user authentication.

  • Supabase DBhost: The JDBC URL for connecting to the PostgreSQL database in Supabase, i.e., jdbc:postgresql://${SUPABASE_DB_URL}/postgres. This URL provides the information needed to establish a connection to the database.

  • DbUsername: The username for the PostgreSQL database in Supabase. In this example, it is set to postgres.

  • DbPassword: The password for the PostgreSQL database in Supabase, which is required for authentication and access to the database.

By providing the appropriate values for these configuration parameters, EdgeChains can seamlessly integrate with Supabase, enabling developers to leverage the features and functionalities of Supabase as the backend for their language model-powered applications.
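For illustration, here is a minimal sketch of how these values are wired into an EdgeChains application's properties, following the Postgres property keys used in the code examples above (the jwt.secret key comes from the AuthFilter notes in the Pinecone example; the placeholder values are assumptions):

Properties properties = new Properties();

// Supabase PostgreSQL connection (DBhost, DbUsername, DbPassword)
properties.setProperty("postgres.db.host", "jdbc:postgresql://<SUPABASE_DB_HOST>/postgres");
properties.setProperty("postgres.db.username", "postgres");
properties.setProperty("postgres.db.password", "<YOUR_DB_PASSWORD>");

// JWT secret used to decode access tokens (Supabase JWTSecret)
properties.setProperty("jwt.secret", "<SUPABASE_JWT_SECRET>");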


How to Get Configuration Parameters for Supabase Integration

To integrate Supabase with EdgeChains and obtain the necessary configuration parameters, follow these step-by-step instructions:

Step 1: Visit the Supabase website

  • If you already have an account, click on the Dashboard button in the top right corner and log in using your credentials.
  • If you don't have an account, click on the Sign in button to create one.


  • You can sign up using your email address or opt for a seamless registration process using your GitHub or SSO credentials.


Step 2: Create a New Project

  • After logging in or signing up, you will be directed to the dashboard. Click on the New project button to initiate the project creation process.


  • Enter the necessary details for the database, including the Name, and set up a strong password for added security.
  • Select your preferred region. For the free pricing plan, choose the default option.


  • Finally, click on the Create new project button to have Supabase set up your new project.

Step 3: Access Project Settings

  • After your project is set up, go to the databases section and then proceed to the settings section of the database.


  • In this section, you will find the Host, User, and Password, which you need to take note of for using with EdgeChains. These parameters will facilitate the integration of Supabase with EdgeChains and enable seamless communication between the two platforms.

Step 4: Obtain API Credentials

  • In the Supabase dashboard, navigate to the API section, where you can access the required URL and anonymous key.


  • Continue scrolling down to find the JWT settings, where you can obtain the JWT secret as well.


By following these steps and obtaining the necessary configuration parameters, you will successfully integrate Supabase with EdgeChains. These parameters will enable you to leverage Supabase's powerful features as the backend for your language model-powered applications, creating secure and high-performance Postgres backends with ease.


Arakoo · 5 min read

Introduction

Pinecone is a powerful vector database designed for efficient storage and querying of high-dimensional vectors. It provides a scalable and fast solution for applications involving similarity search, recommendation systems, natural language processing, and machine learning. With its simple API, advanced indexing algorithms, and real-time capabilities, Pinecone empowers developers to build high-performance applications that rely on vector-based data, delivering near-instantaneous search results and enabling personalized user experiences.

Pinecone seamlessly integrates with EdgeChains to enhance the performance of your language models. To get started, you'll need to obtain the URL from Pinecone and configure it in your EdgeChains application. Follow the steps below to achieve this.

Key Features

Pinecone offers a range of key features and benefits that make it a powerful tool for working with high-dimensional vectors and enabling efficient similarity search. Some of its key features include:

Scalability: Pinecone is designed to scale effortlessly, allowing you to handle massive amounts of data and millions to billions of vectors efficiently. It can handle high read and write throughput, making it suitable for demanding real-time applications.

High-Performance Search: Pinecone leverages advanced indexing algorithms to provide fast and accurate similarity search. It enables efficient nearest neighbor search, allowing you to find the most similar vectors to a given query vector with sublinear time complexity.

Real-Time Updates: With Pinecone, you can easily update your vector database in real-time, enabling you to handle dynamic data that changes frequently. This makes it ideal for applications that require continuous updates and real-time recommendations.

Flexible Vector Storage: Pinecone provides flexible options for storing and representing vectors, supporting various data types and formats. It allows you to work with diverse types of data, including numerical embeddings, textual data, images, and more.

API and Query Language: Pinecone offers a simple and intuitive API for managing and querying vector data. It provides a powerful query language that allows you to express complex similarity queries and filter results based on specific criteria.

Obtaining the Pinecone URL

To get started with Pinecone and obtain your Pinecone URL, follow these steps:

Step 1: Visit the Pinecone website.

If you already have an account, click on the Login button in the top right corner and enter your credentials. Otherwise, click on the Sign Up Free button to create a new account.

Pinecone landing page

You can sign up using your email address alone, or choose to sign up with your Gmail, GitHub, or Microsoft account for a seamless registration process.

Pinecone sign up page

Step 2: Once you have logged in or signed up, you will be redirected to the dashboard where it may take a few moments to load your indexes. Please be patient during this process.

Step 3: After the indexes have finished loading, you can create a new index by clicking on the Create Index button.

Create a new index

  • Provide a suitable name for your index. The name should only contain lowercase letters, numbers, and hyphens. It cannot be more than 45 characters long.
  • Specify the dimension and metric for your index. The dimension refers to the length of the vectors you will be working with, and the metric determines the similarity measurement used for search operations. Choose the appropriate values for your use case.
  • Once you have entered the necessary information, click on the Create Index button to create your index.

Enter details

Step 4: After the index is successfully created, you will be provided with your Pinecone URL. This URL represents the endpoint for accessing and interacting with your newly created index.

Pinecone vector DB URL

With your Pinecone URL in hand, you can now integrate Pinecone into your applications and leverage its powerful vector similarity search capabilities.

Integration with EdgeChains

To seamlessly integrate Pinecone with EdgeChains, you can leverage the Pinecone URL obtained in the previous step and configure it in your EdgeChains application. This configuration enables your language models in EdgeChains to harness the powerful vector similarity search capabilities provided by Pinecone.

To achieve this integration, you will need to provide the following data in the Starter class of your EdgeChainApplication.java file:

  • Query Endpoint: This endpoint allows you to send queries to Pinecone for similarity search. The URL format is ${PINECONE_URL}/query.

  • Upsert Endpoint: Use this endpoint to insert or update vectors in your Pinecone index. The URL format is ${PINECONE_URL}/vectors/upsert.

  • Delete Endpoint: This endpoint enables you to remove vectors from your Pinecone index. The URL format is ${PINECONE_URL}/vectors/delete.

For example, if your Pinecone URL is https://pinecone-sample-ccc3dd8.svc.asia-southeast-gcp-free.pinecone.io, you would configure the following variables in the Starter class of your EdgeChainApplication.java file:

// YOUR PINECONE API KEY
private final String PINECONE_AUTH_KEY = "";
// YOUR PINECONE QUERY API
private final String PINECONE_QUERY_API =
    "https://pinecone-sample-ccc3dd8.svc.asia-southeast-gcp-free.pinecone.io/query";
// YOUR PINECONE UPSERT API
private final String PINECONE_UPSERT_API =
    "https://pinecone-sample-ccc3dd8.svc.asia-southeast-gcp-free.pinecone.io/vectors/upsert";
// YOUR PINECONE DELETE API
private final String PINECONE_DELETE =
    "https://pinecone-sample-ccc3dd8.svc.asia-southeast-gcp-free.pinecone.io/vectors/delete";

Additional Resources

For more information on Pinecone, tutorials, and community resources, refer to the official Pinecone documentation.

Arakoo · 3 min read

I. Introduction

Redis is an open-source, in-memory data structure store that can be used as a database, cache, or message broker. It is known for its fast performance and versatility, making it a popular choice for various applications, including EdgeChains.

One of the key benefits of Redis is its ability to store and retrieve data in memory, which allows for extremely fast read and write operations. This makes Redis well-suited for use cases that require low-latency data access, such as real-time applications and caching. Redis supports a wide range of data structures, including strings, lists, sets, hashes, and sorted sets. This flexibility enables developers to model complex data scenarios and perform advanced operations on the data stored in Redis.

II. Creating a free Redis Instance

To integrate Redis into your EdgeChains application, you'll first need to create a Redis instance. Follow these step-by-step instructions to create a Redis instance:

1. Sign up for Redis: Visit the Redis website and click on the Try Free button in the top right corner. You can sign up using your preferred email address and password, or utilize your existing Google or GitHub account for a seamless registration process.

Redis landing page

During the signup process, you can also select your desired cloud vendor and region and click on Let's start free to continue. Once registered, you will be directed to the Redis dashboard.

Redis signup page

2. Access the Subscriptions Section: In the Redis dashboard, navigate to the Subscriptions section. Here, you will find information about your free subscription, which includes a storage limit of 30 MB.

Redis Subscription section

3. Configure Your Free Subscription: Within the Subscriptions section, open the configuration settings for your free subscription. This will provide you with the public endpoint of your Redis instance, which is crucial for establishing a connection.

Redis Configuration section

Copy the public endpoint provided in the configuration settings. The endpoint typically looks like this:

redis-19222.c1.us-east-1-3.ec2.cloud.redislabs.com:19222

In this example, redis-19222 represents the hostname of the Redis server, c1 is an identifier for a specific Redis instance or deployment, us-east-1-3.ec2.cloud represents the region or data center, and 19222 is the port number.

To establish a connection to your Redis instance, you need to extract the URL and port information from the endpoint. A Redis URL follows the format:

redis://<host>:<port>

where <host> represents the hostname or IP address of the Redis server, and <port> represents the port number on which Redis is running. In the given example, redis-19222.c1.us-east-1-3.ec2.cloud.redislabs.com is the host and 19222 is the port.

You will need to take note of the Redis URL and port to use them in your EdgeChains application.

By obtaining the Redis endpoint, you will have the necessary information to establish a connection to your Redis instance and start utilizing its features.
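In an EdgeChains application, these values plug into the Redis properties used by the Pinecone example earlier on this page (a minimal sketch; the password and TTL values are placeholders):

Properties properties = new Properties();
properties.setProperty("redis.url", "redis-19222.c1.us-east-1-3.ec2.cloud.redislabs.com");
properties.setProperty("redis.port", "19222");
properties.setProperty("redis.username", "default");
properties.setProperty("redis.password", "<YOUR_REDIS_PASSWORD>");
properties.setProperty("redis.ttl", "3600"); // time-to-live, in seconds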

Arakoo · 4 min read

Introduction

In today's fast-paced software development world, efficient support and issue resolution is paramount to a project's success. Building a powerful GitHub support bot with GPT-3 and chain-of-thought techniques can help streamline the process and enhance user experience. This comprehensive guide will delve into the intricacies of creating such a bot, discussing the benefits, implementation, and performance optimization.

Benefits of a GitHub Support Bot

  1. Faster issue resolution: A well-designed support bot can quickly and accurately answer user queries or suggest appropriate steps to resolve issues, reducing the burden on human developers.
  2. Improved user experience: A support bot can provide real-time assistance to users, ensuring a seamless and positive interaction with your project.
  3. Reduced workload for maintainers: By handling repetitive and straightforward questions, the bot frees up maintainers to focus on more complex tasks and development work.
  4. Enhanced project reputation: A responsive and knowledgeable support bot can boost your project's credibility and attract more contributors.

GPT-3: An Overview

OpenAI's GPT-3 (Generative Pre-trained Transformer 3) is a state-of-the-art language model that can generate human-like text based on a given prompt. GPT-3 can be used for various tasks, such as question-answering, translation, summarization, and more. Its massive size (175 billion parameters) and pre-trained nature make it an ideal tool for crafting intelligent support bots.

Implementing a GitHub Support Bot with GPT-3

To build a GitHub support bot using GPT-3, follow these steps:

Step 1: Acquire API Access

Obtain access to the OpenAI API for GPT-3. Once you have API access, you can integrate it into your bot's backend.

Step 2: Set Up a GitHub Webhook

Create a GitHub webhook to trigger your bot whenever an issue or comment is created. The webhook should be configured to send a POST request to your bot's backend with relevant data.

Step 3: Process Incoming Data

In your bot's backend, parse the incoming data from the webhook and extract the necessary information, such as issue title, description, and user comments.

Step 4: Generate Responses with GPT-3

Using the extracted information, construct a suitable prompt for GPT-3. Query the OpenAI API with this prompt to generate a response. Tools like Arakoo EdgeChains help developers deal with the complexity of LLM & chain of thought.
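As a sketch of this step in Java, reusing the EdgeChains chat-completion pattern from the article-writer example earlier on this page (the method name, the fields parsed from the webhook, and the "GitHub-Support" chain label are illustrative assumptions):

// gpt3Endpoint is an OpenAiChatEndpoint configured as in the earlier examples
public String answerIssue(String issueTitle, String issueBody, ArkRequest arkRequest) {
  String prompt =
      "You are a helpful GitHub support bot.\n"
          + "Issue title: " + issueTitle + "\n"
          + "Issue description: " + issueBody + "\n"
          + "Suggest concrete steps to resolve this issue.";

  // Query the model and extract the generated reply from the first choice
  return new EdgeChain<>(gpt3Endpoint.chatCompletion(prompt, "GitHub-Support", arkRequest))
      .get()
      .getChoices()
      .get(0)
      .getMessage()
      .getContent();
}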

Step 5: Post the Generated Response

Parse the response from GPT-3 and post it as a comment on the relevant issue using the GitHub API.

Enhancing Support Bot Performance with Chain-of-Thought

Chain-of-thought is a technique that enables AI models to maintain context and coherence across multiple response generations. This section will discuss incorporating chain-of-thought into your GitHub support bot for improved performance.

Retaining Context in Conversations

To preserve context, store previous interactions (such as user comments and bot responses) in your bot's backend. When generating a new response, include the relevant conversation history in the GPT-3 prompt.
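A minimal sketch of folding stored history into the next prompt (the turn format is an assumption; any structure your backend keeps will do):

// Prepend prior turns so the model keeps the conversational context
static String buildPrompt(java.util.List<String> previousTurns, String newComment) {
  String history = String.join("\n", previousTurns); // e.g., "user: ..." / "bot: ..." lines
  return history + "\nuser: " + newComment + "\nbot:";
}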

Implementing Multi-turn Dialogues

For complex issues requiring back-and-forth communication, implement multi-turn dialogues by continuously updating the conversation history and generating appropriate GPT-3 prompts.

Optimizing GPT-3 Parameters

Experiment with GPT-3's API parameters, such as temperature and top_p, to control the randomness and quality of generated responses. Tools like Arakoo EdgeChains help developers deal with the complexity of LLM & chain of thought.

Monitoring and Improving Your Support Bot's Performance

Regularly assess your bot's performance to ensure it meets user expectations and adheres to E-A-T (Expertise, Authoritativeness, Trustworthiness) and YMYL (Your Money or Your Life) guidelines.

Analyzing User Feedback

Monitor user reactions and feedback to identify areas of improvement and optimize your bot's performance.

Refining GPT-3 Prompts

Iteratively improve your GPT-3 prompts based on performance analysis to generate more accurate and helpful responses.

Automating Performance Evaluation

Implement automated performance evaluation metrics, such as response time and issue resolution rate, to gauge your bot's effectiveness.

Conclusion

Building a GitHub support bot with GPT-3 and chain-of-thought techniques can significantly improve user experience and accelerate issue resolution. By following the steps outlined in this guide and continuously monitoring and optimizing performance, you can create a highly effective support bot that adds immense value to your project.

Arakoo · 5 min read

Chain of Thought

Why You Should Be Using Chain-of-Thought Instead of Prompts in ChatGPT

Introduction

Chatbot development has progressed considerably in recent years, with the advent of powerful algorithms like GPT-3. However, there exists a common problem where simple prompts do not suffice in effectively controlling the AI's output. Chain-of-thought, a more complex method for handling AI inputs, offers a better solution to this issue. In this article, we will dive deep into why chain-of-thought should play a significant role in your ChatGPT applications.

Benefits of Chain-of-Thought

While prompts might seem like a more straightforward approach, the advantages of using chain-of-thought in ChatGPT far outweigh their simplicity. By employing chain-of-thought, developers can enjoy various benefits that ultimately lead to improved capabilities in AI interactions.

Improved Controllability

One of the most notable benefits of chain-of-thought is its ability to provide better controllability over AI-generated responses. Traditional prompt-based strategies often result in unexpected outputs that render the final outcomes unfit for their intended purpose. Chain-of-thought empowers developers to generate more precise responses, benefiting users in need of accurate and tailor-made outcomes.

Enhanced Flexibility

Chain-of-thought allows developers to make adjustments and fine-tune their AI-generated responses in a more flexible manner. Unlike the prompt-based approach, which is burdened by its rigidity, chain-of-thought readily accommodates alterations in input parameters or context. This heightened adaptability makes it ideal for applications where the AI has to handle a broad range of evolving scenarios.

Greater Clarity and Context

In many situations, prompts fail to provide sufficient information for generating coherent outputs. Chain-of-thought, on the other hand, emphasizes the importance of context, ensuring the AI fully understands the user's instructions. This results in more accurate and coherent responses, ultimately making communication with the AI more efficient and productive.

Better Conversational Flow

In contrast to prompt-centric approaches, chain-of-thought excels at maintaining natural and engaging conversations. By incorporating an ongoing dialogue within the input, chain-of-thought helps ensure the AI's responses align seamlessly with the conversation's existing context. This promotes uninterrupted and more fluent exchanges between the AI and its users.

A Solution for Complex Applications

For applications that demand a high degree of complexity, chain-of-thought serves as an invaluable tool in the developer's arsenal. Its emphasis on context, adaptability, and precision allows it to tackle demanding tasks that might otherwise prove unsuitable for simpler methods like prompts. Tools like Arakoo EdgeChains help developers deal with the complexity of LLM & chain of thought.

Implementing Chain-of-Thought in Your Applications

To maximize the benefits of chain-of-thought in ChatGPT, it's essential to have a firm grasp of its key components and best practices for integration. By focusing on proper implementation and optimal usage, developers can unlock its full potential.

Methodological Considerations

Chain-of-thought requires developers to shift their perspective from isolated prompts to a continuous stream of linked inputs. This necessitates a new approach to AI input formulation, where developers must construct sets of interconnected queries and statements in sequence, carefully ensuring each response is taken into consideration before constructing further inputs.

Effective Feedback Mechanisms

With chain-of-thought, implementing an effective feedback mechanism is vital to improving the AI's understanding of the given context. Developers should leverage reinforcement learning approaches and constantly update their models with feedback gathered from users, progressively fine-tuning the AI to ensure higher quality outputs over time.

Tools and Technologies

To facilitate chain-of-thought implementation, developers should familiarize themselves with relevant tools and technologies that simplify and streamline the process. Tools like Arakoo EdgeChains help developers deal with the complexity of LLM & chain of thought, while robust APIs and SDKs support the development of coherent input-output sequences for improved AI interactions.

Use Cases for Chain-of-Thought in ChatGPT

The versatility of chain-of-thought has made it an increasingly popular choice for various applications across multiple industries, bolstering its reputation as an essential component of modern AI-powered solutions.

Customer Support

Chain-of-thought can greatly enhance virtual customer support agents by providing them with the necessary context to handle diverse user queries accurately. This results in more personalized support experiences for users and increased efficiency for support teams.

Virtual Assistants

Virtual assistants can benefit from chain-of-thought by maintaining a continuous dialogue with users, making the interactions feel more natural and engaging. This ensures the AI maintains relevancy to the evolving user needs, thereby increasing its overall utility.

Interactive Gaming and Storytelling

The dynamic nature of chain-of-thought makes it well-suited for complex applications in interactive gaming and storytelling. By allowing the virtual characters to respond intelligently based on the player's choices, it can cultivate more immersive and engaging experiences.

Conclusion

In an era where AI applications are growing increasingly sophisticated, relying on traditional prompts is no longer sufficient. Chain-of-thought provides a more advanced and efficient approach to handling AI interactions, which, when implemented correctly, can lead to significant improvements in AI-generated outputs. By leveraging the power of chain-of-thought, developers can create transformative AI applications, ensuring their ChatGPT solutions remain at the cutting edge of innovation.

· 7 min read
Arakoo

OpenAI logo

Introduction

Integrating AI services into your projects has become increasingly important, and obtaining an OpenAI API key is a vital step in this process. By acquiring an API key, you unlock access to OpenAI's robust natural language processing capabilities, empowering you to optimize the efficiency and precision of your applications. In this comprehensive guide, we will walk you through the step-by-step process of obtaining an OpenAI API key. Tools like Arakoo EdgeChains can greatly assist you in utilizing the OpenAI API seamlessly.

Is an OpenAI API Key Free?

Free Trial, Credit and Billing Information

You can create an OpenAI API key free of charge. As a new user, you will receive $5 (USD) worth of credit as part of the free trial. However, please note that this credit expires after three months. Once your credit has been utilized or expired, you have the option to enter your billing information to continue using the API according to your requirements. It's important to remember that if you do not provide billing information, you will still have login access but won't be able to make additional API requests.

Rate Limits

OpenAI implements rate limits at the organizational level, and if you are using their services for business purposes, payment may be required based on certain factors. Rate limits are measured in two ways: RPM (requests per minute) and TPM (tokens per minute).
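When a request exceeds those limits, the API returns HTTP 429; a common mitigation is to retry with exponential backoff. Below is a minimal sketch in Java, assuming the OPENAI_API_KEY environment variable is set and using the public /v1/models endpoint as a cheap test call.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Minimal sketch of retrying on HTTP 429 with exponential backoff.
// Assumes the OPENAI_API_KEY environment variable holds your key.
public class RateLimitRetry {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/models"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();

        long delayMillis = 1000;                // initial backoff
        for (int attempt = 1; attempt <= 5; attempt++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 429) { // rate limited: wait, then retry
                Thread.sleep(delayMillis);
                delayMillis *= 2;               // double the delay each attempt
                continue;
            }
            System.out.println(response.body());
            return;
        }
        System.err.println("Gave up after repeated rate-limit responses.");
    }
}
```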

Cost and Pricing

If you are interested in specific costs associated with the AI model you intend to use (e.g., GPT-4 or gpt-3.5-turbo, as employed in ChatGPT), you can refer to OpenAI's AI model pricing page. In many cases, utilizing the API could be more cost-effective than a paid ChatGPT Plus subscription, although the actual expenses depend on your usage.

For a comprehensive overview of precise rate limits, examples, and other valuable details, we recommend visiting OpenAI's Rate Limits page.

How do I get an OpenAI API Key?

To begin with, follow the steps below.

1. Create an OpenAI account

To get started, please navigate to the OpenAI platform website and proceed with creating an account by following the provided instructions. You have the option to sign up using your preferred email address and password, or alternatively, you can utilize your existing Google or Microsoft account for a seamless registration process.

OpenAI login page

After completing the registration, OpenAI will send you a confirmation email to verify your account. Please locate the email in your inbox and click on the verification link provided to ensure the utmost security of your account. Once you have verified your account, return to the OpenAI website and click on the "Log In" button.

2. Navigate to the API section

Upon logging in, you will find your name and profile icon in the upper-right corner of the OpenAI platform homepage. Click on your name to open a dropdown menu, then select the View API keys option.

Alternatively, you can navigate to the apps section and click on API.

OpenAI API page

3. Generate a new API key

In the API keys section, click the Create new secret key button to generate a new API key. A dialog box will appear, asking you to provide a descriptive name for your secret key. Choose a name that conveys its purpose clearly, so the key is easy to identify later.

OpenAI API Key page

Ensure that you save the API key promptly, as the window displaying it cannot be reopened once closed.

OpenAI API Key page

4. Set up billing

OpenAI charges for API usage based on your usage volume. Therefore, if you haven't already set up a payment method for billing, it's necessary to do so before your newly created API key can function.

To initiate the billing setup process, navigate to the Billing section located in the left-hand menu, followed by selecting the Payment methods option.

OpenAI Billing page

Within the payment methods interface, you will find an option labeled Set up paid account, which lets you choose between two account types: Individual and Company.

OpenAI Billing page

Clicking either option opens a pop-up window where you can enter your credit card details and billing information. Once you have provided all the required information, click Submit to finalize the process.

OpenAI Billing page

5. Set usage limits

To ensure efficient management of your monthly API expenditure, it is advisable to establish usage limits after setting up the billing process.

To proceed, navigate to the left menu and select the option Usage limits. Here, you can define both hard and soft usage limits based on your specific requirements. Once you have determined the desired limits, simply click on the Save button to save your changes.

By following these steps, you will successfully obtain an OpenAI API key and be ready to harness the power of OpenAI's natural language processing capabilities in your projects.

OpenAI Usage page

Ensure you follow OpenAI's usage guidelines

As a final note, be sure to familiarize yourself with OpenAI's use case policy and terms of use. You can find detailed information regarding these policies on the OpenAI usage policies page.

API Key Best Practices

Key Security

When it comes to securing your OpenAI API key, it is crucial to follow best practices to protect sensitive information. Here are some key security measures to consider:

  1. Secure Storage: Store your API key in a secure location, such as a password-protected and encrypted storage system. Avoid storing it in plain text or easily accessible locations, such as code repositories; a minimal example of loading the key from the environment follows this list.
  2. Restricted Access: Limit access to your API key to authorized individuals only. Implement robust access controls and authentication mechanisms to ensure that only trusted parties can retrieve and use the key.
  3. Device Limitations: Minimize the number of devices that store your API key. By reducing the number of endpoints where the key is stored, you can reduce the potential attack surface and enhance overall security.
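As one concrete way to follow these practices, the sketch below reads the key from an environment variable at startup instead of embedding it in source code; the OPENAI_API_KEY variable name is a common convention, not a requirement.

```java
// Minimal sketch: load the key from the environment instead of hard-coding it.
public class ApiKeyLoader {
    public static void main(String[] args) {
        // Set OPENAI_API_KEY in your shell or deployment environment,
        // never in source control.
        String apiKey = System.getenv("OPENAI_API_KEY");
        if (apiKey == null || apiKey.isBlank()) {
            throw new IllegalStateException("OPENAI_API_KEY is not set");
        }
        // Pass apiKey to your HTTP client or SDK from here.
        System.out.println("Key loaded (" + apiKey.length() + " characters).");
    }
}
```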

Key Rotation

Regularly rotating your API key is essential for maintaining its security. By frequently changing your key, you mitigate the risks associated with long-term exposure or compromise. Follow these guidelines for effective key rotation:

  1. Timely Updates: Whenever you change your API key, make sure to promptly update it across all integrations and applications that rely on it. This ensures that any potential vulnerabilities associated with the previous key are eliminated.
  2. Automation Tools: Consider leveraging automation tools specifically designed for managing and rotating multiple API keys. One such tool is Arakoo EdgeChains, which provides seamless key management capabilities to simplify the process.

Integrating the OpenAI API

Selecting the Appropriate API Endpoint

Depending on your use case, you may need to interact with different API endpoints provided by OpenAI, such as completions or translations. Review OpenAI's documentation to understand which endpoint best fits your needs.

API Request and Response Handling

When integrating the OpenAI API, be sure to handle requests and responses properly. Construct the appropriate request headers and payloads based on OpenAI's documentation, and handle potential errors gracefully. Implement appropriate timeouts and error-handling mechanisms to maintain the stability of your application.
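As a minimal sketch of such handling in Java, the snippet below posts a chat completion request with a connect timeout, a request timeout, and basic error branching. It assumes the OPENAI_API_KEY environment variable is set; the payload shape follows OpenAI's chat completions reference, and the model name is one public example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Minimal sketch of a chat completion request with timeouts and
// basic error handling. Assumes OPENAI_API_KEY is set in the environment.
public class ChatCompletionExample {
    public static void main(String[] args) {
        String body = """
                {
                  "model": "gpt-3.5-turbo",
                  "messages": [{"role": "user", "content": "Say hello."}]
                }""";

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .timeout(Duration.ofSeconds(60))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() / 100 != 2) {
                // Non-2xx responses carry a JSON error object worth logging.
                System.err.println("API error " + response.statusCode()
                        + ": " + response.body());
                return;
            }
            System.out.println(response.body());
        } catch (java.io.IOException | InterruptedException e) {
            System.err.println("Request failed: " + e.getMessage());
        }
    }
}
```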

Common Use Cases for the OpenAI API

AI-Generated Content

The Completion endpoint enables the generation of human-like, context-relevant content for a variety of purposes, such as article drafting, email composition, and social media posting.

Natural Language Translation

OpenAI does not expose a dedicated text-translation endpoint, but its completion models translate text between various languages when prompted to do so, assisting with communication in multilingual environments.

Sentiment Analysis

By analyzing the emotion or tone of content, OpenAI can provide valuable insights for customer relationship management or market research.

Text Summarization

The API can help in condensing long documents, articles, or emails into brief, coherent summaries, saving valuable time and improving readability.

Question and Answer Systems

OpenAI's natural language understanding simplifies the creation of intelligent chatbots and automated customer support systems.

Conclusion

Acquiring an OpenAI API key will unlock the potential of OpenAI's powerful language processing capabilities for your projects. Following best practices and carefully integrating the API into your projects will help you make the most of these powerful tools. Remember, tools like Arakoo EdgeChains can assist you in the integration process, enabling seamless use of the OpenAI API.

· 6 min read
Arakoo

![Vector Database](./Screenshot 2023-08-28 at 5.22.13 PM.png)

Vector Database At A Glance

Introduction

A vector database is a type of database that stores data as high-dimensional vectors, which are mathematical representations of features or attributes. Each vector has a certain number of dimensions, which can range from tens to thousands, depending on the complexity and granularity of the data. The vectors are usually generated by applying some kind of transformation or embedding function to the raw data, such as text, images, audio, or video. The embedding function can be based on various methods, such as machine learning models, word embeddings, or feature extraction algorithms. EdgeChains uses a vector database to enhance the backend capabilities of applications built with the framework, enabling developers to create advanced, interactive applications powered by large language models.
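To make the embedding step concrete, here is a minimal sketch that requests a vector from OpenAI's embeddings endpoint. It assumes the OPENAI_API_KEY environment variable is set, and the model name is one public example that may differ in your setup.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: turn raw text into an embedding vector via OpenAI's
// embeddings endpoint. Assumes OPENAI_API_KEY is set in the environment.
public class EmbeddingExample {
    public static void main(String[] args) throws Exception {
        String body = """
                {"model": "text-embedding-ada-002", "input": "Global warming"}""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/embeddings"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response contains a "data[0].embedding" array of floats.
        System.out.println(response.body());
    }
}
```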

What Is a Vector Database?

Information comes in many forms. Some information is unstructured, like text documents, rich media, and audio; some is structured, like application logs, tables, and graphs. Innovations in artificial intelligence and machine learning (AI/ML) have allowed us to create a type of ML model called an embedding model. Embeddings encode all types of data into vectors that capture the meaning and context of an asset. This allows us to find similar assets by searching for neighboring data points. Vector search methods enable unique experiences, like taking a photograph with your smartphone and searching for similar images.

Vector databases provide the ability to store and retrieve vectors as high-dimensional points. They add capabilities for efficient and fast lookup of nearest neighbors in N-dimensional space. They are typically powered by k-nearest neighbor (k-NN) indexes and built with algorithms like Hierarchical Navigable Small World (HNSW) and Inverted File Index (IVF). Vector databases also provide data management, fault tolerance, authentication and access control, and a query engine.

For example, you can use a vector database to:

- find images that are similar to a given image based on their visual content and style
- find documents that are similar to a given document based on their topic and sentiment
- find products that are similar to a given product based on their features and ratings

To perform similarity search and retrieval in a vector database, you need a query vector that represents your desired information or criteria. The query vector can be derived from the same type of data as the stored vectors (e.g., using an image as a query for an image database) or from a different type (e.g., using text as a query for an image database). You also need a similarity measure that calculates how close or distant two vectors are in the vector space. The measure can be based on various metrics, such as cosine similarity, Euclidean distance, Hamming distance, or the Jaccard index. The result of the similarity search is usually a ranked list of vectors with the highest similarity scores to the query vector; you can then access the corresponding raw data associated with each vector from the original source or index.
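As an illustration of one such metric, here is a minimal, self-contained sketch of cosine similarity in Java. The three-dimensional vectors are invented for the example; real embeddings typically have hundreds or thousands of dimensions.

```java
// Minimal sketch of cosine similarity, one of the distance measures
// mentioned above, between two equal-length vectors.
public class CosineSimilarity {
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] query = {0.1, 0.7, 0.2};
        double[] stored = {0.2, 0.6, 0.1};
        // 1.0 means identical direction; values near 0 mean unrelated.
        System.out.println(cosine(query, stored));
    }
}
```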

Importance of Vector Databases

Developers can index vectors generated by embedding models into a vector database, allowing them to find similar assets by querying for neighboring vectors. Vector databases provide a way to operationalize embedding models. Application development is more productive with database capabilities like resource management, security controls, scalability, fault tolerance, and efficient information retrieval through sophisticated query languages.

Vector databases ultimately empower developers to create unique application experiences. For example, your users could snap photographs on their smartphones to search for similar images. Developers can use other types of machine learning models to automate metadata extraction from content like images and scanned documents, index the metadata alongside vectors to enable hybrid search over both keywords and vectors, and fuse semantic understanding into relevancy ranking to improve search results.

Innovations in generative artificial intelligence (AI) have introduced new types of models like ChatGPT that can generate text and manage complex conversations with humans. Some operate on multiple modalities; for instance, some models let users describe a landscape and generate an image that fits the description. Generative models are, however, prone to hallucinations, which could, for instance, cause a chatbot to mislead users. Vector databases can complement generative AI models by providing an external knowledge base for generative AI chatbots and helping ensure they provide trustworthy information.

How are vector databases used?

Vector databases are typically used to power vector search use cases like visual, semantic, and multimodal search. More recently, they are paired with generative AI text models to create intelligent agents that provide conversational search experiences. The development process starts with building an embedding model designed to encode a corpus, such as product images, into vectors. The data import process is also called data hydration. The application developer can then use the database to search for similar products by encoding a product image and using the resulting vector to query for similar images. Under the hood, k-nearest neighbor (k-NN) indexes provide efficient retrieval of vectors and apply a distance function like cosine to rank results by similarity.
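To make the retrieval step concrete, here is a brute-force k-NN sketch in Java that ranks stored vectors by cosine similarity and keeps the top k. The document IDs and vectors are invented for illustration; a real vector database replaces the linear scan with an index such as HNSW or IVF.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of brute-force k-NN retrieval over an in-memory corpus.
public class KnnSearch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        Map<String, double[]> corpus = Map.of(
                "doc-a", new double[]{0.9, 0.1, 0.0},
                "doc-b", new double[]{0.1, 0.8, 0.1},
                "doc-c", new double[]{0.7, 0.2, 0.1});
        double[] query = {0.8, 0.2, 0.0};
        int k = 2;

        // Score every stored vector, sort by descending similarity, keep top k.
        List<String> topK = corpus.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> -cosine(query, e.getValue())))
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        System.out.println(topK); // most similar documents first
    }
}
```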

Use Cases of Vector Databases

Vector databases are for developers who want to create vector-search-powered experiences. An application developer can use open-source models, automated machine learning (ML) tools, and foundation model services to generate embeddings and hydrate a vector database; this requires minimal ML expertise. A team of data scientists and engineers can build expertly tuned embeddings and operationalize them through a vector database, helping them deliver artificial intelligence (AI) solutions faster. Operations teams benefit from managing these solutions as familiar database workloads, using existing tools and playbooks.

What are the benefits of vector databases?

Vector databases allow developers to innovate and create unique experiences powered by vector search. They can accelerate AI application development and simplify the operationalization of AI-powered application workloads. Vector databases provide an alternative to building on top of bare k-nearest neighbor (k-NN) indexes, which require a great deal of additional expertise and engineering to use, tune, and operationalize. A good vector database gives applications a foundation through features like data management, fault tolerance, critical security features, and a query engine. These capabilities let users operationalize their workloads, simplify scaling, and meet security requirements. Capabilities like the query engine and SDKs simplify application development and allow developers to perform more advanced queries (such as searching and filtering) on metadata as part of a k-NN search. Developers also have the option to use hybrid relevancy scoring models that blend traditional term-frequency models like BM25 with vector scores to enhance information retrieval.

EdgeChains and Vector Databases

The following API examples give a brief description of how EdgeChains uses vector databases: