Enrich Code Creation with Project Insights for Code Suggestions and Chat Requests

The goal is to generate much better code suggestions, especially in the context of larger codebases and for natural-language tasks expressed through comments or, for example, a chat message. Right now our solution only takes the actual surrounding lines into consideration; the goal should be to add as much extra information to the prompt as possible to improve the output. We can achieve this by building Project Insights (TL;DR: a metabase about the project) and then injecting information about the project into the sent prompt (e.g. which libraries are used, functions that could help, etc.)
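Conceptually, the enrichment step is just assembling extra context lines around the user's code before it is sent to the model. A minimal sketch of that assembly, where the `fileInfo` and `insights` shapes are hypothetical stand-ins for whatever the server-side metabase would return:

```javascript
// Build an enriched prompt from the raw user input plus Project Insights.
// The `fileInfo` and `insights` shapes here are hypothetical — they stand
// in for whatever the insights service would provide for the project.
function buildEnrichedPrompt(userCode, fileInfo, insights) {
  const lines = [
    `In a ${fileInfo.language} file called '${fileInfo.name}':`,
    "",
    userCode,
  ];

  if (insights.libraries?.length) {
    lines.push(
      "",
      `The project has the following ${fileInfo.language} libraries: ${insights.libraries.join(", ")}`
    );
  }
  for (const fn of insights.relevantFunctions ?? []) {
    lines.push("", `A function called ${fn.signature} exists which ${fn.description}`);
  }
  return lines.join("\n");
}
```

With an empty `insights` object this degenerates to the basic language-plus-filename prompt; each evolution step below just fills in more of the `insights` fields.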


Example

Let's assume the following task (these are real prompts and answers generated with code-bison), where the user writes the following in their JavaScript file:

// A function to call the Spotify API to get all my playlists

If we now send this comment plus information on the language and the filename, we get a prompt like this:

In a Javascript file called 'playlistdisplay.js':

// A function to call the Spotify API to get all my playlists

This yields a large variety of results, as there are many ways to call the API (native XMLHttpRequest, fetch, axios, the Spotify SDK, etc.)

Result from Vertex:

function getSpotifyPlaylists() {
  // Get the user's access token.
  var accessToken = localStorage.getItem("spotifyAccessToken");

  // Create the request URL.
  var url = "https://api.spotify.com/v1/me/playlists";

  // Create the request headers.
  var headers = {
    "Authorization": "Bearer " + accessToken
  };

  // Make the request.
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.setRequestHeader("Authorization", "Bearer " + accessToken);
  request.send();

  // Handle the response.
  request.onload = function() {
    // Get the response body.
    var responseBody = request.responseText;

    // Parse the response body into a JSON object.
    var playlists = JSON.parse(responseBody);

    // Display the playlists.
    for (var i = 0; i < playlists.length; i++) {
      var playlist = playlists[i];
      console.log(playlist.name);
    }
  };
}
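Note that the generated snippet above is also subtly broken: the Spotify `/v1/me/playlists` endpoint returns a paging object, so the playlists live under `.items`, not at the top level. For comparison, a browser-side version that handles this correctly could look like the sketch below (error handling kept minimal):

```javascript
// Fetch the current user's playlists from the Spotify Web API.
// The endpoint returns a paging object; the playlists are in `items`.
async function getSpotifyPlaylists(accessToken) {
  const response = await fetch("https://api.spotify.com/v1/me/playlists", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`Spotify API request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.items;
}
```

This is exactly the kind of project-level detail (which HTTP client, which response shape) that a richer prompt can steer the model toward.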

Example Evolution - Add libraries

Since Project Insights gives us a metabase of information about the project on the server, we know, for example, which frontend libraries are used. We might also be able to figure out whether this is a frontend JavaScript file or rather a backend file in a Node context, which would make a huge difference for the generated function.

So let's extend the prompt with that information:

In a Node Javascript file called 'playlistdisplay.js':

// A function to call the Spotify API to get all my playlists

The project has the following Javascript libraries: axios, Vue

This immediately gives us a much better matching result:

import axios from "axios";

export const getSpotifyPlaylists = async () => {
  // Get the access token from the environment variables.
  const accessToken = process.env.SPOTIFY_ACCESS_TOKEN;

  // Create the request headers.
  const headers = {
    Authorization: `Bearer ${accessToken}`,
  };

  // Make the request to the Spotify API.
  const response = await axios.get(
    "https://api.spotify.com/v1/me/playlists",
    headers
  );

  // Return the playlists.
  return response.data.items;
};

Example Evolution - Add possible relevant functions

Since we would index the repository by creating embeddings for both the code and an explanation of every file and function, we can also extend the prompt by hinting at possibly relevant functions, like:

In a Node Javascript file called 'playlistdisplay.js':

// A function to call the Spotify API to get all my playlists

The project has the following Javascript libraries: axios, Vue

A function called getSpotifyToken() exists which gives back the current spotify token

This improves the generated function even more:

import axios from "axios";

export const getSpotifyPlaylists = async () => {
  // Get the current Spotify token.
  const token = await getSpotifyToken();

  // Create the request headers.
  const headers = {
    Authorization: `Bearer ${token}`,
  };

  // Make the request to the Spotify API.
  const response = await axios.get(
    "https://api.spotify.com/v1/me/playlists",
    headers
  );

  // Return the playlists.
  return response.data.items;
};
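The "relevant functions" hint could be selected by ranking the indexed function embeddings against an embedding of the user's comment. A minimal cosine-similarity sketch of that ranking step (the embedding vectors themselves would come from an embedding model; the two-dimensional vectors in the usage example are placeholders):

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k indexed functions most similar to the query embedding.
function topRelevantFunctions(queryEmbedding, indexedFunctions, k = 3) {
  return indexedFunctions
    .map((fn) => ({ ...fn, score: cosineSimilarity(queryEmbedding, fn.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The top-ranked functions (here, `getSpotifyToken()`) would then be rendered into the "A function called … exists which …" prompt lines.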

More evolution

These are just two examples, but we can certainly find many more evolution steps to extend prompts. Taking the actual project/repository into account gives us far more context and information, and more information/instructions generally leads to better results with LLMs.

  • We could even use the linting setup of a project: generate multiple suggestions with some temperature, lint them, and only return the ones that pass
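The lint-filter idea in the bullet above could look roughly like this sketch, where `generateSuggestions` and `lintPasses` are hypothetical stand-ins for the model call (with temperature > 0) and the project's lint setup (e.g. ESLint's Node API):

```javascript
// Generate several candidate suggestions and keep only the ones that
// pass the project's lint setup. `generateSuggestions` and `lintPasses`
// are hypothetical stand-ins for the model call and an ESLint-style check.
async function lintFilteredSuggestions(prompt, generateSuggestions, lintPasses, n = 5) {
  const candidates = await generateSuggestions(prompt, { temperature: 0.7, count: n });
  const passing = [];
  for (const candidate of candidates) {
    if (await lintPasses(candidate)) {
      passing.push(candidate);
    }
  }
  return passing;
}
```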

Advantage

  • These additions can be done both for Code Suggestions and for Chat Requests (Special Tool)
  • This can be done incrementally and with clear prioritisation (focusing on certain languages, etc.)
  • We already have information about projects in the Rails application
Edited by Tim Zallmann