There is a video version of this post on my YouTube channel.

So we have our boilerplate app, and we are ready to start adding the fun stuff. If you do not have that yet, check out part 1 of this series.

We will start by making sure we have Microsoft.SemanticKernel in our usings:

using Microsoft.SemanticKernel;

Now we will configure the “Kernel”, which, as the name suggests, is the heart of SK (Semantic Kernel). We will add this to our services configuration helper method:

.AddSingleton((sp) =>
{
    // Register the Kernel as a singleton; the arguments are the
    // deployment/model name, the endpoint, and the API key
    var builder = Kernel.CreateBuilder();

    builder.AddAzureOpenAIChatCompletion(
        Environment.GetEnvironmentVariable("AZURE_OPENAI_MODEL")!,
        Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

    return builder.Build();
});

As you can see, we will need to set three environment variables to get this working. We could use the configuration extension, documented here, if we prefer a .json file or other options. We can even add these variables as global options in our System.CommandLine setup.
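For reference, here is a hedged sketch of what the .json route might look like, replacing the environment-variable calls inside the same lambda. This assumes the Microsoft.Extensions.Configuration.Json package; the AzureOpenAI section and key names are my own illustration, not a fixed convention:

using Microsoft.Extensions.Configuration;

// Hypothetical alternative: build configuration from an appsettings.json file
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

// Same registration as before, but reading from configuration
builder.AddAzureOpenAIChatCompletion(
    config["AzureOpenAI:Model"]!,
    config["AzureOpenAI:Endpoint"]!,
    config["AzureOpenAI:Key"]!);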

I like environment variables because they are easy to set, work on pretty much any OS (or platform, like GitHub Actions), and I don't need to worry about secrets being pushed to the repo. The con is that you need to set them up each time.

Now we are ready to start running queries against Azure OpenAI.
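If you want a quick sanity check before building anything else, you can invoke a prompt directly on the kernel. A minimal sketch, where serviceProvider stands in for however you build your host's provider and the prompt text is just an example:

using Microsoft.Extensions.DependencyInjection;

// Resolve the Kernel we registered above and run a throwaway prompt
var kernel = serviceProvider.GetRequiredService<Kernel>();
var result = await kernel.InvokePromptAsync("Say hello in exactly five words.");
Console.WriteLine(result);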

Step 2

Now we will create a class that will perform the LLM operation. I like to call it a skill or service, but on other platforms it could be called a plugin, method, chain, etc.

It will look like this; make sure to read the comments:

// PostGeneratorService.cs
using Microsoft.SemanticKernel;

namespace PostGenerator;

public class PostGeneratorService
{
    // Function that represents the prompt template
    private readonly KernelFunction _createPostFunction;

    private readonly Kernel _kernel;

    public PostGeneratorService(Kernel kernel, KernelFunction promptFunction)
    {
        _createPostFunction = promptFunction;
        _kernel = kernel;
    }

    public async Task<string?> CreatePost(string topic, string persona, string style)
    {
        // This prompt will receive 3 arguments to be replaced in the template
        var context = new KernelArguments
        {
            { "Topic", topic },
            { "Persona", persona },
            { "Style", style }
        };

        // This is where we call Azure OpenAI to invoke the LLM
        var result = await _createPostFunction.InvokeAsync(_kernel, context);

        // GetValue<string>() extracts the completion text (may be null)
        return result.GetValue<string>();
    }
}
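One thing this class does not show is where the KernelFunction comes from. Here is a hedged sketch of how the wiring might look, chained onto the same services configuration as the Kernel registration; CreateFunctionFromPrompt is the SK call that turns a template string into a KernelFunction, and promptTemplate is a placeholder for the prompt shown below:

.AddSingleton((sp) =>
{
    var kernel = sp.GetRequiredService<Kernel>();

    // promptTemplate holds the prompt string from part 1 (shown below)
    var promptFunction = kernel.CreateFunctionFromPrompt(promptTemplate);

    return new PostGeneratorService(kernel, promptFunction);
});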

Back in CreatePost, notice that we pass the parameters as something called KernelArguments. These get filled into the prompt template. If you remember from part 1, our prompt looks like this:

You are a {{$Persona}} expert. Generate a short tweet in a