There is a video version of this post on my YouTube channel.

So we have our boilerplate app, and we are ready to start adding the fun stuff. If you do not have it yet, check out part 1 of this series.

We will start by making sure we have Microsoft.SemanticKernel in our usings:

using Microsoft.SemanticKernel;

Now we will configure the “Kernel”, which, as the name suggests, is the heart of SK (Semantic Kernel). We will add this to our services configuration helper method:

.AddSingleton((sp) =>
    {
        var builder = Kernel.CreateBuilder();

        builder.AddAzureOpenAIChatCompletion(
            Environment.GetEnvironmentVariable("AZURE_OPENAI_MODEL")!,
            Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
            Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

        return builder.Build();
    });
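By the way, if you are using the OpenAI service directly instead of Azure OpenAI, the builder has an equivalent connector method. A minimal sketch (the environment variable names here are my own):

// Alternative: the OpenAI connector follows the same pattern.
builder.AddOpenAIChatCompletion(
    Environment.GetEnvironmentVariable("OPENAI_MODEL")!,    // hypothetical variable holding a chat model id
    Environment.GetEnvironmentVariable("OPENAI_API_KEY")!); // hypothetical variable holding the API key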

As you can see, we will need to set three environment variables to get it working. We could use the configuration extension, documented here, if we prefer a .json file or other options. We could even add these values as global options in our System.CommandLine setup.

I like environment variables because they are easy to set, work in pretty much any OS (or platform, like GitHub Actions), and I don’t need to worry about secrets being pushed to the repo. The downside is that you need to set them up each time.
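One small related tip: a missing variable will otherwise surface as a null-reference or argument error inside the connector. A tiny guard can fail fast with a clearer message instead; a minimal sketch (the helper name is my own):

// Hypothetical helper: throw a clear error at startup if a required variable is missing.
static string GetRequiredEnvironmentVariable(string name) =>
    Environment.GetEnvironmentVariable(name)
        ?? throw new InvalidOperationException($"Environment variable '{name}' is not set.");

We could then call it in place of the Environment.GetEnvironmentVariable(...)! lines above.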

Now we are ready to start making queries against OpenAI.

Step 2

Now we will create a class that will perform that LLM operation. I like to call it a skill or service, but on other platforms it could be called a plugin, method, chain, etc.

It will look like this; make sure to read the comments:

// PostGeneratorService.cs
using Microsoft.SemanticKernel;

namespace PostGenerator;

public class PostGeneratorService
{
    // Function that represents the prompt
    private readonly KernelFunction _createPostFunction;

    private readonly Kernel _kernel;

    public PostGeneratorService(Kernel kernel, KernelFunction promptFunction)
    {
        _createPostFunction = promptFunction;
        _kernel = kernel;
    }

    public async Task<string?> CreatePost(string topic, string persona, string style)
    {
        // This prompt will receive 3 arguments to be replaced in the template
        var context = new KernelArguments
        {
            { "Topic", topic },
            { "Persona", persona },
            { "Style", style }
        };

        // This is where we go to Azure OpenAI to invoke the LLM
        var result = await _createPostFunction.InvokeAsync(_kernel, context);

        return result.GetValue<string>();
    }
}

As you can see, we are passing the parameters as something called KernelArguments. These are filled into the prompt template. If you remember from part 1, our prompt should look like this:

You are a [persona] expert. Generate a short tweet in a [style] style about [topic].

Semantic Kernel understands the variables with the following syntax:

You are a {{$Persona}} expert. Generate a short tweet in a {{$Style}} style about {{$Topic}}.
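If you want to try this syntax before wiring up any files, Semantic Kernel can also create a function from an inline prompt string. A minimal sketch (the argument values are just examples, and kernel is the instance we registered earlier):

// Quick inline experiment with the same template syntax.
var inlineFunction = kernel.CreateFunctionFromPrompt(
    "You are a {{$Persona}} expert. Generate a short tweet in a {{$Style}} style about {{$Topic}}.");

var arguments = new KernelArguments
{
    { "Persona", "pirate" },
    { "Style", "dramatic" },
    { "Topic", "tax season" }
};

var result = await inlineFunction.InvokeAsync(kernel, arguments);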

As with any input that comes directly from the user, we need to be careful about injection, in this case prompt injection. For now we will just note it and implement a mitigation later.

Semantic Kernel allows you to define prompts in text files and load them programmatically, so we do not need to modify the code to tweak them. We will create a structure like this:

prompts
|-- generatePost
    |-- skprompt.txt
    |-- config.json

We will put our prompt into skprompt.txt:

Generate a short tweet ready to be published given a writing style, persona, and topic.
You should include hashtags at the end.

Style: {{$Style}}
Persona: {{$Persona}}
Topic: {{$Topic}}

In config.json we will put this:

{
    "schema": 1,
    "description": "Post generation",
    "execution_settings": {
        "default": {
            "max_tokens": 5000
        }
    },
    "input_variables": [
        {
            "name": "Persona",
            "description": "Persona",
            "default": ""
        },
        {
            "name": "Style",
            "description": "Style",
            "default": ""
        },
        {
            "name": "Topic",
            "description": "Topic",
            "default": ""
        }
    ]
}

Feel free to play with the config and the prompt. Here you can find more information about what to modify.
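For example, you could add a sampling temperature next to max_tokens to make the output more (or less) creative; the value below is just an illustration:

"execution_settings": {
    "default": {
        "max_tokens": 5000,
        "temperature": 0.9
    }
}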

Now let’s update our services to register this new class, also as a singleton.

.AddSingleton((sp) =>
    {
        var kernel = sp.GetRequiredService<Kernel>();

        var kernelFunctions = kernel.CreatePluginFromPromptDirectory("prompts");

        return new PostGeneratorService(kernel, kernelFunctions["generatePost"]);
    });

We could make some improvements here, for example getting rid of the magic strings (“generatePost”, “prompts”) and making them configurable (e.g., as environment variables).
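A minimal sketch of that tweak, inside the same registration lambda (the environment variable names are my own):

// Hypothetical environment variables, falling back to the current hard-coded values.
var promptsDirectory = Environment.GetEnvironmentVariable("PROMPTS_DIRECTORY") ?? "prompts";
var functionName = Environment.GetEnvironmentVariable("POST_FUNCTION_NAME") ?? "generatePost";

var kernelFunctions = kernel.CreatePluginFromPromptDirectory(promptsDirectory);
return new PostGeneratorService(kernel, kernelFunctions[functionName]);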

Finally, let’s call our skill from the handler:

createPostCommand.SetHandler(async (persona, topic, style) =>
{
    var logger = loggerFactory.CreateLogger<Program>();

    logger?.LogDebug($"Command requested for {persona} {topic} {style}");

    var postGenerator = serviceProvider.GetRequiredService<PostGeneratorService>();

    logger?.LogDebug("Calling post generator skill");

    var post = await postGenerator.CreatePost(topic, persona, style);

    logger?.LogInformation(post);
}, personaOption, topicOption, styleOption);

You will notice that we also made the lambda async, to support awaiting the call to OpenAI.

We update our usings to include the namespace (using PostGenerator;) and test it:

$  dotnet run create-post --persona "raccoon" --topic "coding in c#" --style "sarcastic"
05:16:06 info: Program[0] "Just spent another night wrestling with C#. Who knew garbage collection could be so confusing, I mean I'm just a raccoon, masters of dealing with trash, right? Guess I'll stick to rummaging through bins. #CodingFun #RaccoonLife #CSharpProblems"

We did it! We have a working LLM toy. In the next part we will see how to prepare our solution for integration with other systems, including evaluating and testing our prompts and model.

You can find the code for this and the previous parts here, and the video version here.

Thanks for following!