This is a rather long article. We're going to discuss the pros and cons of each platform and cover some Kotlin basics as well. Be warned.

Kotlin is a fantastic language for building servers and back-end APIs for web and mobile apps. Kotlin is fast, it's typed, it's null-safe, it's functional, it supports immutability, there are lots of options to easily interact with external services like databases, it has great tooling, its scope functions are incredibly powerful and convenient, and asynchronous programming is relatively easy using coroutines. Many developers, including this author, were initially exposed to Kotlin when building Android apps, given Google's promotion of Kotlin to a first-class language for that platform. And yet, many choose Node or Python or PHP to build servers and APIs.

One reason is the popularity of the web/API frameworks for those languages. Express, Flask, Rails, Laravel, Django and others all have enormous followings, and along with that, a lot of resources detailing how to deploy these apps on various cloud platforms. Kotlin programs, on the other hand, run on the Java Virtual Machine (JVM), a runtime that cloud providers have been slower to embrace. (Note: Kotlin Multiplatform, Kotlin Native and Kotlin/JS are all in various stages of production-readiness. This article, however, is only about Kotlin/JVM.)

Kotlin actually offers plenty of choices when it comes to API frameworks. We'll use http4k today, because it is easy, powerful, and includes a bunch of integrations with other services (like Lambda). With http4k, we can literally spin up a server in one line of code. However, Ktor, Vert.x, Micronaut, Javalin, and Spring all make it relatively easy to respond to HTTP calls, whether that means serving web content, enabling an API, or triggering internal functions.
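As a taste of how little ceremony http4k needs, here's a minimal sketch (not from the demo app; the route and port are arbitrary):

// a minimal http4k app: one route, served by the built-in SunHttp backend
import org.http4k.core.Method.GET
import org.http4k.core.Response
import org.http4k.core.Status.Companion.OK
import org.http4k.routing.bind
import org.http4k.routing.routes
import org.http4k.server.SunHttp
import org.http4k.server.asServer

fun main() {
    // define a route and start a server on port 9000 -- effectively one expression
    routes("/ping" bind GET to { Response(OK).body("pong") })
        .asServer(SunHttp(9000))
        .start()
}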

Packaging a Kotlin app

What trips up a lot of Kotlin developers is the packaging and execution details. Many of us never had any exposure to classic Java, so terms like Maven, classpath, fat JAR, and -D arguments, all important for making a Java program work, are foreign to us. Ideally, we should be able to simply package our entire application in a single file and run it with a single command. Fortunately, the shadowJar plugin for Gradle (the JVM build/dependency manager) allows for exactly that. This actually makes a Kotlin/JVM application easier to deploy than a Node/Express app, for example, which requires a huge node_modules folder, or a Python/Flask app, which likely requires a venv (virtual environment) and specific dependencies downloaded with pip. As time goes on, the dependencies installed on the server drift further away from the updated dependencies on the dev's machine. A normal Java JAR requires a similar set of external dependencies, but a shadowJar packs everything into a single file, always containing the same versions the dev is using locally.

There are much better tutorials on preparing a shadowJar task, but the basics are:

// NOT a complete example -- just showing the minimum to add to existing build.gradle

// add to existing plugins block
plugins {
  id 'com.github.johnrengelman.shadow' version '7.0.0'
}

// add below tasks block
shadowJar {
  manifest {
    attributes 'Main-Class': 'MainKt' // or name of class with fun main()
  }

  exclude 'config.*' // any secrets or config files

  // versioning in the file name will require updating the launch command every time
  // keeping the JAR file name constant is easier for deployment
  archiveFileName = "MyApplicationName.jar"
}

Gradle offers a Kotlin (rather than Groovy, above) syntax, which may be tempting for Kotlin developers to use. Unfortunately, the low number of Gradle-Kotlin code samples and limited documentation can often lead to a great deal of frustration trying to figure out how to translate a Gradle-Groovy code sample to Gradle-Kotlin.

// again, NOT a complete example -- just what to add to existing build.gradle.kts

import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar

plugins {
    id("com.github.johnrengelman.shadow") version "7.0.0"
}

val shadowJar: ShadowJar by tasks
shadowJar.apply {
    manifest.attributes.apply {
        put("Main-Class", "MainKt") // or name of class with fun main()
    }

    exclude("config.*") // any secrets or config files

    // keeping the JAR file name constant is easier for deployment
    archiveFileName.set("MyApplicationName.jar")
}

Instead of building with the normal ./gradlew build command (on Windows, just gradlew build), we will run ./gradlew shadowJar. If you are using IntelliJ IDEA (and every Kotlin dev should be), there's a Gradle tab on the far right. When opened, it should show a new shadow group of tasks; simply double-click shadowJar to build. When complete, your output will appear in <source code root>/build/libs. Note that shadowJars (and all Java JARs) are just ZIP files, so you can inspect the contents with any ZIP file viewer. This will let you see the differences between a normal build and a shadowJar build (if the massive difference in file size doesn't give you some clues).

The application can be run via: java -jar /path/to/jarfile.jar. (Note that shadowJar packages your application code and all of its dependencies, but it cannot include Java itself. You must ensure your server or computer has either a Java Development Kit (JDK) or Java Runtime (JRE) installed, and that it's a version capable of running your app. Setting Java up is best learned in a separate tutorial.)

Side note: what the hell is gradlew? Why not just gradle? gradlew refers to the Gradle Wrapper. Basically, it's Gradle's way of ensuring proper versioning of itself. The wrapper will download an exact Gradle version, if necessary, before performing a build. Therefore, you can define a specific Gradle version in gradle/wrapper/gradle-wrapper.properties and be confident that wherever the build occurs, it will use the same Gradle version that you're using on your dev machine. For example, we'll soon see that Heroku's servers will build our JAR. In that case, we want Heroku's version of Gradle to be predictable, and to be confident that Gradle is present at all.

Application configuration: separate from source

Both Gradle scripts above carved out an exclusion for configuration files. No matter how little you know about the Java ecosystem, anyone with any development experience should understand that hard-coding configurations, and especially secrets, in your source code is a definite no-no. By secrets, we mean passwords, API tokens, crypto wallets, whatever. Other configuration details, such as a database connection string, open port numbers, or the path to a temp file directory, may not be critical to hide, but are likely to change separately from your codebase, possibly depending on the OS, or dev/test/production, etc., and should also be set outside the code itself.

It is certainly possible to set an Environment Variable to represent every individual parameter, or to pass values into the JVM as system properties by adding -Dvariable=value to the launch command. But the number of variables can add up quickly, making it burdensome to define each one individually, especially when we explore various cloud platforms. A Kotlin library called Hoplite is the best config manager I've found, primarily due to strong typing and the many ways it lets you define your variables. It can read individual environment variables, JSON strings, and JSON, YAML or TOML files.

In the following example, we will define a configuration that includes database connection info, API credentials, a port number to expose and the path to a temp directory. You'll note that everything is strongly typed, every variable is part of a Kotlin data class, and we're able to set defaults, for cases when a variable is optional.

// in Main.kt
data class GeoAPI(val apiKey: String, val host: String = "")
data class DBConfig(val url: String, val username: String, val password: String)
data class Destination(val name: String, val lat: Float, val lon: Float, val timeZone: String)
data class AppConfig(val portNumber: Int = System.getenv("PORT")?.toInt() ?: 0,
                     val platform: String = "dev",
                     val destination: Destination, val db: DBConfig, val geo: GeoAPI)

The loading of the configuration is where Hoplite shines. All three of our deployment targets (plus our local dev machine) will have unique configurations, but also unique ways of passing the config variables. Hoplite's Builder makes it easy to step through multiple options for receiving the config. In the code block below, Hoplite will first look for an environment variable named CONFIG_JSON, which itself should contain a JSON string of all our config variables (or it loads an empty JSON object otherwise). Next, if there's an environment variable naming a config file, Hoplite will attempt to load that file; if not, it will attempt to load config_dev.yaml. And since we mark these file sources as optional, if neither file exists, Hoplite won't raise any errors. As long as one of the three options exists (a JSON string, a filename that points to a config definition, or config_dev.yaml), Hoplite can create the config object, which is universally accessible throughout the app. We won't need any more System.getenv() calls, where the name string can't be validated; instead, we have a strongly-typed object whose properties can easily be accessed via config.geo.apiKey, for example.

val config: AppConfig = ConfigLoader.Builder()
    .addSource(JsonPropertySource(System.getenv("CONFIG_JSON") ?: System.getProperty("CONFIG_JSON") ?: "{}"))
    .apply {
        // Hoplite handles files on the classpath very well;
        //   it takes a little extra work to load an arbitrary file from the filesystem
        (System.getenv("CONFIG_FILENAME") ?: System.getProperty("CONFIG_FILENAME"))?.let { configFile ->
            if (!File(configFile).exists())
                throw Exception("specified config file ($configFile) doesn't exist")
            addSource(ConfigFilePropertySource(ConfigSource.FileSource(File(configFile)), optional = true))
        }
    }
    .addSource(PropertySource.resource("/config_dev.yaml", optional = true))
    .build()
    .loadConfigOrThrow<AppConfig>()

This flexibility is essential, as we'll see, because each of our deployment targets has different allowances for external config files. If we had to rely exclusively on a config file, we'd be locked out of certain deployment targets, or forced to include our config file within our JAR file, which is insecure. In fact, let's take a look at the differences between our deployment targets:

                        Linux VPS   AWS Lambda    Heroku
accepts a config file   yes         no            no
any Java version        yes         8, 11         yes
access to filesystem    yes         no            limited
deployment method       flexible    JAR upload    git
admin responsibility    full        AWS-centric   none

Deployment targets

A basic VPS, like those offered by Digital Ocean, Vultr, Amazon EC2, or any of the inexpensive providers you might find at LowEndBox, has the most flexibility, but also places the most burden on the developer to set up, administer and secure the server. You'll have to update all the necessary system tools on the remote operating system, set up user accounts, implement security policies, prepare the file system, install Java, install a web server/reverse proxy, and much more. Much of this can be automated with tools like Ansible, but learning those platforms and creating a perfect setup script is far from trivial. Of course, once you have a working server, you have full control: you can deploy your application with an SCP/SFTP upload, with a git pull, by downloading your JAR from another source, or by importing a Docker image. You also have the ability to upload any type of config file and to edit it in place. You can set a strong security policy on the file to make it unreadable by anyone but admins.

The primary benefit of AWS Lambda is the elimination of all the admin tasks above. All you need to do is create functions, and Lambda will run them. It's also nearly infinitely scalable, although we're not going to worry about scale when we are just launching. The problem with the promise of no administration is that deployment on Lambda does require specialized knowledge of the AWS ecosystem. It isn't true that a function "just runs." It needs a trigger to tell it to run, a user account, a security policy, a VPC to call other web services, and a CloudWatch logging configuration. If you need access to storage (S3) or a database, you'll need accounts that can securely access those resources as well. Here is an automated script to set up a very basic Lambda function; you'll notice it requires a Role, a Role Policy, an API, and Permissions.

Once you do have your Lambda set up, whether you do it manually or use a service like Terraform or Pulumi to help automate it, it runs reliably and consistently. Every call is logged, along with its duration and memory usage. And Amazon offers a very generous free tier: 1 million free requests per month (up to 400,000 GB-seconds of compute), which is a huge incentive to plow through and learn how to navigate its ecosystem. Lambdas don't offer any persistent storage, however, so we cannot use a config file unless we set up a separate S3 storage bucket. Even then, accessing an S3 file object is not the same as accessing a regular file, and reading it will likely require specialized AWS libraries in your code. Alternatively, we can set an environment variable directly within the Lambda definition, and store a JSON string there.

Heroku is more limited in scope, but in a sense, its limitations make our decisions easier. You cannot store an external config file on Heroku. Same as Lambda, you will have to store the configuration as a JSON string in an environment variable. Heroku has more flexibility, however, in selecting the platform to run our code. AWS Lambda only supports Java v8 and v11, while Heroku allows 7 to 16 (as of Aug. 2021). Heroku's biggest (only?) weakness is price. If your application outgrows its low-cost Hobby plan, the price of its "dynos" can add up fast. 1 CPU and 1GB RAM is $50 monthly, while the same spec of VPS at Vultr is $5. But it's nearly impossible to beat Heroku's ease of deployment (git push) and lack of administration.

Let's get to the heart of this article: performing deployments. There are plenty of Hello World articles and tutorials out there; my issue with them is that they're too simple and rarely address real-world concerns, especially configuration and interaction with other necessary services. I have prepared a basic app that should be easy to follow but goes beyond Hello World, as it connects to an external API and to a database.

Our demo application

Imagine you are the owner of the Border Inn, at the eastern edge of "The Loneliest Road in America," U.S. Route 50 in Nevada. You're lonely, so you want to encourage people to visit you! Our application will allow any user to enter his or her location, and we'll reply with a total distance and driving directions, thanks to an external geo API. We're also going to save the state and ZIP code of each query in our database (in case people input their exact address, we don't want to save any personal info; the state and ZIP code are enough), and later we can get a list of the most popular origins, so we can prepare to welcome our new guests! The repo can be found at

Of course, that repo does NOT include any configuration details! Our passwords and keys must be kept outside of our source code and never checked in. I've already specified the configuration in the code block above. With Hoplite, we can prepare a YAML file such as

portNumber: 9000

destination:
  name: The Border Inn
  lat: 39.05628
  lon: -114.04906
  timeZone: America/Los_Angeles

geo:
  apiKey: my_api_key

db:
  url: jdbc:h2:/opt/borderinn/searchdb
  username: myusername
  password: mypassword

or a JSON file such as

   "portNumber": 9000,
   "destination": {
      "name": "The Border Inn",
      "lat": 39.05628,
      "lon": -114.04906,
      "timeZone": "America/Los_Angeles"
   "geo": {
      "apiKey": "my_api_key"
   "db": {
      "url": "jdbc:h2:/opt/borderinn/searchdb",
      "username": "myusername",
      "password": "mypassword"

I certainly think the YAML version is easier to work with, but the JSON version is important because, as we will see, we'll need to copy it as a single string onto some of our deployment targets. Because YAML relies on indentation and line breaks, it doesn't collapse nicely into a one-line string. You'll also note that the destination is entirely configurable, so if we're successful at driving traffic to the Border Inn, we can launch the same app for another very lonely place without any code changes; we'll only need to edit our configuration.

Our app reads and writes to an external database. This raises the question: which database should we use? If you choose right off the bat to go with a full-scale database server such as PostgreSQL, then there will be little difference between deployment targets, other than changing the connection string. Both Heroku and AWS offer a hosted PostgreSQL service, and both are relatively easy to set up. Of course, you aren't limited to the platform's own offering. You could start a Heroku Postgres database and connect to it from your VPS or from Lambda. You can also go with an entirely third-party service like ScaleGrid. And of course there are many database servers other than PostgreSQL.

However, given that our Kotlin app runs on the JVM, we should take a look at one of the platform's hidden gems, the H2 Database. H2 is a remarkably robust and flexible file-based database written in Java. SQLite gets lots of love (deservedly) due to its ease of use and especially its portability, since a file-based database doesn't require any setup or servers. H2 does require the JVM, but if that's already your platform of choice, it is packed with features and extremely fast. Additionally, H2 can be run in-memory only, without creating a file. It's just a fantastic database with many, many uses.
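For a flavor of how simply our app can talk to H2, here's a sketch using plain JDBC; the searches table and the saveSearchOrigin() helper are hypothetical, but the connection details come straight from the config object defined earlier:

import java.sql.DriverManager

// record the origin of a query -- the H2 driver only needs to be on the classpath
fun saveSearchOrigin(state: String, zip: String) {
    DriverManager.getConnection(config.db.url, config.db.username, config.db.password).use { conn ->
        conn.prepareStatement("INSERT INTO searches (state, zip) VALUES (?, ?)").use { stmt ->
            stmt.setString(1, state)
            stmt.setString(2, zip)
            stmt.executeUpdate()
        }
    }
}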

Deploying on a VPS

A VPS, or Virtual Private Server, is simply a base operating system running on a server somewhere out in the cloud. You have full control over the server, just like over your desktop or laptop machine. As stated earlier, however, you are also entirely responsible for security and for ensuring the system has all the services necessary to run your app. As with any server accessible online, your server will constantly be probed and poked by potential hackers hoping to find obvious vulnerabilities. You'll need to install Java and prepare a logging solution, at a minimum. There are many, many articles about preparing a server; you are advised to read them.

On to the deployment details: most commonly, you will build the JAR file locally on your desktop using the shadowJar command, then connect to the remote server via SSH and upload the JAR. There are many tools to establish an SSH connection, including the command line and the ever-popular PuTTY. Recently the Bitvise SSH Client has become my favorite, by far. I think it does a better job than PuTTY of managing keys, and it includes both a terminal window and an SFTP window for each connection; PuTTY only does terminals. With Bitvise, uploading the JAR file is as simple as establishing the connection, then dragging-and-dropping from the local machine to the proper directory on the remote server.

Since you have full control of a VPS, you can easily use our original YAML config file, uploaded the same way. If you need to edit the config at some point, this can easily be done by uploading a new version or, more conveniently, editing directly in the terminal with nano. Additionally, since we have full access to the filesystem, we can also easily upload an H2 database file with our schema already prepared, and set our config to point to it.

To launch our app, we simply need to run a command in the terminal: java -DCONFIG_FILENAME=/opt/borderinn/config.yaml -jar /opt/borderinn/BorderInnDirections.jar. Of the three config options (a JSON environment variable, a specified external file, or a default external file), we specify the location of our external file. (Note that the default file is typically only used to set the config while developing.) We can test the service with curl: curl localhost:9000/from/Denver.

We've got two issues, however. First, if our server ever goes down, or we need to reboot for any reason, we'll need to manually restart our app. What we want, instead of just an application, is a service that is always running and always available. The other problem is a bit more subtle. Currently, yes, our app can be reached by any computer over the internet, and if we configure the port to 80, nobody will need to specify the port in the URL. But the http4k server (and this would be the same for nearly all frameworks) is missing essential features that a full-fledged web server provides. We have no secure connections and no certificate management, no client logging, no ability to serve up static assets directly without calling our application, and no ability to load-balance or to serve additional applications on the same incoming ports. None of this may matter for our demo application, but any real application should have a real web server handling incoming traffic, if only to enable TLS.

Therefore, while the deployment of our app is relatively simple on a VPS, it is actually incomplete until we install a real web server and set our app up as a service. There are many web servers to choose from (nginx, Caddy, HAProxy, Apache, and more), so choosing and installing one will be left to the reader. I've found that Caddy may be the easiest to set up, as its configuration file is short (for basic usage) and it fully manages installing and renewing Let's Encrypt SSL certificates. Creating a service from a Java application is described in various articles; some good starting points are this Stack Overflow question and this Baeldung article.
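To give a rough idea of what those two missing pieces look like, here is a minimal sketch of a systemd unit and a Caddy config for our app. The paths, user and domain are illustrative assumptions, not part of the repo:

# /etc/systemd/system/borderinn.service -- keeps the app running as an always-on service
[Unit]
Description=Border Inn directions API
After=network.target

[Service]
User=borderinn
WorkingDirectory=/opt/borderinn
ExecStart=/usr/bin/java -DCONFIG_FILENAME=/opt/borderinn/config.yaml -jar /opt/borderinn/BorderInnDirections.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target

# /etc/caddy/Caddyfile -- TLS termination and reverse proxy to the app's port
borderinn.example.com {
    reverse_proxy localhost:9000
}

After enabling the unit (systemctl enable --now borderinn), the app restarts with the server, and Caddy handles certificates and forwards incoming traffic to port 9000.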

Deploying on AWS Lambda

Lambda is the exact opposite experience of a VPS. Instead of having full control over the server, Lambda offers you none. This is good and bad. All the maintenance and administration tasks are gone. If you happen to have a popular service, a VPS setup gets more complicated as you must arrange scaling across servers and load-balancing, but Lambda will just keep serving up your function without issue.

The biggest change that we need to prepare for on Lambda is that AWS handles all message ingress and egress via its API Gateway. Just like the Caddy web server described in the VPS section, our application will actually sit behind the API Gateway. But a big difference is that Caddy and other web servers typically just pass along the HTTP request in its native form, while Lambda converts the request into its own proprietary API format. All the pieces of the original request are there, of course, but the whole request has been converted to a JSON object with nested objects like queryStringParameters.

There are times when this is enormously useful, especially when it isn't trivial to embed an HTTP server directly into your application, as with a Python application. But http4k lets us insert a server with just a single line of code! Fortunately, the http4k developers created an integration with AWS that we can activate with a single line: class GatewayListener : ApiGatewayV2LambdaFunction(appRoutes). Now, instead of having Lambda activate our main() method upon startup, the new class becomes the entry point. Our internal server never starts (that's inside main()); instead, the ApiGatewayV2LambdaFunction translates all incoming messages from the API Gateway back into HTTP Requests that our Routes handle normally.
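In context, that one line looks something like the sketch below; it assumes appRoutes is the same routing handler our main() passes to the local server, and that the http4k-serverless-lambda module is on the classpath:

import org.http4k.serverless.ApiGatewayV2LambdaFunction

// Lambda's entry point: translates API Gateway (v2) events into http4k Requests and back
// appRoutes is the same routing handler that main() serves locally
class GatewayListener : ApiGatewayV2LambdaFunction(appRoutes)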

Lambda offers multiple options to actually get our code onto the Amazon servers, detailed on this page. The most straightforward is to simply build our shadowJar and then upload it via the web interface. If you prefer the command line, the AWS CLI can push the JAR with one command, like aws lambda update-function-code --function-name my-function --zip-file fileb://my-function.jar. There's also something called AWS SAM (the Serverless Application Model), but if you're heading down this path, you're likely committed to only using Lambda and embedding deeply into the AWS ecosystem.

Lambda provides zero access to the filesystem, so we cannot use a config file. Instead, we can take the JSON version and, using the web interface (on the Configuration tab), create an Environment Variable named CONFIG_JSON with our JSON string as the value. Hoplite will load that first; since it won't find any of the specified files, it will simply skip those sources. While we're configuring the function, we should go to the Code tab and set the Handler not to MainKt, but to our new GatewayListener class.

Without access to the filesystem, we cannot use H2 as our database, since it's file-based. Therefore, be aware that some external service will be required. AWS provides numerous data storage options, however, if you'd like to use Dynamo or some of the other services, it will require code changes that are AWS-exclusive.

As mentioned earlier, there is a bit of expertise needed to connect all the required AWS assets, such as Users, Policies and Roles, and services like the Gateway and CloudWatch. This article is focused on the deployment of an application, however, so it is recommended to look for AWS-centric tutorials. One tip: I have found that creating an API Gateway from within your Lambda function's page will not work correctly; you'll need to go to the API Gateway service and manually create one, HTTP v2 (not REST), integrated with Lambda, using the $default stage and no routes other than $default. There are also services like Terraform or Pulumi which help automate Lambda preparation with pre-defined scripts.

Deploying on Heroku

Heroku is like a hybrid of a VPS and Lambda. Like Lambda, we aren't burdened with system administration. Once deployed, our app will just keep running. Unlike Lambda, there is a semi-persistent filesystem. Unfortunately, we cannot access it directly, nor can we SCP/SFTP files to it, including our config file. We'll have to set an Environment Variable with our JSON config, just as we did on Lambda. While we're setting Environment Variables, we need to set one more: GRADLE_TASK, with the value shadowJar. As we will see in a minute, Heroku is going to build our JAR, so it needs to know which Gradle task to run.

Heroku offers three ways to deploy our application. The first two require us to install Heroku's CLI tool. The more traditional way, where we build our shadowJar and then upload it, can be done via the Java CLI plugin. It only requires a single command: heroku jar:deploy <path_to_jar> --app <appname>. The second way is to set Heroku as your repo's remote (the Heroku CLI can do this for you via heroku create); then all subsequent deployments only require a git push. Not only is this super-convenient, it also makes it very easy to deploy upon every git commit, following the principles of Continuous Deployment (in fact, if we host our repo on GitHub, then Heroku can re-build and re-deploy upon every new commit). No need to separately build our JAR and then upload it. When we push our code, the JAR is built on Heroku's servers, based on the GRADLE_TASK we set. Remember when we talked about the Gradle wrapper earlier? It ensures that the specific Gradle version we're using locally will be the same version Heroku uses to build our JAR. The third way, if you are using GitHub, is to set that as your repo's remote (instead of Heroku) and enable Heroku's GitHub integration. Now all three copies stay synced: local machine, GitHub, and the Heroku production app. This repo-based continuous deployment, by default, is where Heroku really shines.
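Assuming the git-push route, the whole flow looks roughly like this (the app name and config value are placeholders, not from the repo):

# one-time setup: create the app and add the 'heroku' git remote
heroku create borderinn-directions

# tell Heroku which Gradle task builds our JAR, and supply the JSON config
heroku config:set GRADLE_TASK=shadowJar
heroku config:set CONFIG_JSON='{"destination": {"name": "The Border Inn", ...}}'

# every deployment afterwards is just:
git push heroku main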

Heroku requires we check in 4 additional files to our git repo. Two of them already exist: /gradle/wrapper/gradle-wrapper.jar and /gradle/wrapper/gradle-wrapper.properties. The third is new, /system.properties, which allows us to specify some Heroku options, most importantly the line java.runtime.version=11 (or possibly a different version). Often, this is the only line necessary. Finally, we must create a Procfile which tells Heroku what command to use to launch our app. This file is also likely just one line, web: java -jar $JAVA_OPTS build/libs/<jar file name>.jar. We don't even have to specify -DCONFIG_JSON in the command, as Heroku exposes its config vars as environment variables, which Hoplite reads directly.

Heroku also determines what port the application will listen on. On our dev machine and on a VPS, we decide, and set it in the config file. Therefore, on Heroku only, it is important to remove the port number from the config JSON. Heroku provides a PORT variable, which we pick up in code via the line val portNumber: Int = System.getenv("PORT")?.toInt() ?: 0 (the zero default is there for Lambda, which doesn't use any ports).

Heroku provides access to a full filesystem, but it is only temporary. It would be tempting to use it to run an H2 database file, and it is possible to do so, but that file will disappear upon the next code deploy. Instead, like with Lambda, we'll be best served using a separate, persistent database. Like AWS, Heroku itself offers PostgreSQL, but we can use any PostgreSQL (or other database) provider.

Wrapping up

In this article, we discussed building a Kotlin app with the help of the shadowJar Gradle plugin, which makes it very easy to package our application into a single file, for easy deployment on our VPS or AWS Lambda, and for easy building by Heroku. Next, we learned how to utilize Hoplite to create a configuration setup that will, without further code changes, allow us to set those config parameters easily on any of the 3 platforms, with the added benefit of type checking, static naming, and early error notifications if there are any problems with our config.

Then we learned how to deploy our demo Border Inn app on all 3 platforms, including the small code additions needed for compatibility with the AWS API Gateway and with Heroku's deployment details. Fortunately, the extra code, like the GatewayListener class or the Heroku Procfile, does not interfere at all with the other deployments. Our app, therefore, is essentially portable between all 3 platforms.

Future steps

Continuous Deployment is a very powerful, and addictive, methodology. Heroku's ability to automatically update our application based on the latest code commit, without any manual interaction, is fantastic. Although the other 2 platforms don't have CD built in, there are options. Amazon offers its CodePipeline service to get our code onto Lambda. There are also third-party services and platforms which focus solely on Continuous Integration / Continuous Delivery: Jenkins, CircleCI, and others. Bitbucket and GitLab are alternatives to GitHub, and both offer their own CI/CD pipelines. All of these tools and systems allow you to create Heroku-like deployment on your VPS or AWS. You are definitely encouraged to investigate and set up a CI pipeline, as it will alleviate many admin and DevOps tasks and make sure your latest code is delivered directly to production.

We covered a lot of ground in this article. Found an error? Have a better solution? I look forward to hearing from you at the email address below. Thanks for reading!