Set up your JSON-RPC service with the µship stack

To get started, you have to create a new project. This part uses Maven to illustrate the process but it is easily adaptable to Gradle or any Java-based build.

To create a new Apache Maven project you can use mvn archetype:generate, but we recommend simply creating a folder and writing the pom manually to avoid inheriting from a legacy setup.

Here is a pom.xml template you can use to get started:

pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  (1)
  <groupId>com.company</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <properties>
    <maven.compiler.java.version>11</maven.compiler.java.version>

    <uship.version>...</uship.version> (2)
  </properties>

  <dependencies>
    <dependency> (3)
      <groupId>io.yupiik.logging</groupId>
      <artifactId>yupiik-logging-jul</artifactId>
      <version>1.0.5</version>
      <classifier>jakarta</classifier>
      <scope>runtime</scope>
    </dependency>
    <dependency> (4)
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter</artifactId>
      <version>5.9.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <dependencyManagement>
    <dependencies>
      <dependency> (5)
        <groupId>io.yupiik.uship</groupId>
        <artifactId>bom</artifactId>
        <version>${uship.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <build>
    <plugins>
      <plugin> (6)
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <version>3.2.0</version>
        <configuration>
          <encoding>UTF-8</encoding>
        </configuration>
      </plugin>
      <plugin> (7)
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <source>${maven.compiler.java.version}</source>
          <target>${maven.compiler.java.version}</target>
          <release>${maven.compiler.java.version}</release>
          <encoding>UTF-8</encoding>
          <parameters>true</parameters>
        </configuration>
      </plugin>
      <plugin> (8)
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.0.0-M5</version>
        <configuration>
          <trimStackTrace>false</trimStackTrace> (9)
          <systemPropertyVariables> (10)
            <java.util.logging.manager>io.yupiik.logging.jul.YupiikLogManager</java.util.logging.manager>
          </systemPropertyVariables>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
  1. Make sure to define your project metadata; note that this setup is compatible with a multi-module project too,

  2. Define uship version as a variable for easier upgrades (not required),

  3. We use Yupiik Logging to get more cloud-friendly logging but this is not required at all, skip this dependency if not desired (if you prefer Log4j2 or similar, use its JUL binding for example),

  4. We want to write tests with JUnit 5 so we add it as a test dependency,

  5. We import the UShip bom (type pom, scope import) to get dependency versions right,

  6. We force the encoding for resources to avoid surprises (it is OS-dependent otherwise),

  7. We force the compiler to use the Java version we want (note that you can use any version >= 11),

  8. We force the surefire version to ensure we are JUnit 5 compatible,

  9. We prevent surefire from trimming the stack trace when an exception is thrown - in general trimming swallows the information you need to understand why a test failed,

  10. We force Yupiik logging manager (if you don’t use Yupiik Logging, skip it).

Note
µship does not use a parent pom to set this up automatically because:

  1. it can quickly get outdated with transitive dependencies, and projects must be able to update any plugin/dependency without waiting for a new µship release,

  2. you can use other plugins (junit-platform-maven-plugin instead of maven-surefire-plugin for example, Spock, etc.),

  3. it is saner to use a project-related parent than a cross-project parent, which is a bad practice and breaks several Maven features/integrations.

At this stage we have a good "parent" pom, but to be able to code against the stack you need to add the related dependencies. The simplest is to add this one:

<dependency>
  <groupId>io.yupiik.uship</groupId>
  <artifactId>jsonrpc-core</artifactId>
  <version>${uship.version}</version>
</dependency>

From here you can develop JSON-RPC endpoints.

Create JSON-RPC endpoints

Creating a JSON-RPC endpoint is about marking a bean with the qualifier @JsonRpc and some method(s) with @JsonRpcMethod:

@JsonRpc (1)
@ApplicationScoped (2)
public class MyEndpoints {
    @JsonRpcMethod(name = "test1") (3)
    public Foo test1(@JsonRpcParam final String in) { (4)
        // ...
    }
}
  1. Defines the class as containing JSON-RPC methods,

  2. Since the class will match a CDI bean, it can use any relevant scope. We strongly encourage you to use @ApplicationScoped if possible for performance and consistency but it is not required,

  3. @JsonRpcMethod defines a method usable over the JSON-RPC transport (a servlet by default). The name attribute must be unique per deployment and we highly recommend setting the documentation attribute (illustrated below),

  4. The method can then define its return type and inputs as any JSON-B friendly types. Inputs can be marked with @JsonRpcParam to set their documentation.
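
For illustration, here is what a fully documented method could look like (a sketch only; it assumes a documentation attribute on both annotations as described in (3) and (4) - check the actual signatures of your µship version):

@JsonRpc
@ApplicationScoped
public class MyDocumentedEndpoints {
    @JsonRpcMethod(name = "test1", documentation = "Looks up the entity matching the given key.")
    public Foo test1(@JsonRpcParam(documentation = "The lookup key.") final String in) {
        // ...
    }
}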

Tip
the JSON-RPC implementation supports by-position calls (parameters are passed in order) and by-name calls (JsonRpcParam#value). If not set explicitly, the name is taken from the parameter bytecode name; it is highly recommended to pass -parameters to javac to get the same names as in the source code. Also keep in mind that the order and names then become part of your contract.
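
Concretely, both of these requests target the test1 method above, the first one by position and the second one by name (illustrative payloads):

{"jsonrpc":"2.0","method":"test1","params":["some-value"]}
{"jsonrpc":"2.0","method":"test1","params":{"in":"some-value"}}

In both cases "some-value" is bound to the in parameter; the name in is only preserved at runtime when -parameters is passed to javac.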

Document JSON-RPC endpoints

If your API is fully described - i.e. the documentation attributes are set in the annotations - you can generate your endpoint documentation using the jsonrpc-documentation module, in particular its io.yupiik.uship.jsonrpc.doc.AsciidoctorJsonRpcDocumentationGenerator class.

You have to add this dependency to your pom.xml:

<dependency>
    <groupId>io.yupiik.uship</groupId>
    <artifactId>jsonrpc-documentation</artifactId>
    <version>${uship.version}</version>
</dependency>

Then add the following exec-maven-plugin executions to your build:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>...</version>
    <executions>
      <execution> (1)
        <id>api-asciidoc</id>
        <phase>process-classes</phase>
        <goals>
            <goal>java</goal>
        </goals>
        <configuration>
            <mainClass>io.yupiik.uship.jsonrpc.doc.AsciidoctorJsonRpcDocumentationGenerator</mainClass>
            <includeProjectDependencies>true</includeProjectDependencies>
            <arguments>
                <argument>My JSON-RPC API</argument> <!-- document title -->
                <argument>com.company.MyEndpoints1,com.company.MyEndpoints2,...</argument> <!-- classes -->
                <argument>${project.build.directory}/generated-doc/api.adoc</argument> <!-- output -->
            </arguments>
        </configuration>
      </execution>
      <execution> (2)
        <id>api-openrpc.json</id>
        <phase>process-classes</phase>
        <goals>
            <goal>java</goal>
        </goals>
        <configuration>
            <mainClass>io.yupiik.uship.jsonrpc.doc.OpenRPCGenerator</mainClass>
            <includeProjectDependencies>true</includeProjectDependencies>
            <arguments>
                <argument>My JSON-RPC API</argument> <!-- OpenRPC title -->
                <argument>com.company.MyEndpoints1,com.company.MyEndpoints2,...</argument> <!-- classes to use -->
                <argument>${project.build.directory}/generated-doc/openrpc.json</argument> <!-- output -->
                <argument>https://api.company.com/jsonrpc</argument> <!-- base -->
                <argument>true</argument> <!-- formatted -->
            </arguments>
        </configuration>
      </execution>
    </executions>
</plugin>
  1. Will generate the textual (Asciidoctor) documentation of your contract from the classes listed in the arguments,

  2. Will generate an OpenRPC (JSON) contract from the classes listed in the arguments.
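
With these executions in place, running mvn process-classes (or any later phase such as package) should generate both files under target/generated-doc/.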

Optimize your JSON-RPC execution

As with any bulk-friendly solution, you can optimize the JSON-RPC execution by implementing a kind of "execution plan" for the request. There are several cases where this can be useful:

  1. You receive a bulk request (array) which does N > 1 atomic findById calls and you want to replace them with a single findByIds,

  2. You have a custom bulk method,

  3. You have a bulk request which can be optimized by merging multiple requests (in which case, for example, the result of the first one can be dropped and only the last one used).

Let’s take a concrete example:

You receive:

[
  {"jsonrpc":"2.0","method":"entityfindById","params":{"id":"1"}},
  {"jsonrpc":"2.0","method":"entityfindById","params":{"id":"2"}}
]

If you keep it this way you will do 2 queries (assume SQL ones for example). The idea is to replace them with an alternative execution which does a single query.

One option, if you already have a method enabling that, is to replace the method calls and then dispatch the results:

[
  {"jsonrpc":"2.0","method":"entityfindByIds","params":{"ids":["1","2"]}}
]

This can be done by rewriting the request this way:

@Specializes
@ApplicationScoped
public class EnrichedJsonRpcHandler extends JsonRpcHandler {
    @Inject
    private RequestRewriter requestRewriter; // your own impl

    @Override
    public JsonStructure readRequest(final HttpServletRequest request, final Reader reader) throws IOException {
        return requestRewriter.rewrite(request::setAttribute, super.readRequest(request, reader));
    }
}

The issue then is to dispatch the result since instead of having 2 findById results you get a single findByIds one. The trick is to pass a state in the HttpServletRequest as an attribute and use it in handleRequest to post-process the output:

@Specializes
@ApplicationScoped
public class EnrichedJsonRpcHandler extends JsonRpcHandler {
    @Inject
    private RequestRewriter requestRewriter;

    @Override
    public JsonStructure readRequest(final HttpServletRequest request, final Reader reader) throws IOException {
        return requestRewriter.preProcess(request::setAttribute, super.readRequest(request, reader)); (1)
    }

    @Override
    public CompletionStage<Response> handleRequest(final JsonObject request,
                                                   final HttpServletRequest httpRequest,
                                                   final HttpServletResponse httpResponse) {
        return super.handleRequest(request, httpRequest, httpResponse)
                .thenApply(res -> requestRewriter.postProcess(httpRequest::getAttribute, res)); (2)
    }
}
  1. We rewrite the request before its execution,

  2. We process the response after its execution (take care of error cases).

As a guide, here is a skeleton for the request rewriter:

@ApplicationScoped
public class RequestRewriter {
    public JsonStructure preProcess(final BiConsumer<String, Object> attributeSetter, final JsonStructure structure) {
        if (isFindByIds(structure)) { // if it is a request we can rewrite
            // store the post-process callback - this keeps postProcess generic
            attributeSetter.accept("RequestRewriter.postProcess", (Function<Response, Response>) this::dispatchFindByIds);
            // rewrite the request
            return flattenFindByIds(structure);
        }
        return structure;
    }

    public Response postProcess(final Function<String, Object> attributeGetter, final Response result) {
        return ofNullable(attributeGetter.apply("RequestRewriter.postProcess"))
                .map(it -> (Function<Response, Response>) it)
                .map(it -> it.apply(result))
                .orElse(result);
    }
}
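
For completeness, here is one possible shape for the isFindByIds check used above (a sketch based on the jakarta.json API; the entityfindById method name comes from the sample request):

private boolean isFindByIds(final JsonStructure structure) {
    if (structure.getValueType() != JsonValue.ValueType.ARRAY) { // only bulk (array) requests qualify
        return false;
    }
    final var array = structure.asJsonArray();
    return array.size() > 1 && array.stream() // all entries must target the same method to be mergeable
            .allMatch(it -> it.getValueType() == JsonValue.ValueType.OBJECT
                    && "entityfindById".equals(it.asJsonObject().getString("method", "")));
}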

An alternative is to just override execute to implement the alternative execution paths there:

@Specializes
@ApplicationScoped
public class EnrichedJsonRpcHandler extends JsonRpcHandler {
    @Override
    public CompletionStage<?> execute(final JsonStructure request, final HttpServletRequest httpRequest, final HttpServletResponse httpResponse) {
        if (shouldBeRewritten(request)) { // to define with your rules
            return alternativeImplementation(request);
        }
        return super.execute(request, httpRequest, httpResponse);
    }
}

If you want a more complete example of an execution plan you can read the execution plan example page.

Postman collection for JSON-RPC endpoint

Similarly to the Asciidoctor documentation, you can generate a collection of JSON-RPC requests using the PostmanCollectionGenerator main. It takes an OpenRPC file (you can get it with the openrpc method) and outputs a Postman collection file.
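
For example, it can be chained after the OpenRPC generation with another exec-maven-plugin execution (a sketch: the fully qualified main class name and the argument order - input OpenRPC file then output file - are assumptions to validate against your µship version):

<execution>
  <id>api-postman</id>
  <phase>process-classes</phase>
  <goals>
    <goal>java</goal>
  </goals>
  <configuration>
    <mainClass>io.yupiik.uship.jsonrpc.doc.PostmanCollectionGenerator</mainClass>
    <includeProjectDependencies>true</includeProjectDependencies>
    <arguments>
      <argument>${project.build.directory}/generated-doc/openrpc.json</argument> <!-- input OpenRPC file -->
      <argument>${project.build.directory}/generated-doc/postman.json</argument> <!-- output -->
    </arguments>
  </configuration>
</execution>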

Persistence

Since UShip is mainly CDI based, it is compatible with any kind of persistence layer, from SQL to NoSQL. However, for common simple cases, we ship a small JDBC mapper in our io.yupiik.uship:persistence module.

Its scope is not to replace JPA but to provide a very light ORM for simple cases. It only supports flat mapping - relationships must be managed by you, which also means no magic nor lazy queries ;) - and transactions are managed through the DataSource. It works if the Connection is in auto-commit mode or if you handle the commit through a transactional interceptor for example.

The entry point is the Database.of(configuration) factory; all operations are then available on the database instance.

Here are some examples:

final var database = Database.of(new Configuration().setDataSource(dataSource));
final var entity = database.getOrCreateEntity(MyFlatEntity.class);
final var ddl = entity.ddl();
// execute the statement on a Connection to create the table

final var entity = ...;
database.insert(entity);
final var found = database.findById(MyEntity.class, "myid");
database.update(entity);
database.delete(entity);
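
The generated DDL can be executed with plain JDBC, for example (a sketch assuming ddl() returns a single SQL statement):

try (final var connection = dataSource.getConnection();
     final var statement = connection.createStatement()) {
    statement.execute(ddl); // creates the table backing the entity
}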

Mapping is as simple as:

@Table("MY_ENTITY")
public class MyEntity {
    @Id
    private String id;

    @Column // mark the field as persistent
    private String name;

    @Column(name = "SIMPLE_AGE") // rename the field
    private int age;

    @OnInsert
    private void onInsert() {
        id = MyIDFactory.create(); // any custom way to create an ID, like a UUID (recommended)
    }

    @OnUpdate
    private void onUpdate() {
        // no-op
    }

    @OnDelete
    private void onDelete() {
        // no-op
    }
}

For more advanced cases you can use query and batch methods from the Database instance.
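
For instance, reusing the mapping above, a custom query could look like this (a sketch; the signature mirrors the database.query usage shown in the advanced queries section below, and the threshold value is arbitrary):

final var adults = database.query(
        MyEntity.class,
        "SELECT id, name, SIMPLE_AGE FROM MY_ENTITY WHERE SIMPLE_AGE >= ?",
        b -> b.bind(18));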

Tip
to set up a DataSource you can rely on org.apache.tomcat:tomcat-jdbc and the TomcatDataSource extension, which enables binding a connection to a thread to reuse it in your code if needed.
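
For instance, a plain tomcat-jdbc pool can be wired in directly (a sketch; the driver, URL and credentials are placeholders to adapt):

final var dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setDriverClassName("org.h2.Driver"); // any JDBC driver
dataSource.setUrl("jdbc:h2:mem:app");
dataSource.setUsername("sa");
dataSource.setPassword("");
final var database = Database.of(new Configuration().setDataSource(dataSource));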

Advanced queries

For advanced queries you can use a virtual table (it is a plain table mapping but the @Table annotation is ignored) which is used as a projection based on the query aliases:

final var sql = "SELECT DISTINCT " + String.join(", ",
        entity1.concatenateColumns(new Entity.ColumnsConcatenationRequest()
                .setPrefix("e1.").setAliasPrefix("")),
        entity2.concatenateColumns(new Entity.ColumnsConcatenationRequest()
                .setPrefix("e2.").setAliasPrefix("e2").setIgnored(Set.of("e1_id")))) + " " +
        "FROM ENTITY1 e1" +
        " LEFT JOIN ENTITY2 e2 on e2.e1_id = e1.id " +
        "WHERE e1.id = ?";
final var lines = database.query(
        JoinModel.class, sql, b -> b.bind("the-id"));

with JoinModel being something like:

@Table(name = "ignored")
public class JoinModel {
    // e1
    @Id
    private String id;
    @Column
    private String name;
    // e2
    @Id
    private String e2Id;
    @Column
    private String e2Label;
}

Or you can also use the Entity binder capability:

// can be done in a @PostConstruct
final var e2Alias = "e2";
final var e2Ignored = Set.of("e1Id");
final var sql = "SELECT DISTINCT " + String.join(", ",
        entity1.concatenateColumns(new Entity.ColumnsConcatenationRequest()
                .setPrefix("e1.").setAliasPrefix("")),
        entity2.concatenateColumns(new Entity.ColumnsConcatenationRequest()
                .setPrefix(e2Alias + '.').setAliasPrefix(e2Alias).setIgnored(e2Ignored))) + " " +
        "FROM ENTITY1 e1" +
        " LEFT JOIN ENTITY2 admin on e2.e1_id = e1.id " +
        "WHERE e1.id = ?";

// precompile the binders
var fields = database.getOrCreateEntity(Entity1.class).getOrderedColumns().stream()
            .map(Entity.ColumnMetadata::javaName)
            .collect(toList());
final var e1Binder = database.getOrCreateEntity(Entity1.class)
        .mapFromPrefix("", fields.toArray(String[]::new));

fields.addAll( // keep walking the query fields, appending the next entity's ones - the binder will pick the right column indices this way
        database.getOrCreateEntity(Entity2.class)
            .getOrderedColumns().stream()
            .filter(c -> !e2Ignored.contains(c.javaName()))
            .map(c -> c.toAliasName(e2Alias))
            .collect(toList()));
final var e2Binder = database.getOrCreateEntity(Entity2.class)
        .mapFromPrefix(e2Alias, fields.toArray(String[]::new));

// at runtime
final var lines = database.query(
        sql,
        b -> b.bind("the-id"),
        result -> {
            // bind current resultSet and iterate over each line of the resultSet
            return result.mapAll(line -> Tuple2.of(e1Binder.apply(line), e2Binder.apply(line)));
        });
// lines will hold both Entity1 and Entity2 instances; you can then filter them (checking whether an id is set for example)
// and join them as needed to create your output model
Warning
Version 1.0.2 was broken; ensure you use >= 1.0.3 to get this feature.

Query from interfaces

Light interface-based statement support is provided through the @Operation and @Statement annotations. The idea is to expose the Database capabilities through a statically typed API. Here is a sample:

@Operation(aliases = @Operation.Alias(alias = "e", type = MyFlatEntity.class))
public interface MyOps {
    @Statement("select count(*) from ${e#table}")
    long countAll();

    @Statement("select ${e#fields} from ${e#table} order by name")
    List<MyFlatEntity> findAll();

    @Statement("select ${e#fields} from ${e#table} where name = ?")
    MyFlatEntity findOne(String name);

    @Statement("select ${e#fields} from ${e#table} where name = ${parameters#name}")
    MyFlatEntity findOneWithPlaceholders(String name);

    @Statement("delete from ${e#table} where name like ?")
    int delete(String name);

    @Statement("delete from ${e#table} where name like ?")
    void deleteWithoutReturnedValue(String name);
}

The statements can be plain SQL with ? bindings or can use the available interpolations (but don't mix ${parameters#xxx} with ? bindings: you must choose one type of binding per statement):

  • ${<alias>#table}: name of the table of the entity aliased by alias,

  • ${<alias>#fields}: all columns of the entity represented by the alias,

  • ${parameters#<name>}: will be replaced by a ? binding bound to the parameter named name (using the bytecode name, so ensure you compile with the -parameters flag). It enables you to not pass the parameters in the same order as in the query; otherwise they are just bound blindly in order.

  • ${parameters#<name>#in}: will be replaced by as many ? as the size of the parameter name, surrounded by parentheses and prefixed by the in keyword (ex: in (?, ?) if the name parameter is a list of 2 items). It is useful for in where clauses (see the sketch after this list).
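
For example, the in interpolation fits naturally in a finder method (a sketch in the MyOps style above; findByNames is a hypothetical method - note the interpolation already emits the in keyword):

@Statement("select ${e#fields} from ${e#table} where name ${parameters#names#in}")
List<MyFlatEntity> findByNames(List<String> names);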

Aliases are defined through the @Operation annotation on the interface and enable a shorter syntax in the statements. You can also use the fully qualified name of the entity instead of defining aliases but it is less readable.

Going further

It is possible to enrich the JSON-RPC protocol, in particular the bulk request support, by reusing the io.yupiik.uship.jsonrpc.core.impl.JsonRpcHandler class in your own endpoints. Typical examples are an endpoint wrapping a set of requests (sub-methods) in a single transaction, endpoints propagating a state between method calls (like the second method getting the id generated by the first one), etc.