Canary Deployment with Kubernetes and Istio

Istio is a service mesh for Kubernetes that gives you fine-grained control over how traffic flows between your services. In this post we will use it to perform a canary deployment: rolling out a new version of an application to a small share of users before shifting all traffic to it. Let's start with a quick overview of Istio's traffic management capabilities:

  • Virtual Service: A VirtualService describes how traffic flows to a set of destinations. With it you configure how requests are routed to a service within the mesh. It contains an ordered list of routing rules that are evaluated in turn, after which a decision is made on where to route the incoming request (or to reject it if no route matches).

  • Gateway: Gateways manage inbound and outbound traffic for the mesh. They let you specify the virtual hosts and the associated ports that need to be opened to allow traffic into the cluster.

  • Destination Rule: A DestinationRule configures how clients in the mesh interact with your service. It is used to configure TLS settings for your sidecar, to split your service into subsets, to choose a load-balancing strategy for your clients, and so on.

For a canary deployment, the DestinationRule plays the major role: it is what we will use to split the service into subsets so that traffic can be routed between them by weight.
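To see how these pieces fit together before we build the real thing, here is a minimal sketch for a hypothetical service named myapp: the DestinationRule carves the service into two subsets by pod label, and the VirtualService shifts 10% of traffic to the newer one. The names, host, and weights here are illustrative only, not part of this post's setup.

```yaml
# Hypothetical example: split "myapp" into subsets and weight traffic between them.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp.default.svc.cluster.local
  subsets:
  - name: stable
    labels:
      version: v1
  - name: canary
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: myapp.default.svc.cluster.local
        subset: stable
      weight: 90
    - destination:
        host: myapp.default.svc.cluster.local
        subset: canary
      weight: 10
```

We will build exactly this structure for our own application in the rest of the post.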

Application deployment

For our canary deployment, we will be using the following versions of the application:

  • httpbin.org: This will be version one (v1) of our application. It is the application that's already deployed, and our aim is to partially replace it with a newer version.

  • websocket app: This will be version two (v2) of the application, to be introduced gradually.

Note that in a real-world rollout, both versions would share the same codebase. For our example, we are using two arbitrary applications to make testing easier.

Our assumption is that version one of our application is already deployed, so let's deploy that first. We will write the usual Kubernetes resources for it. The deployment manifest for the version one application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80

And let's create a corresponding service for it:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpbin
  name: httpbin
  namespace: canary
spec:
  ports:
  - name: httpbin
    port: 8000
    targetPort: 80
  - name: tornado
    port: 8001
    targetPort: 8888
  selector:
    app: httpbin
  type: ClusterIP

Next, a TLS certificate for the application, issued via cert-manager:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: httpbin-ingress-cert
  namespace: istio-system
spec:
  secretName: httpbin-ingress-cert
  issuerRef:
    name: letsencrypt-dns-prod
    kind: ClusterIssuer
  dnsNames:
  - canary.33test.dev-sandbox.fpcomplete.com

And the Istio resources for the application:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: canary
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - canary.33test.dev-sandbox.fpcomplete.com
    port:
      name: https-httpbin
      number: 443
      protocol: HTTPS
    tls:
      credentialName: httpbin-ingress-cert
      mode: SIMPLE
  - hosts:
    - canary.33test.dev-sandbox.fpcomplete.com
    port:
      name: http-httpbin
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: canary
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - canary.33test.dev-sandbox.fpcomplete.com
  http:
  - route:
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8000

The above resources define the Gateway and the VirtualService. Note that we are terminating TLS here and redirecting HTTP to HTTPS.

We also have to make sure the namespace has Istio injection enabled:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/component: httpbin
    istio-injection: enabled
  name: canary

I have the above set of Kubernetes resources managed via kustomize. Let's deploy them to get the initial environment, which consists of only the v1 (httpbin) application:

$ kustomize build overlays/istio_canary > istio.yaml
$ kubectl apply -f istio.yaml
namespace/canary created
service/httpbin created
deployment.apps/httpbin created
gateway.networking.istio.io/httpbin-gateway created
virtualservice.networking.istio.io/httpbin created
$ kubectl apply -f overlays/istio_canary/certificate.yaml
certificate.cert-manager.io/httpbin-ingress-cert created
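A couple of command-line checks can also confirm the rollout. These commands are illustrative additions (not from the original walkthrough) and assume your kubeconfig points at the right cluster and that DNS and the certificate are already in place:

```shell
# The httpbin pod should show READY 2/2: the app container plus the injected Istio sidecar.
kubectl get pods -n canary

# Plain HTTP should answer with a redirect to HTTPS, per the Gateway's httpsRedirect setting.
curl -sI http://canary.33test.dev-sandbox.fpcomplete.com/ | head -n 3
```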

Now I can verify in the browser that the application is actually up and running.


Now comes the interesting part. We have to deploy version two of our application and make sure around 20% of the traffic goes to it. Let's write the deployment manifest for it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: psibi/tornado-websocket:v0.3
        imagePullPolicy: IfNotPresent
        name: tornado
        ports:
        - containerPort: 8888

And now the DestinationRule to split the service into subsets. Note that the v2 pods are reached through the same httpbin Service, since they also carry the app: httpbin label that the Service selects on:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  namespace: canary
spec:
  host: httpbin.canary.svc.cluster.local
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2

And finally, let's modify the virtual service to route 20% of the traffic to the newer version:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: canary
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - canary.33test.dev-sandbox.fpcomplete.com
  http:
  - route:
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8000
        subset: v1
      weight: 80
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8001
        subset: v2
      weight: 20
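When the canary looks healthy, promotion is just another edit to the same VirtualService: shift the weights until v2 receives everything. A sketch of the final http route (the same resource as above, with the weights changed):

```yaml
  http:
  - route:
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8001
        subset: v2
      weight: 100
```

Conversely, if the canary misbehaves, setting the v1 route back to weight 100 rolls everyone back instantly, without touching the Deployments.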

And now if you go back to the browser and refresh a number of times (remember that only 20% of the traffic is routed to the new deployment), you will eventually see the new application.


Testing deployment

Let's make around 10 curl requests to our endpoint to see how the traffic is routed:

$ seq 10 | xargs -Iz curl -s https://canary.33test.dev-sandbox.fpcomplete.com | rg "<title>"
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
<title>tornado WebSocket example</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
<title>tornado WebSocket example</title>

And you can confirm that, out of the 10 requests, 2 were routed to the websocket (v2) application. If you have Kiali deployed, you can also visualize the traffic flow.
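You can also tally the split mechanically. Here is a small pipeline over the captured titles (the here-doc simply replays the ten lines shown above):

```shell
# Count how many of the ten responses came from each application:
# strip leading spaces, then group and count identical title lines.
sed 's/^ *//' <<'EOF' | sort | uniq -c | sort -rn
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
<title>tornado WebSocket example</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
<title>tornado WebSocket example</title>
EOF
```

This prints a count of 8 for httpbin.org and 2 for the tornado page, matching the 80/20 weights.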


And that summarizes our post on how to achieve canary deployments using Istio. While this post shows a basic example, traffic steering and routing are among Istio's core features, and it offers many ways to configure its routing decisions. You can find further details in the official docs. You can also use a controller like Argo Rollouts with Istio to perform canary deployments and get additional features like analysis and experiments.

Sep 22, 2025

Author:

Haskell

Canary Deployment with Kubernetes and Istio

Istio i

s of Istio's traffic management capabilities:

  • Virtual Service: Virtual Service describes how traffic flows to a set of destinations. Using Virtual Service you can configure how to route the requests to a service within the mesh. It contains a bunch of routing rules that are evaluated, and then a decision is made on where to route the incoming request (or even reject if no routes match).

  • Gateway: Gateways are used to manage your inbound and outbound traffic. They allow you to specify the virtual hosts and their associated ports that needs to be opened for allowing the traffic into the cluster.

  • Destination Rule: This is used to configure how a client in the mesh interacts with your service. It's used for configuring TLS settings of your sidecar, splitting your service into subs

    ets, load balancing strategy for your clients etc.

For doing canary deployment, destination rule plays a major role as that's what we will be using to split the service into subset and route traffic accordingly.

Application deployment

For our canary deployment, we will be using the following version of the application:

  • httpbin.org: This will be the version one (v1) of our application. This is the application that's already deployed, and your aim is to partially replace it with a newer version of the application.

  • websocket app: This will be the version two (v2) of the application that has to be gradually introduced.

Note that in the actual real world, both the applications will share the same code. For our example, we are just taking two arbitrary applications to make testing easier.

Our assumption is that we already have version one of our application deployed. So let's deploy that initially. We will write our usual Kubernetes resources for it. The deployment manifest for the version one application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80

And let's create a corresponding service for it:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpbin
  name: httpbin
  namespace: canary
spec:
  ports:
  - name: httpbin
    port: 8000
    targetPort: 80
  - name: tornado
    port: 8001
    targetPort: 8888
  selector:
    app: httpbin
  type

SSL certificate for the application which will use cert-manager:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: httpbin-ingress-cert
  namespace: istio-system
spec:
  secretName: httpbin-ingress-cert
  issuerRef:
    name: letsencrypt-dns-prod
    kind: ClusterIssuer
  dnsNames

And the Istio resources for the application:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: canary
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - canary.33test.dev-sandbox.fpcomplete.com
    port:
      name: https-httpbin
      number: 443
      protocol: HTTPS
    tls:
      credentialName: httpbin-ingress-cert
      mode: SIMPLE
  - hosts:
    - canary.33test.dev-sandbox.fpcomplete.com
    port:
      name: http-httpbin
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: canary
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - canary.33test.dev-sandbox.fpcomplete.com
  http:
  - route:
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8000

The above resource define gateway and virtual service. You could see that we are using TLS here and redirecting HTTP to HTTPS.

We also have to make sure that namespace has istio injection enabled:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/component: httpbin
    istio-injection: enabled
  name

I have the above set of k8s resources managed via kustomize. Let's deploy them to get the initial environment which consists of only v1 (httpbin) application:

kustomize build overlays/istio_canary > istio.yaml
kubectl apply -f istio.yaml
namespace/canary created
service/httpbin created
deployment.apps/httpbin created
gateway.networking.istio.io/httpbin-gateway created
virtualservice.networking.istio.io/httpbin created
kubectl apply -f overlays/istio_canary/certificate.yaml
certificate.cert-manager.io/httpbin-ingress-cert created

Now I can go and verify in my browser that my application is actually up and running:


Now comes the interesting part. We have to deploy the version two of our application and make sure around 20% of our traffic goes to it. Let's write the deployment manifest for it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: psibi/tornado-websocket:v0.3
        imagePullPolicy: IfNotPresent
        name: tornado
        ports:
        - containerPort: 8888

And now the destination rule to split the service:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  namespace: canary
spec:
  host: httpbin.canary.svc.cluster.local
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name

And finally let's modify the virtual service to split 20% of the traffic to the newer version:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: canary
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - canary.33test.dev-sandbox.fpcomplete.com
  http:
  - route:
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8000
        subset: v1
      weight: 80
    - destination:
        host: httpbin.canary.svc.cluster.local
        port:
          number: 8001
        subset: v2
      weight: 20

And now if you go again to the browser and refresh it a number of times (note that we route only 20% of the traffic to the new deployment), you will see the new application eventually:


Testing deployment

Let's do around 10 curl requests to our endpoint to see how the traffic is getting routed:

seq 10 | xargs -Iz curl -s https://canary.33test.dev-sandbox.fpcomplete.com | rg "<title>"
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
<title>tornado WebSocket example</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
    <title>httpbin.org</title>
<title>tornado WebSocket example</title>

And you can confirm how out of the 10 requests, 2 requests are routed to the websocket (v2) application. If you have Kiali deployed, you can even visualize the above traffic flow:


And that summarizes our post on how to achieve canary deployment using Istio. While this post shows a basic example, traffic steering and routing is one of the core features of Istio and it offers various ways to configure the routing decisions made by it. You can find more further details about it in the official docs. You can also use a controller like Argo Rollouts with Istio to perform canary deployments and use additional features like analysis and experiment.

Haskell

FP Complete Corporation Announces Partnership with Portworx by Pure Storage

FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.

Charlotte, North Carolina (August 31, 2022) – FP Complete Corporation, a global technology partner that specializes in DevSecOps, Cloud Native Computing, and Advanced Server-Side Programming Languages today announced that it has partnered with Portworx by Pure Storage to bring an integrated solution to customers seeking DevSecOps consulting services for the management of persistent storage, data protection, disaster recovery, data security, and hybrid data migrations.

The partnership between FP Complete Corporation and Portworx will be integral in providing FP Complete's DevSecOps and Cloud Enablement clients with a data storage platform designed to run in a container that supports any cloud physical storage on any Kubernetes distribution.

Portworx Enterprise gets right to the heart of what developers and Kubernetes admins want: data to behave like a cloud service. Developers and Admins wish to request Storage based on their requirements (capacity, performance level, resiliency level, security level, access, protection level, and more) and let the data management layer figure out all the details. Portworx PX-Backup adds enterprise-grade point-and-click backup and recovery for all applications running on Kubernetes, even if they are stateless.

Portworx shortens development timelines and headaches for companies moving from on-prem to cloud. In addition, the integration between FP Complete Corporation and Portworx allows the easy exchange of best practices information, so design and storage run in parallel.

Gartner predicts that by 2025, more than 85% of global organizations will be running containerized applications in production, up from less than 35% in 2019 [1]. As container adoption increases and more applications are being deployed in the enterprise, these organizations want more options to manage stateful and persistent data associated with these modern applications.

"It is my pleasure to announce that Pure Storage can now be utilized by our world-class engineers needing a fully integrated, end-to-end storage and data management solution for our DevSecOps clients with complicated Kubernetes projects. Pure Storage is known globally for its strength in the storage industry, and this partnership offers strong support for our business," said Wes Crook, CEO of FP Complete Corporation.

“There can be zero doubt that most new cloud-native apps are built on containers and orchestrated by Kubernetes. Unfortunately, the early development on containers resulted in lots of data access and availability issues due to a lack of enterprise-grade persistent storage data management and low data visibility. With Portworx and the aid of Kubernetes experts like FP Complete, we can offer customers a rock-solid, enterprise-class, cloud-native development platform that delivers end-to-end application and data lifecycle management that significantly lowers the risks and costs of operating cloud-native application infrastructure,” said Venkat Ramakrishnan, VP, Engineering, Cloud Native Business Unit, Pure Storage.

About FP Complete Corporation

Founded in 2012 by Aaron Contorer, former Microsoft executive, FP Complete Corporation is known globally as the one-stop, full-stack technology shop that delivers agile, reliable, repeatable, and highly secure software. In 2019, we launched our flagship platform, Kube360®, which is a fully managed enterprise Kubernetes-based DevOps ecosystem. With Kube360, FP Complete is now well positioned to provide a complete suite of products and solutions to our clients on their journey towards cloudification, containerization, and DevOps best practices. The Company's mission is to deliver superior software engineering to build great software for our clients. FP Complete Corporation serves over 200 global clients and employs over 70 people worldwide. It has won many awards and made the Inc. 5000 list in 2020 for being one of the 5000 fastest-growing private companies in America. For more information about FP Complete Corporation, visit its website at [www.fpcomplete.com](https://www.fpcomplete.com/).

[1] Arun Chandrasekaran, Best Practices for Running Containers and Kubernetes in Production, Gartner, August 2020

Sep 22, 2025

Author: FP Block Staff

Haskell

Reflections on Haskell and Rust

Introduction

For most of my professional experience, I have been writing production code in both Haskell and Rust, primarily focusing on web services, APIs, and HTTP stack development. My journey started with Haskell, followed by working with Rust, and most recently returning to the Haskell ecosystem.

This experience has given me perspective on both languages' strengths and limitations in real-world applications. Each language has aspects that I appreciate and miss when working with the other. This post examines the features and characteristics that stand out to me in each language.

Variable shadowing

Rust's ability to shadow variables seamlessly is something I came to appreciate. In Rust, you can write:

let config = load_config();
let config = validate_config(config)?;
let config = merge_defaults(config);

This pattern is common and encouraged in Rust, making code more readable by avoiding the need for intermediate variable names. In Haskell, you would typically need different names:

config <- loadConfig
config' <- validateConfig config
config'' <- mergeDefaults config'

Haskell's approach is slightly harder to read (it is easy to accidentally reference config when you meant config''), while Rust's shadowing makes transformation pipelines more natural.
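To make the shadowing pipeline above concrete, here is a self-contained sketch; the Config type and the load/validate/merge steps are hypothetical stand-ins, not from any real codebase:

```rust
// Hypothetical config pipeline illustrating shadowing: each
// `let config = ...` rebinds the name, so no intermediate names
// like `raw_config` or `validated_config` are needed.

#[derive(Debug, PartialEq)]
struct Config {
    retries: u32,
}

fn load_config() -> Config {
    Config { retries: 0 }
}

fn validate_config(config: Config) -> Result<Config, String> {
    if config.retries <= 10 {
        Ok(config)
    } else {
        Err("too many retries".to_string())
    }
}

fn merge_defaults(config: Config) -> Config {
    Config {
        retries: if config.retries == 0 { 3 } else { config.retries },
    }
}

fn main() -> Result<(), String> {
    let config = load_config();
    let config = validate_config(config)?; // shadows the previous binding
    let config = merge_defaults(config);   // and again
    assert_eq!(config, Config { retries: 3 });
    Ok(())
}
```

Each rebinding may even change the type of `config`, which is another thing the Haskell prime-suffix style cannot express as directly.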

Sum types of records

Rust's enum system, particularly when combined with pattern matching, feels more robust than Haskell's sum types of records. Defining a sum type of records in Haskell can introduce partial record accessors, which crash at runtime when applied to the wrong constructor (recent GHC versions do emit a compile-time warning for this pattern):

data Person = Student { name :: String, university :: String }
            | Worker { name :: String, company :: String }

-- This can crash if applied to a Student
getCompany :: Person -> String
getCompany p = company p  -- runtime error for Student

-- Safe approach requires pattern matching
-- (renamed so both versions can coexist in one module)
getCompanySafe :: Person -> Maybe String
getCompanySafe (Worker _ c) = Just c
getCompanySafe (Student _ _) = Nothing

Rust eliminates this class of errors by design:

enum Person {
    Student { name: String, university: String },
    Worker { name: String, company: String },
}

fn get_company(person: &Person) -> Option<&str> {
    match person {
        Person::Worker { company, .. } => Some(company),
        Person::Student { .. } => None,
    }
}
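As a quick usage check, the definitions can be exercised like this (repeated here so the snippet compiles on its own; the sample names and companies are made up):

```rust
// Same enum and function as above, repeated for a self-contained example.
enum Person {
    Student { name: String, university: String },
    Worker { name: String, company: String },
}

fn get_company(person: &Person) -> Option<&str> {
    match person {
        Person::Worker { company, .. } => Some(company),
        Person::Student { .. } => None,
    }
}

fn main() {
    let worker = Person::Worker {
        name: "Ada".to_string(),
        company: "Acme".to_string(),
    };
    let student = Person::Student {
        name: "Alan".to_string(),
        university: "Cambridge".to_string(),
    };
    // The match in get_company must handle every variant, so there is
    // no way to write a partial accessor that crashes at runtime.
    assert_eq!(get_company(&worker), Some("Acme"));
    assert_eq!(get_company(&student), None);
}
```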

Enum variant namespacing

Rust allows multiple enum types to have the same variant names within the same module, while Haskell's constructor names must be unique within their scope. This leads to different patterns in the two languages.

In Rust, you can define:

enum HttpStatus {
    Success,
    Error,
}

enum DatabaseOperation {
    Success,
    Error,
}

// Usage is clear due to explicit namespacing
fn handle_request() {
    let http_result = HttpStatus::Success;
    let db_result = DatabaseOperation::Error;
}

The same approach in Haskell would cause a compile error:

-- This won't compile - duplicate constructor names
data HttpStatus = Success | Error
data DatabaseOperation = Success | Error  -- Error: duplicate constructors

In Haskell, you need unique constructor names:

data HttpStatus = HttpSuccess | HttpError
data DatabaseOperation = DbSuccess | DbError

-- Usage
handleRequest = do
    let httpResult = HttpSuccess
    let dbResult = DbError
    pure (httpResult, dbResult)

Alternatively, Haskell developers often use qualified imports to achieve similar namespacing:

-- Using modules for namespacing
module Http where
data Status = Success | Error

module Database where
data Operation = Success | Error

-- Usage with qualified imports
import qualified Http
import qualified Database

handleRequest = do
    let httpResult = Http.Success
    let dbResult = Database.Error
    pure (httpResult, dbResult)

The Typename::Variant syntax in Rust makes the intent clearer at the usage site. You immediately know which enum type you're working with, while Haskell's approach can sometimes require additional context or prefixing to achieve the same clarity.

Struct field visibility

Rust provides granular visibility control for struct fields, allowing you to expose only specific fields while keeping others private. This fine-grained control is built into the language:

pub struct User {
    pub name: String,        // publicly accessible
    pub email: String,       // publicly accessible
    created_at: DateTime,    // private field
    password_hash: String,   // private field
}

impl User {
    pub fn new(name: String, email: String, password: String) -> Self {
        User {
            name,
            email,
            created_at: Utc::now(),
            password_hash: hash_password(password),
        }
    }

    // Controlled access to private field
    pub fn created_at(&self) -> DateTime {
        self.created_at
    }
}

Rust offers even more granular visibility control beyond simple pub and private fields. You can specify exactly where a field should be accessible:

pub struct Config {
    pub name: String,              // accessible everywhere
    pub(crate) internal_id: u64,   // accessible within this crate only
    pub(super) parent_ref: String, // accessible to parent module only
    pub(in crate::utils) debug_info: String, // accessible within utils module only
    private_key: String,           // private to this module
}

These granular visibility modifiers (pub(crate), pub(super), pub(in path)) allow you to create sophisticated access patterns that match your module hierarchy and architectural boundaries. Some of these patterns are simply not possible to replicate in Haskell's module system.

In Haskell, record field visibility is controlled at the type level, not the field level. This means you typically either export all of a record's fields at once (using ..) or none of them. To achieve similar granular control, you need to use more awkward patterns:

-- All fields exported - no control
module User
  ( User(..)  -- exports User constructor and all fields
  ) where

data User = User
  { name :: String
  , email :: String
  , createdAt :: UTCTime
  , passwordHash :: String
  }

-- Or no fields exported, requiring accessor functions
module User
  ( User        -- only type constructor exported
  , mkUser      -- smart constructor
  , userName    -- accessor for name
  , userEmail   -- accessor for email
  , userCreatedAt -- accessor for createdAt
  -- passwordHash intentionally not exported
  ) where

data User = User
  { name :: String
  , email :: String
  , createdAt :: UTCTime
  , passwordHash :: String
  }

mkUser :: String -> String -> String -> IO User
mkUser name email password = do
  now <- getCurrentTime
  hash <- hashPassword password
  return $ User name email now hash

userName :: User -> String
userName = name

userEmail :: User -> String
userEmail = email

userCreatedAt :: User -> UTCTime
userCreatedAt = createdAt

The Haskell approach requires writing boilerplate accessor functions and losing the convenient record syntax for the fields you want to keep private. Rust's per-field visibility eliminates this awkwardness while maintaining the benefits of direct field access for public fields.

Purity and Referential Transparency

One of Haskell's most significant strengths is its commitment to purity. Pure functions, which have no side effects, are easier to reason about, test, and debug. Referential transparency—the principle that a function call can be replaced by its resulting value without changing the program's behavior—is a direct benefit of this purity.

In Haskell, the type system explicitly tracks effects through monads like IO, making it clear which parts of the code interact with the outside world. This separation of pure and impure code is a powerful tool for building reliable software.

-- Pure function: predictable and testable
add :: Int -> Int -> Int
add x y = x + y

-- Impure action: clearly marked by the IO type
printAndAdd :: Int -> Int -> IO Int
printAndAdd x y = do
  putStrLn "Adding numbers"
  return (x + y)

While Rust encourages a similar separation of concerns, it does not enforce it at the language level in the same way. A function in Rust can perform I/O or mutate state without any explicit indication in its type signature (beyond &mut for mutable borrows). This means that while you can write pure functions in Rust, the language doesn't provide the same strong guarantees as Haskell.

// This function looks pure, but isn't
fn add_and_print(x: i32, y: i32) -> i32 {
    println!("Adding numbers"); // side effect
    x + y
}

This lack of enforced purity in Rust means you lose some of the strong reasoning and refactoring guarantees that are a hallmark of Haskell development.
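A common mitigation in Rust is to separate a pure core from an impure shell by convention; the compiler won't enforce the boundary, but the structure at least makes it visible. A minimal sketch:

```rust
// Pure core: no side effects, trivially testable without capturing output.
fn add(x: i32, y: i32) -> i32 {
    x + y
}

// Impure shell: all printing happens here, by convention only.
// Nothing in the signature distinguishes it from a pure function,
// which is exactly the gap relative to Haskell's IO type.
fn add_and_print(x: i32, y: i32) -> i32 {
    let sum = add(x, y);
    println!("Adding numbers: {} + {} = {}", x, y, sum);
    sum
}

fn main() {
    assert_eq!(add(2, 3), 5); // the pure part is tested without any I/O
    assert_eq!(add_and_print(2, 3), 5);
}
```

This "functional core, imperative shell" layering recovers some of the testability benefits, but only discipline, not the type system, keeps the core pure.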

Error handling

Rust's explicit error handling through Result<T, E> removes the cognitive overhead of exceptions. Compare these approaches:

Haskell (with potential exceptions):

parseConfig :: FilePath -> IO Config
parseConfig path = do
    content <- readFile path  -- can throw IOException
    case parseJSON content of  -- parse failure converted to an exception
        Left err -> throwIO (ConfigParseError err)
        Right config -> return config

Rust (explicit throughout):

fn parse_config(path: &str) -> Result<Config, ConfigError> {
    let content = std::fs::read_to_string(path)?;
    let config = serde_json::from_str(&content)?;
    Ok(config)
}

In Rust, the ? operator makes error propagation clean while keeping the flow clear.
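The snippet above leans on serde and the filesystem; here is a self-contained sketch of the same ? propagation using only the standard library (the ConfigError variants and the "retries=" line format are made up for illustration):

```rust
use std::num::ParseIntError;

#[derive(Debug)]
enum ConfigError {
    BadNumber(ParseIntError),
    MissingField,
}

// This From impl is what lets `?` convert a ParseIntError
// into our ConfigError automatically at each propagation site.
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::BadNumber(e)
    }
}

// Parse a hypothetical "retries=N" config line into the numeric value.
fn parse_retries(line: &str) -> Result<u32, ConfigError> {
    let value = line
        .strip_prefix("retries=")
        .ok_or(ConfigError::MissingField)?; // Option -> Result, then `?`
    let retries = value.parse::<u32>()?; // ParseIntError converted via From
    Ok(retries)
}

fn main() {
    assert_eq!(parse_retries("retries=5").unwrap(), 5);
    assert!(matches!(parse_retries("timeout=5"), Err(ConfigError::MissingField)));
    assert!(matches!(parse_retries("retries=abc"), Err(ConfigError::BadNumber(_))));
}
```

Every failure path appears in the signature as Result, so a caller can see at a glance that this function can fail and in which ways.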

Unit tests as part of source code

Rust's built-in support for unit tests within the same file as the code being tested is convenient:

fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add() {
        assert_eq!(add(2, 3), 5);
    }
}

In Haskell, tests are typically in separate files:

-- src/Math.hs
add :: Int -> Int -> Int
add a b = a + b

-- test/MathSpec.hs
import Math
import Test.Hspec

spec :: Spec
spec = describe "add" $
    it "adds two numbers" $
        add 2 3 `shouldBe` 5

Rust's co-location makes tests harder to forget and easier to maintain. Additionally, in Haskell you often need to export internal types and functions from your modules just to make them accessible for testing, which can pollute your public API.

-- src/Parser.hs
module Parser
  ( parseConfig
  , InternalState(..)  -- exported only for testing
  , validateToken      -- exported only for testing
  ) where

data InternalState = InternalState { ... }

validateToken :: Token -> Bool
validateToken = ...  -- internal function we want to test

Rust's #[cfg(test)] attribute means test code has access to private functions and types without exposing them publicly.

Standard formatting

Rust's rustfmt provides a standard formatting tool that the entire community has adopted:

$ cargo fmt  # formats entire project consistently

In Haskell, while we have excellent tools like fourmolu and ormolu, the lack of a single standard has led to configuration debates:

-- Which style?
function :: Int -> Int -> Int -> IO Int
function arg1 arg2 arg3 =
    someComputation arg1 arg2 arg3

-- Or:
function ::
       Int
    -> Int
    -> Int
    -> IO Int
function arg1 arg2 arg3 =
    someComputation arg1 arg2 arg3

I have witnessed significant time spent on style discussions when team members are reluctant to adopt formatting tools.

Language server support

While Haskell Language Server (HLS) has improved significantly, it still struggles with larger projects. Basic functionality like variable renaming can fail in certain scenarios, particularly in Template Haskell heavy codebases.

rust-analyzer provides a more reliable experience across different project sizes, with features like "go to definition" working consistently even in large monorepos. One feature I'm particularly fond of is rust-analyzer's ability to jump into the definitions of standard library functions and external dependencies. This seamless navigation into library code is something I miss in HLS, where it is not currently possible even on smaller projects.

Another feature I extensively use in rust-analyzer is the ability to run tests inline directly from the editor. This functionality is currently missing in HLS, though there is an open issue tracking this feature request.

Compilation time

Despite Rust's reputation for slow compilation, I've found it consistently faster than Haskell for equivalent services. The Rust team has made significant efforts to optimize the compiler over the years, and these improvements are noticeable in practice. In contrast, Haskell compilation times have remained slow, and newer GHC versions unfortunately don't seem to provide meaningful improvements in this area.

Interactive development experience

I appreciate Haskell's REPL (Read-eval-print loop) for rapid prototyping and experimentation. Not having a native REPL in Rust noticeably slows down development when you need to try things out quickly or explore library APIs interactively. In GHCi, you can load your existing codebase and experiment around it, making it easy to test functions, try different inputs, and explore how your code behaves.

As an alternative, I have been using org babel in rustic mode for interactive Rust development. While this provides some level of interactivity within Emacs, it feels more like a band-aid than an actual solution for quickly experimenting with code. The workflow is more cumbersome compared to the direct approach of typing expressions directly into GHCi and seeing results instantly.

Sep 22, 2025

Author:

Haskell

Reflections on Haskell and Rust

Introduction

For most of my professional experience, I have been writing production code in both Haskell and Rust, primarily focusing on web services, APIs, and HTTP stack development. My journey started with Haskell, followed by working with Rust, and most recently returning to the Haskell ecosystem.

This experience has given me perspective on both languages' strengths and limitations in real-world applications. Each language has aspects that I appreciate and miss when working with the other. This post examines the features and characteristics that stand out to me in each language.

Variable shadowing

Rust's ability to shadow variables seamlessly is something I came to appreciate. In Rust, you can write:

let config = load_config();
let config = validate_config(config)?;
let config = merge_defaults(config);

This pattern is common and encouraged in Rust, making code more readable by avoiding the need for intermediate variable names. In Haskell, you would typically need different names:

config <- loadConfig
config' <- validateConfig config
config'' <- mergeDefaults config'

Haskell's approach is slightly harder to read, while Rust's shadowing makes transformation pipelines more natural.

Sum types of records

Rust's enum system, particularly when combined with pattern matching, feels more robust than Haskell's sum types of records. When defining sum types of records in Haskell, there is a possibility of introducing partial record accessors which can cause runtime crashes, though recent versions of GHC now produce compile-time warnings for this pattern:

data Person = Student { name :: String, university :: String }
            | Worker { name :: String, company :: String }

-- This can crash if applied to a Student
getCompany :: Person -> String
getCompany p = company p  -- runtime error for Student

-- Safe approach requires pattern matching
getCompany :: Person -> Maybe String
getCompany (Worker _ c) = Just c
getCompany (Student _ _) = Nothing

Rust eliminates this class of errors by design:

enum Person {
    Student { name: String, university: String },
    Worker { name: String, company: String },
}

fn get_company(person: &Person) -> Option<&str> {
    match person {
        Person::Worker { company, .. } => Some(company),
        Person::Student { .. } => None,
    }
}

Enum variant namespacing

Rust allows multiple enum types to have the same variant names within the same module, while Haskell's constructor names must be unique within their scope. This leads to different patterns in the two languages.

In Rust, you can define:

enum HttpStatus {
    Success,
    Error,
}

enum DatabaseOperation {
    Success,
    Error,
}

// Usage is clear due to explicit namespacing
fn handle_request() {
    let http_result = HttpStatus::Success;
    let db_result = DatabaseOperation::Error;
}

The same approach in Haskell would cause a compile error:

-- This won't compile - duplicate constructor names
data HttpStatus = Success | Error
data DatabaseOperation = Success | Error  -- Error: duplicate constructors

In Haskell, you need unique constructor names:

data HttpStatus = HttpSuccess | HttpError
data DatabaseOperation = DbSuccess | DbError

-- Usage
handleRequest = do
    let httpResult = HttpSuccess
    let dbResult =

Alternatively, Haskell developers often use qualified imports syntax to achieve similar namespacing:

-- Using modules for namespacing
module Http where
data Status = Success | Error

module Database where
data Operation = Success | Error

-- Usage with qualified imports
import qualified Http
import qualified Database

handleRequest = do
    let httpResult = Http.Success
    let dbResult = Database.

The Typename::Variant syntax in Rust makes the intent clearer at the usage site. You immediately know which enum type you're working with, while Haskell's approach can sometimes require additional context or prefixing to achieve the same clarity.

Struct field visibility

Rust provides granular visibility control for struct fields, allowing you to expose only specific fields while keeping others private. This fine-grained control is built into the language:

pub struct User {
    pub name: String,        // publicly accessible
    pub email: String,       // publicly accessible
    created_at: DateTime,    // private field
    password_hash: String,   // private field
}

impl User {
    pub fn new(name: String, email: String, password: String) -> Self {
        User {
            name,
            email,
            created_at: Utc::now(),
            password_hash: hash_password(password),
        }
    }

    // Controlled access to private field
    pub fn created_at(&self) -> DateTime {
        self.created_at
    }
}

Rust offers even more granular visibility control beyond simple pub and private fields. You can specify exactly where a field should be accessible:

pub struct Config {
    pub name: String,              // accessible everywhere
    pub(crate) internal_id: u64,   // accessible within this crate only
    pub(super) parent_ref: String, // accessible to parent module only
    pub(in crate::utils) debug_info: String, // accessible within utils module only
    private_key: String,           // private to this module
}

These granular visibility modifiers (pub(crate), pub(super), pub(in path)) allow you to create sophisticated access patterns that match your module hierarchy and architectural boundaries. Some of these patterns are simply not possible to replicate in Haskell's module system.

In Haskell, record field visibility is controlled at the type level, not the field level. This means you typically either export all of a record's fields at once (using ..) or none of them. To achieve similar granular control, you need to use more awkward patterns:

-- All fields exported - no control
module User
  ( User(..)  -- exports User constructor and all fields
  ) where

data User = User
  { name :: String
  , email :: String
  , createdAt :: UTCTime
  , passwordHash :: String
  }

-- Or no fields exported, requiring accessor functions
module User
  ( User        -- only type constructor exported
  , mkUser      -- smart constructor
  , userName    -- accessor for name
  , userEmail   -- accessor for email
  , userCreatedAt -- accessor for createdAt
  -- passwordHash intentionally not exported
  ) where

data User = User
  { name :: String
  , email :: String
  , createdAt :: UTCTime
  , passwordHash :: String
  }

mkUser :: String -> String -> String -> IO User
mkUser name email password = do
  now <- getCurrentTime
  hash <- hashPassword password
  return $ User name email now hash

userName :: User -> String
userName = name

userEmail :: User -> String
userEmail = email

userCreatedAt :: User -> UTCTime
userCreatedAt = createdAt

The Haskell approach requires writing boilerplate accessor functions and losing the convenient record syntax for the fields you want to keep private. Rust's per-field visibility eliminates this awkwardness while maintaining the benefits of direct field access for public fields.

Purity and Referential Transparency

One of Haskell's most significant strengths is its commitment to purity. Pure functions, which have no side effects, are easier to reason about, test, and debug. Referential transparency—the principle that a function call can be replaced by its resulting value without changing the program's behavior—is a direct benefit of this purity.

In Haskell, the type system explicitly tracks effects through monads like IO, making it clear which parts of the code interact with the outside world. This separation of pure and impure code is a powerful tool for building reliable software.

-- Pure function: predictable and testable
add :: Int -> Int -> Int
add x y = x + y

-- Impure action: clearly marked by the IO type
printAndAdd :: Int -> Int -> IO Int
printAndAdd x y = do
  putStrLn "Adding numbers"
  return (x + y

While Rust encourages a similar separation of concerns, it does not enforce it at the language level in the same way. A function in Rust can perform I/O or mutate state without any explicit indication in its type signature (beyond &mut for mutable borrows). This means that while you can write pure functions in Rust, the language doesn't provide the same strong guarantees as Haskell.

// This function looks pure, but isn't
fn add_and_print(x: i32, y: i32) -> i32 {
    println!("Adding numbers"); // side effect
    x + y
}

This lack of enforced purity in Rust means you lose some of the strong reasoning and refactoring guarantees that are a hallmark of Haskell development.

Error handling

Rust's explicit error handling through Result<T, E> removes the cognitive overhead of exceptions. Compare these approaches:

Haskell (with potential exceptions):

parseConfig :: FilePath -> IO Config
parseConfig path = do
    content <- readFile path  -- can throw IOException
    case parseJSON content of  -- direct error handling
        Left err -> throwIO (ConfigParseError err)
        Right config -> return config

Rust (explicit throughout):

fn parse_config(path: &str) -> Result<Config, ConfigError> {
    let content = std::fs::read_to_string(path)?;
    let config = serde_json::from_str(&content)?;
    Ok(config)
}

In Rust, the ? operator makes error propagation clean while keeping the flow clear.

Unit tests as part of source code

Rust's built-in support for unit tests within the same file as the code being tested is convenient:

fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add() {
        assert_eq!(add(2, 3), 5);
    }
}

In Haskell, tests are typically in separate files:

-- src/Math.hs
add :: Int -> Int -> Int
add a b = a + b

-- test/MathSpec.hs
import Math
import Test.Hspec

spec :: Spec
spec = describe "add" $
    it "adds two numbers" $
        add 2 3 `shouldBe` 5

Rust's co-location makes tests harder to forget and easier to maintain. Additionally, in Haskell you often need to export internal types and functions from your modules just to make them accessible for testing, which can pollute your public API.

-- src/Parser.hs
module Parser
  ( parseConfig
  , InternalState(..)  -- exported only for testing
  , validateToken      -- exported only for testing
  ) where

data InternalState = InternalState { ... }

validateToken :: Token -> Bool
validateToken = ...  -- internal function we want to test

Rust's #[cfg(test)] attribute means test code has access to private functions and types without exposing them publicly.

Standard formatting

Rust's rustfmt provides a standard formatting tool that the entire community has adopted:

$ cargo fmt  # formats entire project consistently

In Haskell, while we have excellent tools like fourmolu and ormolu, the lack of a single standard has led to configuration debates:

-- Which style?
function :: Int -> Int -> Int -> IO Int
function arg1 arg2 arg3 =
    someComputation arg1 arg2 arg3

-- Or:
function ::
       Int
    -> Int
    -> Int
    -> IO Int
function arg1 arg2 arg3 =
    someComputation arg1 arg2 arg3

I have witnessed significant time spent on style discussions when team members are reluctant to adopt formatting tools.

Language server support

While Haskell Language Server (HLS) has improved significantly, it still struggles with larger projects. Basic functionality like variable renaming can fail in certain scenarios, particularly in Template Haskell heavy codebases.

rust-analyzer provides a more reliable experience across different project sizes, with features like "go to definition" working consistently even in large monorepos. One feature I'm particularly fond of is rust-analyzer's ability to jump into the definitions of standard library functions and external dependencies. This seamless navigation into library code is something I miss in HLS, where it is not currently possible even on smaller projects.

Another feature I extensively use in rust-analyzer is the ability to run tests inline directly from the editor. This functionality is currently missing in HLS, though there is an open issue tracking this feature request.

Compilation time

Despite Rust's reputation for slow compilation, I've found it consistently faster than Haskell for equivalent services. The Rust team has made significant efforts to optimize the compiler over the years, and these improvements are noticeable in practice. In contrast, Haskell compilation times have remained slow, and newer GHC versions unfortunately don't seem to provide meaningful improvements in this area.

Interactive development experience

I appreciate Haskell's REPL (Read-eval-print loop) for rapid prototyping and experimentation. Not having a native REPL in Rust noticeably slows down development when you need to try things out quickly or explore library APIs interactively. In GHCi, you can load your existing codebase and experiment around it, making it easy to test functions, try different inputs, and explore how your code behaves.

As an alternative, I have been using org babel in rustic mode for interactive Rust development. While this provides some level of interactivity within Emacs, it feels more like a band-aid than an actual solution for quickly experimenting with code. The workflow is more cumbersome compared to the direct approach of typing expressions directly into GHCi and seeing results instantly.
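For reference, the workflow looks roughly like this: a source block in an Org file, evaluated in place with C-c C-c (the rustic block language assumes rustic-babel is enabled; the snippet itself is illustrative):

```org
#+begin_src rustic
fn main() {
    println!("{}", 2 + 2);
}
#+end_src
```

Each evaluation compiles and runs the block, which is why it feels heavier than typing an expression into GHCi.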

Haskell

Kolme: Architecture for Founders Who Want to Win

Build powerful products and accelerate blockchain development timelines with Kolme


Most blockchain platforms make you choose: speed or security, scalability or simplicity, performance or affordability. Kolme changes the equation, delivering 10x performance and accelerating development timelines at a fraction of the traditional cost. By combining a modular architecture with powerful integrations and a security model built for the realities of cross-chain apps, Kolme gives builders the advantage they’ve been waiting for.


1. Dedicated Chains: Built for Your App, Not Everyone Else’s

Every Kolme application runs on its own dedicated blockchain. That means:

- No congestion from other apps
- Consistent, low-latency performance
- Total control over throughput and fees

Whether you’re building a high-frequency trading platform, a cross-chain DEX, or a real-time betting protocol, Kolme ensures your users never suffer because of factors outside your control. This architecture doesn’t just improve UX: it reduces costs, removes the need for complex infra setups, and dramatically accelerates your time to market.

Why it matters: You get enterprise-grade performance without needing an enterprise-sized team. Launch in weeks, not months.


2. Triadic Security Model: Rethinking How Blockchains Stay Safe

Kolme introduces a three-group validator model that distributes power and adds multiple layers of verification:

- Listeners watch external chains (e.g., Ethereum, Solana) and confirm deposits or events.
- Processors execute transactions and build blocks in high-availability environments.
- Approvers verify and sign off on outbound actions like withdrawals.

Actions require quorum from multiple groups, ensuring that no single point of failure or actor can compromise your application. Upgrades and admin operations follow the same rule: no unilateral changes, but no bottlenecks either.

Why it matters: Kolme gives you battle-tested security rivaling top L1s, without locking you into inflexible governance models or compromising speed.


3. Seamless Integration: Faster, Smarter, Multichain by Design

Kolme makes integrating with real-world data and other blockchains seamless:

- Secure Data Feeds: Fetch from oracles (like Pyth, Chainlink), APIs, or custom sources. Data is signed, verified, and stored on-chain without delay.
- Multichain Bridges: Native support for Solana, Ethereum, and beyond allows users to deposit and withdraw in their preferred ecosystems.

You don’t need to rewrite your app every time a new chain becomes relevant: just plug in and go.

Why it matters: Kolme eliminates the traditional pain points of launching multichain products, helping you reach more users faster, with less code and fewer risks.


A Founder’s Competitive Edge

Kolme isn’t just technically impressive; it’s strategically designed to give startups and builders a real edge:

- 10x performance by eliminating shared bottlenecks
- Faster time to market, with architecture that’s ready out of the box
- Lower operational overhead, thanks to less time grappling with complex technical requirements
- Future-proof scalability, without rewrites or relaunches

In an industry where launching late or lagging on UX can kill momentum, Kolme gives founders the firepower to execute quickly, and win.


The Bottom Line

Kolme delivers the three things every serious web3 founder needs:

- Speed: Dedicated chains and low-latency infra make your app fast by default.
- Security: A modular, redundant validator model keeps your users safe.
- Scalability: Native multichain support and fast data ingestion future-proof your product.

All while cutting the cost and time usually required to get there. Kolme isn’t just a better blockchain architecture. It’s your unfair advantage.

Feb 27, 2025

Author: FP Block Staff

Haskell

FP Block Is Heading to Rare Evo 2025

Recent international tournaments and league championships have showcased the exceptional talent, determination, and resilience of female athletes, sending a powerful message of equality and empowerment to fans worldwide.

A landmark event in women’s soccer saw record-breaking attendance and viewership, with teams delivering thrilling performances that captivated millions. “This isn’t just a game—it’s a movement,” said head coach Elena Rivera of one of the top-ranked teams. The tournament’s success is driving increased investment in women’s sports, with sponsors and broadcasters recognizing the immense potential and growing fan base.

The positive momentum extends beyond soccer. In sports like basketball, tennis, and athletics, female athletes are setting new personal and world records, challenging long-held stereotypes and inspiring young girls to pursue their athletic dreams. Grassroots programs and school initiatives are being developed to nurture young talent and provide role models that represent success, perseverance, and leadership. “Seeing women excel on the world stage has a ripple effect—it motivates the next generation to dream big,” noted a sports educator.

Media coverage has also shifted in favor of women’s sports, with increased airtime and more in-depth reporting highlighting both on-field achievements and off-field contributions. Social media platforms are abuzz with stories of female athletes overcoming adversity, advocating for gender equality, and leading charitable initiatives. “Women in sports are not just athletes—they’re ambassadors of change,” stated a popular sports journalist.

This surge in support is translating into tangible improvements in sports infrastructure and funding. National sports federations are allocating more resources to women’s leagues, ensuring better training facilities, higher salaries, and more competitive opportunities. “Investment in women’s sports is an investment in our future,” said Rivera. The success stories emerging from these leagues are reshaping the sports landscape and fostering an environment where equality and excellence go hand in hand.

The global celebration of women’s sports is also promoting cultural exchange and international camaraderie. Major tournaments now serve as platforms for dialogue about gender equality, human rights, and the power of sports to bridge divides. As nations come together to celebrate athletic achievement, the unity and shared passion for the game are inspiring change at every level.

The ongoing evolution of women’s sports represents a beacon of hope and progress, signaling that the future of athletics is inclusive, dynamic, and transformative. With every match and every record broken, female athletes are not only winning on the field—they are paving the way for a more equitable and empowered world.

Feb 27, 2025

Author: FP Block Staff

Haskell

The Kolme Manifesto: Build 10x Blockchain Products — 10x Faster

In a world where innovation moves at lightning speed, blockchain development has too often been a bottleneck, slowed not by performance issues but by the overwhelming complexity of getting from idea to market. Time-to-market is everything, yet many teams remain creatively constrained by outdated blockchain development processes, rigid smart contract logic, and unreliable external data integrations. That changes today.


The Problem: Navigating Blockchain’s Technical Compromises

Building on blockchain doesn’t just mean writing smart contracts. It means navigating a landscape of forced compromises. Current blockchain development demands awkward architectural workarounds to satisfy the inherent constraints of systems that weren’t designed for speed or flexibility; they were designed for consensus and permanence.

For product builders, this creates a drag on development velocity and injects technical debt before a product even sees daylight. The result? Bloated timelines, ballooning budgets, and burned-out teams stuck retrofitting innovation into infrastructure that simply doesn’t fit.


Enter Kolme: Blockchain, Reimagined for Builders

Kolme, built by the team at FP Block, is a breakthrough framework engineered to eliminate the structural frictions of blockchain development. It rethinks how dApps and decentralized systems are built, not by offering “faster” versions of the same old tools, but by rejecting the faulty premises that have defined Web3 architecture for years.

Kolme retains the transparency and reproducibility of blockchain, but strips away the shared state and infrastructure overhead that bogs teams down. It’s simple, composable, and product-focused, letting you ship faster, cleaner, and smarter, without any performance or security compromises.


Why Kolme?

Blockchain development has been a bottleneck: slow, rigid, and frustrating. Most teams burn through runway before their product ever reaches users. Kolme eliminates those barriers by rethinking blockchain from the ground up. It delivers the speed of modern web apps, the resilience and security of top-tier chains, and seamless multichain access, all in one framework designed for builders who want to move fast. Here’s how:

- Dedicated Blockchains: Every Kolme app runs on its own public blockchain. That means no blockspace congestion and no throughput bottlenecks: just instant, predictable execution built for performance at scale.
- Complete Customisability: Forget about transaction fees, execution limits, or waiting on block times. Kolme removes those constraints entirely, so your app runs fast, free, and without compromise.
- The Rust Advantage: Build complex features and iterate quickly, without needing to fit your idea around unsuitable smart contract logic.
- Multichain by Default: Kolme apps connect seamlessly to external blockchains through secure bridge contracts that make your product accessible to users across ecosystems, without sacrificing safety or speed.
- External Chain Resilience: Kolme apps stay live even when other chains go down. External chains are only used for deposits and withdrawals, so your core product keeps running no matter what.
- Verifiable Transparency: Every action is recorded on your app’s public blockchain. That means full auditability for users, validators, and external reviewers. Trust through transparency is built in.

With Kolme, you’re no longer adapting your product to fit the blockchain. The blockchain adapts to your product. That’s how you build 10x faster. That’s how you build 10x better.


The Kolme Advantage

- Speed Without Compromise: Kolme streamlines the development lifecycle from idea to deployment, without forcing trade-offs in performance or decentralization where it matters. You build at the speed your product demands, not at the pace at which you can work around legacy requirements.
- Save Time and Money: Kolme eliminates the inefficiencies of the traditional Web3 stack, letting you ship features instead of accumulating technical debt caused by architectural constraints. That means lower engineering overhead, shorter time-to-revenue, and more room to iterate.
- Focus on What Matters: Stop bending your product to fit the blockchain. Kolme flips the equation: it bends blockchain to fit your product. Say goodbye to technical debt created by wrestling with unsuitable environments, and start using your resources to build what your users need.
- Better Products, Built Better: Kolme’s architecture is built with security, modularity, and scalability in mind. That means your blockchain product isn’t just faster to build; it’s ready to scale and stand up to real-world demands.


It’s Time to Build Faster with Kolme

The days of building blockchain products on top of broken assumptions are over. Kolme is for founders, developers, and product leaders who are done compromising. It’s for those who want to ship ideas faster than ever before, without fighting uphill against outdated blockchain development processes, workflows, and models.

Build faster. Build cleaner. Build smarter. Build with Kolme.


About FP Block: FP Block is a leading blockchain engineering firm. With a team of experienced engineers and over 100 successful projects delivered, FP Block is dedicated to engineering the future of decentralized technologies through innovative and secure solutions. The company’s engineering team combines deep technical expertise with practical business acumen to deliver reliable, secure, and scalable blockchain solutions for enterprise clients.

Feb 27, 2025

Author: FP Block Staff
