Richard Churchman

I am Richard Churchman, a software architect from Paphos, Cyprus.

Welcome to my – unapologetically basic – contact page. Here you will find a brief foreword, some contact information, and not much more.

I architect, develop and implement horizontally scalable, high-throughput software for real-time transaction and event monitoring. My work tends to span software development, data engineering, advanced analytics, machine learning (supervised and unsupervised classification) and, increasingly, AI (to the extent it differs from machine learning: integration, embedding and inference).

I have, over the years, been lucky to work with hundreds of clients across the world – in banks, pharmaceutical logistics, airlines and startups – although I have tended to spend most of my time on risk problems.

My preferences in software architecture

In the back end, I work with C# and TypeScript, which imply .NET and Node, respectively. Irrespective of language and runtime, I implement straightforward patterns: my code has a Controller, a Service Layer, a Data Layer, a Repository Layer, always an ORM, sometimes quasi-CQRS and, of course, migrations – but never anything more complex than that, with an emphasis on observability (e.g., Interlocked.Increment counters), verbose logging and pleasant, useful documentation. I tend to hive off model recall (i.e., advanced analytics and machine learning scoring) to microservices, although I am sensitive to microservice sprawl.
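The layering above can be sketched in a few lines. This is an illustrative, minimal TypeScript sketch – all names (MonitoredEvent, EventService, and so on) are hypothetical, the repository is in-memory rather than ORM-backed, and the counters stand in for the .NET Interlocked.Increment style of observability (in single-threaded Node, a plain increment is already atomic):

```typescript
// Hypothetical domain type for the sketch.
type MonitoredEvent = { id: string; amount: number };

// Observability: plain monotonic counters incremented as work flows through.
const counters = { received: 0, persisted: 0 };

// Repository layer: persistence behind an interface (an ORM would sit here).
interface EventRepository {
  save(e: MonitoredEvent): void;
  count(): number;
}

class InMemoryEventRepository implements EventRepository {
  private rows: MonitoredEvent[] = [];
  save(e: MonitoredEvent): void {
    this.rows.push(e);
    counters.persisted++;
  }
  count(): number {
    return this.rows.length;
  }
}

// Service layer: business rules only – no transport or storage details.
class EventService {
  constructor(private repo: EventRepository) {}
  ingest(e: MonitoredEvent): void {
    counters.received++;
    if (e.amount <= 0) throw new Error("amount must be positive");
    this.repo.save(e);
  }
}

// Controller: translates transport input into service calls.
class EventController {
  constructor(private service: EventService) {}
  post(body: MonitoredEvent): { status: number } {
    this.service.ingest(body);
    return { status: 202 };
  }
}

const controller = new EventController(
  new EventService(new InMemoryEventRepository())
);
const response = controller.post({ id: "evt-1", amount: 120 });
```

Each layer depends only on the one beneath it, which is what keeps the pattern straightforward to test and to observe.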

In the front end, I work with Blazor and Angular, as might be expected given the languages I declare myself comfortable with. For persistence, I typically use PostgreSQL (e.g., configuration, serialized transactions and idempotency index tokens), Redis (or Valkey) for caching, RabbitMQ for RPC messaging, and Kafka when replayable messaging is needed.
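The idempotency-token idea mentioned above is worth a sketch. In PostgreSQL the guarantee would come from a unique index (e.g., `CREATE UNIQUE INDEX ON processed (idempotency_token)`, so a retried insert fails); here a Set stands in for that index so the sketch stays self-contained, and the names are illustrative only:

```typescript
// In-memory stand-in for a PostgreSQL UNIQUE index on an idempotency token.
const processedTokens = new Set<string>();

// Run `work` at most once per token: a duplicate (e.g., a client retry)
// is detected and skipped, mirroring a unique-constraint violation.
function processOnce(token: string, work: () => void): "applied" | "duplicate" {
  if (processedTokens.has(token)) return "duplicate";
  processedTokens.add(token);
  work();
  return "applied";
}

let balance = 0;
const first = processOnce("txn-42", () => { balance += 100; });
const retry = processOnce("txn-42", () => { balance += 100; }); // client retry
```

In the real pattern the token insert and the business write share one serialized transaction, so the duplicate check and the side effect commit or roll back together.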

I enjoy using Python and R, but am disinclined to use them for anything other than data analysis or training portable ML models – except, on occasion and only in the case of Python, as an alternative to shell scripting.

I’m a proponent of containerisation, and writing Dockerfiles and Compose setups is second nature. Being decent with Git is implicitly understood.

My relationship with cloud infrastructure

While I don’t hold myself out as a cloud infrastructure expert, my work makes it largely unavoidable. I use AWS, Azure and DigitalOcean, and am comfortable with compute options (VMs, FaaS, containers), managed databases, caches, storage types (block/blob), virtual networks, floating IPs and load balancers.

Ok, I’ll bite: a note about AI

I have a practical background in advanced analytics and machine learning; these days, however, a significant amount of my time goes into the arrangement and fine-tuning of Large Language Models (LLMs), Visual LLMs and Embedding Models inside AI \ Data Pipelines. I am accustomed to data preparation and feature engineering for traditional advanced analytics and machine learning (e.g., Neural Networks), and my observation is that, for AI as currently understood, that need is no less acute. It follows that my AI architectures skew towards AI \ Data Pipelines, which typically comprise tokenised context (structured data and traditional feature engineering), document text extraction, image extraction, image analysis with Visual LLMs, LLM inference and, finally, LLM-as-a-judge techniques. I favour small, targeted AI \ Data Pipelines that maintain a narrow context (i.e., contextual information, extracted text and processing instructions). I strive to use the smallest language model available (e.g., Phi-4-mini) with good mathematical and language reasoning and, crucially, the ability to fine-tune (while Retrieval-Augmented Generation, with embeddings, is often unavoidable given training latency, it is a compromise).
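The pipeline shape described above reduces to composing small stages over a narrow context. The sketch below is purely illustrative: every stage name is hypothetical, and the extraction, inference and judge steps are deterministic stand-ins for what would really be an extraction library, a small fine-tuned LLM and a judge model:

```typescript
// A narrow, explicit context object passed stage to stage.
type PipelineContext = {
  features: Record<string, number>;
  text: string;
  answer?: string;
  verdict?: "accept" | "reject";
};

type Stage = (ctx: PipelineContext) => PipelineContext;

// Tokenised context: structured data plus traditional feature engineering.
const assembleContext: Stage = (ctx) => ({
  ...ctx,
  features: { ...ctx.features, amountZ: 1.7 }, // stand-in engineered feature
});

// Document text extraction stand-in.
const extractText: Stage = (ctx) => ({ ...ctx, text: ctx.text.trim() });

// Small-model LLM inference stand-in.
const infer: Stage = (ctx) => ({
  ...ctx,
  answer: ctx.features.amountZ > 1 ? "flag" : "pass",
});

// LLM-as-a-judge stand-in: accepts or rejects the model's answer.
const judge: Stage = (ctx) => ({
  ...ctx,
  verdict: ctx.answer === "flag" ? "accept" : "reject",
});

// A pipeline is nothing more than the ordered composition of its stages.
const runPipeline = (stages: Stage[], ctx: PipelineContext): PipelineContext =>
  stages.reduce((c, s) => s(c), ctx);

const result = runPipeline(
  [assembleContext, extractText, infer, judge],
  { features: {}, text: "  wire transfer memo  " }
);
```

Keeping each stage a pure function over the context is what makes the pipelines small, testable and cheap to reason about.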

I am extremely sensitive to AI \ Data Pipeline and inference costs, and have enacted some highly creative client infrastructure – with no reduction in outcome – to ensure that AI value propositions can be profitable.

Suffice it to say, I don’t just sling ChatGPT wrappers.

My Bread and Butter

I spend some of my time on the development of, and training on, Jube – open-source Anti-Money Laundering (AML) and Fraud Detection transaction monitoring software that I wrote and maintain. In keeping with Jube's focus, I stay up to date with AML regulations and maintain AML Monitoring Compliance Guidance to ensure general compliance.

Also under the umbrella of Jube – work I find enormously enjoyable – I deliver Advanced Analytics with R Training and maintain Advanced Analytics with R Guidance in support of it. I have – somewhat perplexingly – found myself keeping abreast of Basel I, II and III in the context of credit analytics, which enriches my Advanced Analytics with R Training with regulatory context.

I am often engaged on critical technology projects in need of seniority to get them "over the hump", which usually implies the development of bespoke software under IP assignment. Commercially, my projects are structured similarly: hourly billing, ticket-based work and milestone rollups by agreement. I adore delivering such work; however, I engage only on trusted introduction.

I mostly work remotely, though I occasionally travel for on-site training where remote delivery is less effective.

I resolutely stick with Fedora Linux and prefer JetBrains IDEs (Rider, WebStorm, DataGrip, DataSpell, PyCharm). When needed, I use RStudio, JIRA, Confluence, Brave Browser, Brave Talk (meetings), Proton (email/calendar), Matrix (chat) – and little else.

Contact

If you'd like to discuss a project, explore a collaboration, or just connect, feel free to reach out directly: