Blog Most Popular Insights

Why Buy Now, Pay Later is Develop Now, Be FinNative


David Symons

Back in the day, I was a full-time student and a part-time, pocket-money-takin’ teenager.

Buying video games in the 1990s was a romantic affair – weeks, sometimes months, of visiting a store, picking up the video game box, looking at the back and dreaming of the day I’d be able to afford it. Either my parents would cave to spoiling their child, or I’d have to save up my pocket money.

Target had this great service called ‘Layby’. I could put down a deposit and make regular payments until it was paid off. There were no online orders, no next-day delivery and no automated payments. You had to take your receipt in-store and pay on time and with cadence. The things we did for video games!

Fast forward to 2022: Laybuy, Afterpay / Clearpay, Klarna and even PayPal are in on the Buy Now, Pay Later (BNPL) gold rush. These companies have a combined value in the hundreds of billions of dollars.

How was this possible? 

We’ve known since 2011 that software is eating the world – and it continues to do so. Legacy BNPL could never scale the way it does now: that scale is only possible with software delivery running globally in the cloud, built on cloud native software engineering and architectural practices.

Through software development and delivery to the cloud, companies have been able to become ‘FinNative’ – Financially Native – in that they no longer need to rely on centralised payment gateways (more like gatekeepers) to process transactions, aka accept payments.

 Now, through the drop-in of a few lines of code, any business can integrate in minutes with Stripe, PayPal and the BNPL pièce de résistance – Klarna.
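To illustrate the “drop-in of a few lines of code” claim, here is a minimal Python sketch. The helper and its parameter names are modelled on the shape of Stripe’s PaymentIntent API, but this is an illustration rather than Stripe’s SDK: it only builds the request payload that real integration code would send to the provider, authenticated with a secret API key.

```python
# Illustrative only: mirrors the shape of a card-payment request
# (amount in minor units, currency, payment method) without calling
# any real payment provider.

def build_payment_intent(amount_pounds: float, currency: str = "gbp") -> dict:
    """Prepare a card payment request. Card networks expect amounts in
    minor units (pence for GBP), so £19.99 is sent as 1999."""
    if amount_pounds <= 0:
        raise ValueError("amount must be positive")
    return {
        "amount": round(amount_pounds * 100),  # convert to minor units
        "currency": currency,
        "payment_method_types": ["card"],
    }

print(build_payment_intent(19.99))
# → {'amount': 1999, 'currency': 'gbp', 'payment_method_types': ['card']}
```

In a real integration, this payload (plus your API key) is all the provider needs – which is exactly why a business can be accepting payments within minutes.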

How can your business become FinNative?

  • Be Cloud Native – pick a cloud, any cloud. A foot in the door is worth more than one shoe outside. Unless you’re breaking the cloud, you probably don’t need to be on-premise.
  • Software Development Teams – developers are the new kingmakers. Be proud of your developer experience. Developers are your first-class citizens.
    • SDLC – Agile? Scrum? Kanban? Scrumban? Optimise, rinse and repeat. SDLC is the DNA of your software delivery.
  • CI/CD – if you don’t know what CI/CD means, you may already be too late. If SDLC is the DNA, CI/CD is the blood. Delivery frequency is one of the biggest differentiators between winners and losers in business.
  • Shift Left – test, rinse and repeat. Code reviews, code quality, unit testing, integration testing, security checks, compliance and policy enforcement. Especially if you’re FinNative, don’t short-change what’s left (pun intended).
  • Build Once, Scale Everywhere – Kubernetes has forever changed the application delivery landscape. Now, you can have what Google runs on in a matter of minutes, running on any public cloud (AKS, EKS, GKE, et al.). The best way to do this is with GitOps, and the best way to do GitOps is with Weave GitOps.

Where to start?

For most businesses, you won’t be starting from zero. You may have an existing team, paying customers, and cloud infrastructure. The challenge is the “what’s next” – how to improve, grow and scale.

Kaizen (continuous improvement) is the antidote to most business challenges. Constant and continuous improvement – 1% better each and every day – moving towards your North Star of becoming FinNative; moving from a priori to a posteriori.

In other words, the “unknown unknowns” of business growth are resolved by the fast feedback loops of a well-oiled, well-architected, cloud native software delivery machine.

Register for our upcoming webinar: Accelerating Payments Modernisation with GoCardless and Elavon

How to start?

The best way for your business to start is to accelerate the journey to becoming FinNative. Think of it as akin to taking a flight versus driving: while you may have the capacity to get where you’re going, it’s more effective and efficient to accelerate the journey, especially in the modern landscape of cloud technology. Why wait?

esynergy helps build products, services and platforms that accelerate value for clients and are key to realising their business goals. We work with stakeholders, understand your requirements and define the cadence for value delivered, e.g. two-week sprints. This ensures value is delivered effectively and continuously (because kaizen), and those 1% improvements compound over time.

We bring the expertise – especially in the ecosystem of becoming FinNative – notably:

  • Security (Shifting Left) and CI/CD
  • Data Processing, Encryption and Storage
  • Two-Factor Authentication for transactions
  • PCI DSS, GDPR and other regulatory compliance

And that’s just to name a few. esynergy helps you accelerate your journey to becoming FinNative.


Start the journey to a more impactful future


Is AI the key to securing your data?


Cybersecurity has always been a race between attack technology and defence technology. Today, the tech leading both sides of the race is artificial intelligence (AI). Companies are layering AI into their IT networks to secure their data in the cloud, while criminals are adopting ever-more sophisticated AI capabilities.

Criminal and non-criminal organisations use AI technologies such as machine learning, smart automation and virtual modelling in a similar way and for similar reasons. Both sides incorporate AI components into their existing apps and infrastructures to add visibility, insight and efficiency. Hackers, for example, can buy AI components via the dark web to modernise malware such as TrickBot, a six-year-old Trojan that now boasts smart automated capabilities and is many times more dangerous than it used to be.

But before we get carried away with nightmare tales of weaponised AI and automated data theft, let’s explore why, and how, emerging technologies such as AI are essential for protecting your data in 2022. 

How AI keeps your cloud data safe

Any organisation reading about these nefarious new AI-powered threats may wonder whether it’s wise to allow sales data, business plans, customers’ personally identifiable information (PII) and other sensitive data anywhere near the cloud – especially now that companies face greater liability than ever for PII under GDPR and the Data Protection Act 2018. There’s just too much at stake, particularly in sectors such as banking, healthcare and government.

We understand those fears. But at esynergy we firmly believe that an organisation must embrace a move to the cloud in order to be competitive and to provide the kind of service that customers expect in 2022. And rather than making your data more vulnerable, the cloud actually keeps it more secure.

This is because cloud providers enable businesses of all shapes and sizes to embed the very latest security technologies to guard their data while also unlocking its value. Services such as Microsoft Azure, Amazon’s S3 and Google’s cloud storage allow you to embed machine learning components, smart automation tools and other AI tech seamlessly into business operations and then scale them as needed. Once in place, those capabilities update instantly and automatically, so they always provide the latest benefits and protections.

Azure’s secure research environment for regulated data is a great example of how AI keeps data safe automatically, while freeing users to work with that data. Originally created for higher education institutions, the architecture can be used in any industry that requires data to be isolated securely for research, such as finance and medicine. The dataflow process is seamless and secure. When the client organisation uploads their data, Azure’s architecture automatically encrypts it, removes PII, creates a copy in a secure environment, deletes the original, allows privileged virtual desktop access, and then adds AI capabilities such as training the data set and managing machine learning models. Approved researchers are able to focus on their work with the data, while the AI security components continuously monitor the workload and its environment to discover and mitigate risks before they can do any damage.
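The dataflow described above can be sketched in plain Python. Every function name here is a hypothetical stand-in for a managed service step; the point is the order of operations (encrypt, strip PII, copy, delete the original), not any real Azure API.

```python
# Illustrative sketch of a secure research-environment dataflow.
# All function names are hypothetical stand-ins for managed services.

def encrypt(record):
    # Stand-in for encryption at rest; here we just mark the record.
    return {**record, "encrypted": True}

def remove_pii(record):
    # Stand-in for PII scrubbing: drop directly identifying fields.
    return {k: v for k, v in record.items() if k not in {"name", "email"}}

def ingest_regulated_data(raw_records):
    encrypted = [encrypt(r) for r in raw_records]   # encrypt on upload
    scrubbed = [remove_pii(r) for r in encrypted]   # remove PII
    secure_copy = list(scrubbed)                    # copy into secure environment
    raw_records.clear()                             # delete the original
    return secure_copy                              # ready for approved researchers

records = [{"name": "Ada", "email": "ada@example.com", "reading": 42}]
print(ingest_regulated_data(records))  # → [{'reading': 42, 'encrypted': True}]
print(records)                         # → [] (original deleted)
```

Researchers only ever see the scrubbed, secure copy; the raw upload no longer exists by the time access is granted.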

Unlike legacy security tools, AI doesn’t have to be told what threats to look out for. Instead, it uses machine learning to automatically detect anomalies, and then mitigate threats before they get anywhere near your data.
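The contrast with signature-based tools can be shown with a minimal, stdlib-only sketch: instead of matching known threat signatures, we learn what “normal” looks like from a baseline and flag anything that deviates too far. Production systems use far richer machine learning models than this z-score test, but the principle is the same.

```python
# Minimal anomaly detection: flag observations that deviate sharply
# from a learned baseline, with no prior knowledge of the threat.
from statistics import mean, stdev

def find_anomalies(baseline, observations, threshold=3.0):
    """Return observations more than `threshold` standard deviations
    from the baseline mean (a classic z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observations if abs(x - mu) / sigma > threshold]

# Baseline: typical requests-per-minute during normal operation.
normal_traffic = [98, 102, 101, 99, 100, 103, 97, 100]
print(find_anomalies(normal_traffic, [101, 99, 450, 102]))  # → [450]
```

The detector never needed a signature for the 450-requests-per-minute spike; it was flagged purely because it didn’t look like normal.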

“Our products use machine learning algorithms, trained on millions of malware samples, to identify threats that we haven’t seen before,” says Adam Kujawa, Security Evangelist at Malwarebytes. “There are also AI tools that help with network monitoring and log analysis, and can inform IT staff of a problem as soon as possible. The AI might miss the first attack, but then share that knowledge with other AI and learn from it, creating new ways to detect the new attack and so on.”

Kujawa also credits AI with speeding up and automating the detection of phishing attacks, social engineering attempts and malware infections. “If all is working well, the user won’t encounter threats at all, and the battles will be at lightning speed, computer versus computer.”

Why AI attack needs AI defence

Cyber criminals love emerging technologies. AI and machine learning add visibility to their operations (for example by revealing security holes), guide decisions (such as how and when to attack), and automate data fraud at a scale and speed that even the most energetic old-school con artist would never manage. These capabilities are not exactly hard to come by. There’s every chance that your own phone is equipped with AI sophisticated enough to create a deepfake that’d fool a CEO’s mum. 

Worse still, gangs have sabotaged companies’ own automation systems to generate data fraud at industrial scale, and even sabotaged machine learning data sets to make them generate inaccurate or dangerous decisions. This so-called ‘AI poisoning’ can render cybersecurity systems useless, with grim implications for global security. 

The best protection against AI attack is AI defence. AI and machine learning are the only scaling factors that can supervise these systems effectively in real time. The AI security tools embedded in many cloud providers’ services are more than up to the job of guarding your data from advancing threats, especially with their machine learning algorithms learning constantly to spot possible threats across your entire database and network. 

“Artificial intelligence can spot the breadcrumbs of sophisticated attacks,” says Max Heinemeyer, VP of cyber innovation at security firm Darktrace. “It can autonomously interrupt the in-progress threats it detects at every stage, whether that be a digital fake email created by AI or stealthy lateral movement, all without business disruption.”

AI is not only the easiest and most effective way to secure your data in the cloud, it’s also an essential defence against emerging threats – including attacks that make use of AI to achieve devastating scale. Find out more about how we can design a scalable, modern strategy to protect your data in the cloud. 


Transforming HMRC with DevOps innovation


With 45 million individual customers and 5 million business customers, HMRC handles £2.3 billion in transactions and collects £636.7 billion in annual revenue. Behind the scenes there are 4,800 IT professionals facilitating this mass of complex activity, with a large number now dedicated to digital projects. These factors frame the organisation’s critical need for DevOps practices that enable the delivery of innovative applications and services at high velocity.

A revolutionary solution came in the form of the multi-channel digital tax platform (MDTP), an initiative that eSynergy helped to build and operate. We recently caught up with Ben Conrad, the Head of Agile Delivery at HMRC, who explained how MDTP has facilitated innovative digital tax services and what has been learned along the way. In this article, we will explore the valuable insights shared by Ben and take a closer look at HMRC’s DevOps journey.

The dawn of MDTP

Action was first taken to bring about MDTP in 2013, at which time HMRC was merely equipped with some limited Java services for purposes like the completion of self-assessments. Ben explained that these services ‘were hosted on physical servers and struggled each year to meet the demands of the self-assessment peak.’ With it becoming harder to facilitate this crucial event in the business calendar, the Government Digital Service (GDS) initiated the exploration of digital solutions to bring about a critical step change.

The building of these new services called for somewhere to run them that offered adequate scalability, connectivity to HMRC systems, and the appropriate tools and functionality needed by the developers. By 2016, MDTP was “multi-active” across two cloud providers, and Ben described this multi-cloud milestone as being ‘technically quite impressive’ at the time. In 2017, this approach was optimised via a migration to AWS.

Recognition from the Father of the Web

The platform has become highly effective and thrives on open collaboration with multi-disciplinary teams. Slack plays a vital role in HMRC’s DevOps approach, with 2,400 people across 1,600 channels sharing over 600,000 messages every month. This agile, self-service approach that typifies MDTP allows for 350 deployments to be made each week.

Ben stated that ‘we must be doing something right, because the approach we take is now a case study in the second edition of the DevOps Handbook, with particular regard to the role MDTP played in enabling the economic response to the COVID-19 pandemic.’ In addition to this high-profile reference, Tim Berners-Lee, the inventor of the World Wide Web, commended the team and their system on Twitter.

What does DevOps enable for HMRC today?

MDTP now hosts and supports over 250 digital services, a task which demands continuous improvements to resilience and efficiency. By working with DevOps engineers to write the code and automate testing procedures, HMRC has been able to gain excellent visibility via logs and metrics for the first time.

It is because of the dynamic structure of MDTP, Ben explained, that ‘we are able to provide a platform that is relied upon by so many other delivery groups within HMRC.’ With every element of the system’s infrastructure defined in code on GitHub, the entire process has become auditable, enabling over 1,200 microservices to be run and supported across more than five environments.

Fuelling progress with capability and culture

Specialist skills have been and continue to be crucial to the success of MDTP according to Ben, who said that ‘the great support from esynergy and other suppliers that we work with has enabled us to make this a real success.’ There are approximately 80 people working on the platform teams at present, with the involvement of 16 Civil Service engineers and a range of apprentices.

Alongside talent, Ben also emphasised the importance of culture, which he explained ‘is extremely important to me, and is something that can easily be taken for granted.’ He told us that the powerful DevOps culture that has been achieved requires massive and constant effort to uphold and is another benefit that comes from working with a mix of suppliers. It is this culture, combined with Agile methods and DevOps practices, to which Ben attributes the ongoing success of MDTP.

Continuous improvement

MDTP has broken new ground, achieved huge success, and brought many key HMRC services up to date, but there are always improvements to be made. Ben discussed the ongoing need to break down barriers to collaboration, which will require the development of shared ways of working across the various elements of HMRC.

The Head of Agile Delivery finally focused on the importance of maintaining the freedom to do the right thing, stating that ‘a big part of my job is protecting the freedoms we have to operate in the way that we do,’ which is essential to innovation and progress. Above all, Ben is tasked with building a vision of what to do next, and he expressed a strong determination not to rest on the laurels of MDTP’s successes to date.


It is time to think about Design Thinking


Mark Chillingworth

The term design thinking conjures thoughts of Sir Norman Foster and fellow architects hand drawing the designs for a structure of beauty like the Millau Viaduct or Jony Ive developing the Apple iPhone. Ive was responsible for both the exterior design and the software interface. In truth, design thinking is about the end-to-end user experience. Therefore, design thinking is about people, customers and what we clumsily call users in the technology sector.


In tandem with Agile working methods, design thinking has created a completely new technological culture. A culture that studies and seeks to constantly understand those using technologies and shapes the service to ensure the outcomes meet the user’s demands. That outcome could be a financial transaction, a civic service from a government department or a business process. As a people-centric ethos, design thinking reduces friction – which is the core value technology should always bring to an organisation.


So what is design thinking, and why should CIOs and technology teams be adopting the method? “The methodology is ideally suited for IT teams looking to transform and take a leadership role in driving innovation in products, services, processes, or business models,” says Chris Goodhue, VP, CIO Strategy and Executive Advisory Services at IDC, a technology analyst house. IDC adds: Design thinking focuses heavily on why people would want and use a product or service — their motivation — and what they expect to gain from that use — their reward. The goal of design thinking is typically the conceptualisation or improvement of a product, process, experience, or outcome. Fellow technology analysts Gartner agree and describe design thinking as a problem-solving process.


At the heart of the problem-solving ethos of design thinking is to always be considering people, whether citizens, customers or colleagues. Technologists using design thinking invest time and energy into observation and analysis of user behaviour. They take this insight to create services that are tailored to the customer or user’s needs.


In today’s digital economy, organisations, particularly those in retail, financial services and the public sector, need to prioritise customer-focused design thinking. Many of the market challengers that have disrupted retail and financial services have used design thinking to win customers. Coupled with the decreasing barriers to entry afforded by enterprise cloud computing infrastructure, design thinking provides market challengers with a major advantage in securing customers. In the public sector, meanwhile, citizens are increasingly losing trust in traditional pillars of society; one of the main reasons for this is that they feel civic bodies do not understand them. Public services that use design thinking demonstrate that they understand the citizen and their situation. This was seen to great effect in the UK during the early months of the COVID-19 pandemic, when Her Majesty’s Revenue and Customs (HMRC) developed a trio of technology solutions to support furloughed workers and the self-employed. HMRC used design thinking for the usability of these technologies, which protected families and reduced the impact of the pandemic on the economy.


“It’s about finding out peoples’ behaviour, motivations and needs and coming up with solutions and services to match,” Gartner analyst Marcus Blosch told CIO magazine recently.

Cut the friction 

With an understanding of motivations, needs and behaviours, design thinking will reduce friction in technology services to the customer, citizen or end-user. Friction ultimately reduces the quality of service to everyone. If customers experience problems online, they can easily move to another provider. A citizen cannot change public sector suppliers but may stop dealing with a department and miss out on opportunities, which ultimately undermines the purpose of that department.


If employees experience technological obstacles, they can deploy shadow IT, whose workarounds may meet the needs of a customer or citizen today, but ultimately open the organisation up to disjointed data, additional technology operating costs and security risks.


Design thinking was utilised by esynergy to evaluate 10 business cases and the impact of automation, new integrations and an increase in scale of the e-commerce platform, an outbound citation application that manages sold stock, and the warehouse management applications at an international e-commerce retailer. The three technologies were increasing in scale, cost and complexity for the online retailer.


Combining Agile discovery methods with design thinking enabled esynergy to merge elements of these operational domains and understand the potential value and risks of having tighter integration and automation across user journeys. This has led to productivity and efficiency improvements and new opportunities to deliver the perfect order at scale.


Design thinking is now being used to incrementally deliver a new architecture with improved end-to-end user journey integration and cost reduction for exception management at the online retailer.


Whether retailers, financial services providers, or public sector bodies, customers and team members have high technology expectations. Design thinking allows technology teams to change their culture, move up the business value chain and delight customers and colleagues; design thinking does, after all, build bridges.


Technical Debt’s Hidden Cousin: Dark Debt


Russ Miles

When we build complex, software-based systems we are dealing in pure thought-stuff. Most of the time we cannot see, touch or hit (although sometimes we’d like to) the edifices we are constructing. Is it any wonder we love a good metaphor to help us navigate our daily jaunts into the ethereal?

Metaphors help us navigate the shadow-lands of software systems by giving us mental waypoints and handholds that keep us focussed on the right problem to solve at the right time. From virtual desktops to whiteboard designs, it’s metaphors that help us find our way when the challenges are at their most scurrilous and this is an article about just such a metaphor: Dark Debt.

To start exploring Dark Debt, we first need to look at the flip side of the coin (to unashamedly use another metaphor to explain a metaphor … a meta-metaphor?): Dark Debt’s cousin, Technical Debt.

Technical Debt is Known and Created On Purpose

Technical debt suffers, like many other software development metaphors, from multiple, sometimes conflicting, interpretations. From being present the moment any code is written, to associations with “quick and dirty” decisions that will result in future refactoring being required, you’d be forgiven for thinking that technical debt is a “genium malignum”, akin to Descartes’ evil genius/demon for software developers that’s always present to slip a design into the mire of hard-to-maintain legacy code.

In all the exploration of technical debt’s debits, two characteristics to its credit are common but rarely highlighted:

  • Technical debt is known.
  • Technical debt is created on purpose.

When you are deciding between two solutions and opt for the faster, less clean approach, you are doing so with intent. Technical debt is not a surprise. You know you’ve taken a specific path. You’ve consciously and, hopefully, collectively decided to accrue some design debt, and everyone is comfortable with that fact.

Far from being an evil demon, technical debt is a powerful metaphor to guide you to collaborate on “good enough for now” solutions, while ideally not losing touch with those decisions that might lead to problems further down the line.

With technical debt, you’re a willing collaborator. Not so with its cousin, Dark Debt…

Dark Debt is all about Surprise!

Dark debt, in stark contrast to Technical Debt, is always a surprise. That’s why it is dark: you don’t know you’re creating it and, even more importantly, you can’t know you’re creating it. Dark Debt is a natural occurrence in a sufficiently complex system, and it doesn’t matter how hard you think in advance – like a Pokémon trainer who hasn’t read the marketing materials, you absolutely won’t catch it all, because “Dark Debt is not recognisable at the time of creation”.

The effects of Dark Debt are complex system failures; anomalies that just were not predicted when a system was placed into production. I often encourage people to think of “incidents” as “surprises” for this exact reason, they are surprising! All incidents look anticipatable and avoidable in retrospect. At the time of the incident though, when Dark Debt is making itself known, things are surprising, confusing, and even brutally embarrassing. Far from obvious and avoidable. 

Surfacing Dark Debt

Technical debt has the advantage of being known, so you can choose when you pay it back. You can choose to do that in advance, or at the last responsible moment when your design is about to go bankrupt in the face of inevitable change – but it is a choice.

Dark debt doesn’t give you that choice. To deal with dark debt organisations are proactively investing in practices and tools to help them explore, experiment, and even experience it.

Dark Debt requires you to push and prod at your design in production so that you can uncover your dark debt before it chooses to rear its ugly head in an expensive outage, on its own terms and timescale. This is one of the major reasons that Chaos Engineering and Learning From Incidents are getting so much attention across the industry.

Chaos engineering is a proactive practice that embraces the job of surfacing dark debt by running controlled chaos engineering experiments. Through deliberately injecting turbulent conditions, such as failures, into a system you can throw a light on your dark debt ahead of time. Through chaos engineering you can explore and invest in being better prepared for dark debt on your own terms and timescales.
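The shape of such an experiment can be sketched in a few lines of Python. This toy harness is illustrative only (real tools, such as the Chaos Toolkit, target live infrastructure): verify the steady-state hypothesis, inject a turbulent condition, then check whether the steady state survived.

```python
# A toy chaos experiment showing the structure of the practice:
# 1. steady-state hypothesis, 2. fault injection, 3. verification.

def service(dependency_up=True):
    """A stand-in service that degrades gracefully: when a dependency
    fails it serves a cached response rather than erroring."""
    if dependency_up:
        return "fresh response"
    return "cached response"  # graceful degradation under failure

def run_experiment():
    steady = service() is not None                # 1. verify steady state
    during_fault = service(dependency_up=False)   # 2. inject the failure
    survived = during_fault is not None           # 3. did we stay available?
    return steady and survived

print("hypothesis held:", run_experiment())  # → hypothesis held: True
```

If the hypothesis fails, the experiment has surfaced dark debt on your terms – in a controlled test, not a production outage.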

The Learning From Incidents (LFI) work, originally headed up by Nora Jones (CEO of Jeli), and now augmented by her team’s recent work, “Howie: The Post-Incident Guide”, emphasizes how we can best learn from when dark debt rears its head in chaos experiments and actual surprise incidents.

The combination of LFI and chaos engineering is leading the charge on surfacing dark debt and improving a system’s resilience to it. While you can never be sure all dark debt has been overcome, by starting to explore these practices you can at least get ahead of your dark debt and be better prepared for when, not if, it makes an appearance!


Making cloud feature in business value


Mark Chillingworth

When choosing a house, the purchase is about so much more than a building. True, at a basic level, a house is a building with four walls, a roof and interconnections to infrastructure such as power, clean water and waste. But choosing a house is a far more involved and complex purchasing decision than just selecting the building. Buyers consider the travel options, possibly the nearby schools, local shops, access to parks and recreation and of course the features of the house such as the number of rooms, energy efficiency and the size of the garden.

As organisations increase their enterprise cloud computing adoption, the same process as buying a house occurs. The features of cloud computing need to be considered to ensure that the cloud delivers business value. As with a house, the features of the cloud are diverse; applications are the rooms where the business activity takes place, the cloud is also the infrastructure that provides energy to your organisation. Therefore defining the features that make up the cloud estate within the organisation is as essential as purchasing a home that benefits everyone who will live there. The feature set of the cloud estate must benefit all layers of the organisation, providing productivity, efficiency, security, growth and reliability. Together these will ensure the cloud delivers business value.

An event often triggers a home move – an increase in family size, the need to work from home – and the same is true of the move to cloud computing. The digitisation of the economy will require organisations to increase their usage of cloud computing if they are to effectively manage data levels, build apps and services that are cloud-native and therefore able to meet the needs of customers, and enable the diverse and remote workforce that will shape the modern enterprise. Before a home buyer hits Purple Bricks or Zoopla, at the very least, they jot down what the house must feature. To get business value from cloud computing, it is vital that organisations define their requirements and expectations. These will typically be the business needs and how enterprise cloud computing delivers business outcomes and meets key performance indicators (KPI).

The definition stage will help the organisation with the technical processes that follow, such as migrating applications and workloads to the cloud. With a well-defined understanding of the features that the business requires, it will be possible to determine the configurations and features needed. This process is best achieved as a business-wide objective, with technologists, business lines, senior leadership team and the technology partner working together to harmonise the requirements and, therefore, the feature set the enterprise will use.


Business Benefits

The features of the cloud estate must be constantly connected to business value. Management consultancy McKinsey notes in a paper that businesses that get the most value from software investments are those that “tackle entrenched cultural and structural barriers”. The features and scalability of cloud computing enable organisations to become data-centric, agile and efficient, which beneficially changes the culture.

Early in the move towards enterprise cloud computing, organisations simply lifted and shifted their application stack and business operating models to the cloud. The scalability of the cloud provided frontline workers with greater scope; as a result, cloud computing only temporarily reduced IT operating costs. The business value of the cloud is to re-engineer data and business processes to take advantage of the cloud’s scale and adopt features, applications, and tools that completely modernise the organisation and the way it operates.

The greenfield businesses that were born cloud-native have taken this approach, which has allowed them to seize a significant slice of their chosen vertical market.

McKinsey adds in a separate paper that organisations must use the features of the cloud to tackle integration, tech debt and patchwork service problems that inhibit the modern large enterprise, particularly those in financial services and the public sector. McKinsey warns that organisations can rapidly add new technology capabilities to their businesses with cloud computing, but without a clear focus on the business value, this can add to increases in business complexity and, therefore, the maintenance and operating costs.

Cloud adoption is also an opportunity to reduce existing costs and complexity. In partnership with eSynergy Solutions, public sector organisation Ofqual was able to reduce its dependence on legacy technology through a new enterprise cloud approach. eSynergy Solutions identified that Ofqual’s legacy infrastructure no longer fitted the needs of the digital and data services the government organisation offered; as a result, Ofqual carried a significant level of technical debt. From August 2020, eSynergy Solutions began replacing the legacy estate using the Microsoft Cloud Adoption Framework for the Azure environment. On-premises servers were migrated to Microsoft Azure and existing Azure deployments were replatformed, cutting inefficient Azure usage and reducing the department’s cloud computing costs. Ofqual has also benefited from increased automation, greater operational efficiency and improved working methods.

The structural modernisation of an organisation, which makes full use of the cloud and the features it offers, will deliver value if there is a constant focus on the needs of the organisation. Just as the features of a home may need to be changed according to new needs, this constant focus on usage will ensure the features match the demands.

Start the journey to a more impactful future

Blog Most Popular Insights

Developing your Resilience: 4 Capacities & 7 Properties


Russ Miles, eSynergy Lead Associate


In my last article I explored why high-performing teams and technology-driven organisations invest in resilience to augment their ability to evolve business-critical systems quickly. Resilience is the key to being agile while remaining reliable and secure. In this article I’m going to show you how to get started gradually, investing in your own system’s resilience capacities through 7 properties that you can start developing today.

Invest in Resilience Capacities
The good news is that investing in resilience can start gradually. In fact, I’d argue that the hardest thing is the switch in mindset from “It must work” to “It will fail, and we need to be better prepared for that”. This mindset change is the force behind the cultural change in your approach to resilience and reliability. In parallel, you can start to focus on the systemic capacities you can invest in developing to improve reliability and security while maintaining and improving your speed of delivery, such as:

  • Developing and improving your capacity to anticipate.
    Can we see problems coming? What signals are we looking out for?
  • Developing and improving your capacity to synchronize.
    When we anticipate something, how do we bring the right resources to bear?
  • Developing and improving your capacity to respond.
    With our resources in play, how do they respond? How effective are those responses?
  • Developing and improving your capacity to learn.
    Given how we anticipate, synchronize and respond to inevitable problems with reliability and security, how do we learn from those events and promulgate the learnings effectively across the organisation?
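As a concrete (if simplified) illustration, the four capacities above can be treated as a maturity checklist you revisit regularly. The scoring scheme and function below are purely illustrative, not a prescribed tool:

```python
# Hypothetical self-assessment of the four resilience capacities.
# The capacity names come from the article; the 1-5 scoring is illustrative.

CAPACITIES = {
    "anticipate": "Can we see problems coming? What signals are we looking out for?",
    "synchronize": "When we anticipate something, how do we bring the right resources to bear?",
    "respond": "With our resources in play, how effective are our responses?",
    "learn": "How do we learn from problems and spread those learnings?",
}

def weakest_capacity(scores: dict) -> str:
    """Return the capacity with the lowest maturity score, i.e. the best
    candidate for your next gradual investment."""
    return min(CAPACITIES, key=lambda c: scores.get(c, 0))

scores = {"anticipate": 3, "synchronize": 2, "respond": 4, "learn": 1}
print(weakest_capacity(scores))  # -> learn
```

The point of a sketch like this is not the tooling but the habit: making the capacities visible and picking one to invest in next.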

In a future article we’ll dive into some of the strategies we’ve seen working to develop these capacities in different real-world contexts. For now, let’s dive a little deeper into how you can develop those capacities for the challenge of system reliability.

Develop Resilience Properties for a Specific Goal
In my experience there are 7 key measurable properties that you can develop across your business-critical socio-technical systems to improve your resilience for a given concern, e.g. reliability. Those properties are best expressed as a set of questions about a desirable systemic quality (call it X for now) that we are looking to develop with resilience:

  • How do we define X?
  • How do we observe X?
  • How do we explore X?
  • How do we fix/improve X?
  • How do we continuously verify X?
  • How do we learn with regards to X?

For the case of developing your system’s reliability, you would interpret these properties as:

  • How do we define reliability?
  • How do we observe reliability?
  • How do we explore reliability?
  • How do we fix/improve reliability?
  • How do we continuously verify reliability?
  • How do we learn with regards to reliability?

For each new desirable systemic quality that you wish to develop you build a plan that purposefully invests in developing each of these properties for that quality. As you iterate over those plans you are looking to gradually and measurably invest in developing your socio-technical system’s resilience capacities.
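The plan-building step above is essentially template expansion: the same question set, instantiated for each quality you care about. A minimal sketch, assuming nothing beyond the question list in this article:

```python
# Instantiate the property questions from the article for any systemic
# quality X, e.g. "reliability" or "security".

PROPERTY_TEMPLATE = [
    "How do we define {x}?",
    "How do we observe {x}?",
    "How do we explore {x}?",
    "How do we fix/improve {x}?",
    "How do we continuously verify {x}?",
    "How do we learn with regards to {x}?",
]

def resilience_plan(quality: str) -> list:
    """Return the set of property questions for a given quality; each
    question becomes an item your plan must purposefully invest in."""
    return [q.format(x=quality) for q in PROPERTY_TEMPLATE]

for question in resilience_plan("reliability"):
    print(question)
```

Running `resilience_plan("security")` produces the equivalent plan skeleton for security, which is exactly the iteration described above.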

Next Steps
In this article you’ve learned about the 4 resilience capacities that you can invest in to gain the advantages of speed of delivery without sacrificing crucially important concerns such as reliability and security. Those capacities can be improved and evolved for reliability and security by building concrete plans to define, observe, explore, fix/improve, continuously verify and learn: the properties to develop for resilience in a particular area.

In future articles I’ll explore some concrete approaches to developing each of these resilience properties in specific contexts, such as tips for developing reliability in a FinTech environment.

Russ Miles, a Lead Associate of eSynergy, is on a mission to help organisations establish agile, reliable, secure and, ultimately, resilient and humane socio-technical systems that enable all stakeholders, from the users and customers to the builders and operators, to thrive inside and outside of those systems.

Russ is currently a lead engineer with Segovia Technology at Crown Agents Bank, where his team develops the payment and foreign exchange systems that help incredible organisations such as the UN and Save the Children distribute much-needed funds to hard-to-reach countries and markets. Russ is co-founder of the free and open source Chaos Toolkit project. He is also an international consultant, trainer, speaker and author. He is a recognised expert in Chaos Engineering and has contributed to “Chaos Engineering: System Resiliency in Practice” from O’Reilly Media, as well as having written “Learning Chaos Engineering”, also from O’Reilly Media, in which he explores how to build trust and confidence in modern, complex systems by applying chaos engineering to surface evidence of system weaknesses before they affect your users.

Russ can be reached on Linkedin and on Twitter.


Setting up, building & maintaining a Community of Practice (CoP)


To Enable Organisational Change, Skills Transfer & Innovation: Best Practice and Challenges

With increasingly complex technology landscapes in most organisations, the need for IT departments to communicate, share knowledge efficiently and maintain transparency across the organisation is more important than ever. It ensures the smooth deployment of a growing number of systems and applications, with more changes per year and more complexity and risk attached to each implemented change.

We have seen across many industries over the last couple of years significant deployment failures with customer outages costing millions in compensation post-event.

Facilitating improvements in communication, knowledge sharing and transparency across the organisation is fundamental to reducing these types of issues.

This need for improved communication naturally leads to the creation of various Communities of Practice, to allow people to share experiences and knowledge with other members of the company either within the same group or outside the structured organisational boundaries.

A Community of Practice (CoP) can be set up to cover any subject matter and can include people from various groups and backgrounds who have an interest in the topic. Often the CoP will focus on a new technology that the organisation is looking to adopt. Given my background in Artificial Intelligence, many of the CoPs I have been involved with have focused on the adoption of AI and Machine Learning. At other times the CoP can focus on a process improvement or even a job-role specialism; for example, I have been involved in broader areas such as Innovation or Lean Delivery CoPs. Regardless of the topic, the challenges for the CoP are the same.

However, despite the importance of CoPs, setting up and running them successfully long-term is a non-trivial exercise. We have seen many examples where organisations set up a CoP but fail to sustain it: the initial enthusiasm dissolves, the core group never reaches critical mass, or the organisers run out of interesting agenda items and topics. Combined with the pressures of members’ main roles, this means a CoP, while hugely beneficial if done correctly, can be difficult to sustain.

Over the last few years I have set up and run a number of different CoPs in a number of organisations. I have seen many of the problems that challenge the longevity of a CoP, and I have developed solutions that turn those challenges into opportunities, fundamentally securing both the need for the CoP and the support for running it.

One of the key success factors for a CoP is purpose; without it, the CoP will eventually collapse. Defining the CoP’s purpose, its reason for existing, from the outset is really important, as is communicating that purpose to the wider community, potential members, stakeholders and advocates. This messaging needs to be reinforced regularly so that members who attend understand how important the group is to management and the organisation. The purpose needs to be linked to the business and technology strategy and seen as a driver of their success.

Another key success factor is to give the CoP ownership and decision-making authority within its own field of expertise. Allowing the members to be masters of their own destiny is tremendously powerful. This will align well with an agile and lean methodology of constant improvement but also drive engagement and contributions to facilitate best practice across the organisation.

A well-organised and well-managed CoP will not only keep and grow the community internally but will also introduce external speakers who can share wisdom from other organisations that may be more advanced in the area of interest, providing valuable insights that accelerate progress and help avoid pitfalls. External speakers supplement CoP agendas that already feature people within your organisation sharing their own experiences, successes and challenges as part of the knowledge-sharing focus.

Another key factor for the CoP is membership recruitment. It is essential to grow the membership of the group to keep it active; over time, people join and people change roles, so an ongoing focus on membership acquisition is important. It is also worth noting that different people join a CoP for different reasons. Some join simply to learn: they are interested in the topic but do not consider themselves experts and may not even be working with the subject matter (though they may want to in the future). These members are important, but most are unlikely to be what I would call active members of the community. Others join because it is a topic they know and work with; they want to learn from others but also to contribute their own views and opinions. These will be active members, offering views and thoughts and potentially even speaking about the work they are doing. A few members of the group will be your core team: very active, joining every meeting, wanting to contribute to the purpose and objectives, and able to help with any deliverables you decide the group should focus on. Understanding this mix of participation, and encouraging members who want to contribute more, is another key success factor for the community.

Another success factor is reporting the CoP’s progress. This information is useful both to existing members, who can see how active the group is and how it is growing its influence, and to senior management, stakeholders, advocates and sponsors, for whom it illustrates the progress and value the group has achieved. The reporting will take various formats, from quarterly updates to weekly status reports, and each will have a different audience and be used in different ways to promote the group.

These are just a few of the key success factors for a CoP. There are many other aspects to sustaining a Community of Practice, learnt from many years of running such communities both within organisations and externally in the public sector, that will help ensure your CoP not only runs long-term but also adds value and delivers organisational and technical change aligned with your overall strategy.

When a CoP is set up and run well, it can be a significant force for change, helping an organisation adopt technologies and processes both faster and more successfully than would otherwise be possible. I have seen CoPs bring together different parts of an organisation (that previously thought of each other as competitors), surface projects and technology implementations that would not have been widely shared without the CoP, and provide a platform for agreeing standards and principles that matter to the organisation from an audit, compliance and governance perspective.

By Andy Pardoe
eSynergy Associate

We are well positioned to help you leverage the benefits of CoPs within your organisation, whatever your experiences of CoPs in the past.

For more information and to arrange a meeting please contact Adele Lewis.


Communitea with Dave Farley


In this series we’re bringing a stream of leaders from across the world of tech and digital together to discuss a number of thought-provoking topics, giving you insights from those pushing the boundaries and driving innovation and change.

Today I’m delighted to be joined by Dave Farley, pioneer of Continuous Delivery, thought-leader and expert practitioner in CD, DevOps, TDD and software development in general.

We’re going to explore some of the biggest challenges leaders face and find out how you can drive business or customer value and what makes a successful team today.

So let’s begin!

Dave, welcome! Tell us a bit about yourself?

I am Dave Farley. I am an author, software developer, consultant and speaker.

I work as an independent Software Engineering Consultant, advising organisations and speaking at conferences and other events all around the world. I am one of the authors of the Continuous Delivery book and the Reactive Manifesto. My consultancy practice is mostly organised around those approaches.

Previously I was the Head of Engineering for one of the world’s highest performing financial exchanges, employing advanced Continuous Delivery techniques in a technically demanding, regulated industry.

What does a typical day look like for you?

I started my own business a little over 5 years ago. My work is pretty varied. I am currently developing a YouTube channel to discuss ideas about Continuous Delivery and Software Engineering, and I’m developing a series of video-based training courses to help people create “Better Software Faster”. I am consulting regularly, albeit remotely, and am writing a book on Software Engineering. I also have a couple of side-projects, so I write code every week, if not most days.

How are you driving business or customer value through technology or the cloud?

My business is really about education, rather than software product delivery, these days, though the bulk of my career has been spent creating complex software systems.

My “thing” is an engineering centred approach to software development. Continuous Delivery is currently “state-of-the-art” in Software development process. It helps the biggest, most successful SW companies in the world to deliver great products, quickly and efficiently with VERY high quality. I help my clients to achieve similar results.

What are the biggest challenges as a leader you face today?

I believe that we, the software industry, have found the answer to how to build high-quality software effectively and efficiently. We now know, and have evidence for, what works. The problem is getting people to understand it and adopt it. One of the things that I have learned, being involved at the birth of several now widely adopted ideas, is that “semantic diffusion” is an incredibly powerful thing. Nearly everyone misunderstands popular ideas, often missing some of the most valuable aspects of those ideas.

I see my job as an educator and coach to help people have a better framework to process and adopt ideas that matter, and also a framework to help them to discard ideas that don’t matter.

What does a successful team look like?

A successful team is small, autonomous, and has all of the skills it needs to make its own decisions without referring to anyone outside the team for help during the majority of their work. They can produce a “releasable outcome” multiple times per day and spend 44% more time on new work than lower-performing teams (source: “Accelerate: The Science of Lean Software and DevOps” by Nicole Forsgren, Jez Humble and Gene Kim).

How do you ensure you stay ahead of the curve?

One of the reasons that I chose Software Development as a profession, was because I am addicted to learning, so I am always interested in new ideas in our field. I am less focussed on tools and frameworks and more focussed on patterns, principles and design thinking. I am an avid reader and watcher of presentations and explanatory videos.

Which quote defines you?

“It doesn’t matter how intelligent you are, if you guess and that guess cannot be backed up by experimental evidence – then it is still a guess!” – Richard Feynman

Tell us a story you’re not telling enough?

I once created an automated Deployment Pipeline for Deployment Pipelines, complete with unit and acceptance tests.

Who inspires you?

Lots of people: my wife, Richard Feynman, Sean Carroll, David Deutsch, Alan Perlis, Kent Beck, nearly everyone that I have ever worked with.

What three pieces of advice would you give to the next generation of leaders?

  1. Don’t think that your job is to be the smartest person and come up with all of the ideas. Use evidence, not guesswork or only experience. Work experimentally!
  2. Your job is to have a vision for where you want to go, but the team should work out how to get there.
  3. Effective leaders amplify the power of a team. Think of yourself as a sports-coach, hire talented people, try and help them develop and grow their talent.

A huge thank you to Dave for sharing these insights in our Communitea series. If you would like to participate in a future session, or have questions you would like to ask our next series of leaders, please get in touch with Adele Lewis.

To get in touch with Dave, follow him on Twitter @davefarley77.


5 take-homes for building a cloud self-service platform


Self-service development is an essential characteristic of cloud computing. It allows engineers to develop and ship code faster, releasing new features and products to market at pace. But it can be one of the hardest cloud-based capabilities to implement…

Steve Wade, ex-platform lead at Mettle, has served in technology leadership roles across financial services, government, real estate and gaming. He recently presented as part of the eSynergy Tech Series, looking back at his time leading the team at Mettle as they created a dynamic self-service platform for engineers.

The challenge at Mettle

Mettle is a venture inside NatWest. It was spun up to provide business banking for small and medium-sized enterprises. The Mettle offering is completely digital.

The challenge was for Steve and his team to remove red tape, innovate at speed and save money (cost per customer), all the while providing a scalable platform to service Mettle’s customers.

There was initially a single platform engineer, swamped with requests from developers wanting to innovate. This engineer was unable to help because of their backlog. There was a constant tug of war between development and operations… This had to be addressed.

Steve’s 5 take-homes…

1. “Once you’ve found a pattern that works, your next goal should be helping others to do the same.” – Steve Wade.

At Mettle, we found a pattern that worked: one that scaled for the number of developers we had, and that enabled innovation within the product teams without the platform team and the platform getting in the way.

2. “Confidence is contagious and so is a lack of confidence” – Vince Lombardi…

We had to stop the existing tug of war between the developers and the platform team – we had to become one big team working on a common goal to deliver functionality and features to our customers.

This meant instilling confidence and providing self-service to the product teams, while getting away from having a massive backlog of things we needed to do for them.

3. In any good transformation, you need a good mission statement: one you can stand behind.

At Mettle, it was this: “To provide an easily extensible, config driven, ephemeral platform, with clear ownership with – most importantly – reliability and consistency.”

We defined the use of Terraform upfront, from day one. We wanted an à la carte menu, allowing the developers and the product teams to pick and choose the modules they needed to deploy and run their applications or microservices.

We made this developer-friendly by implementing Atlantis as a centralised tool that all infrastructure would run through (Terraform execution by pull request).

Crucially, we achieved clear ownership by leveraging GitOps, and deployed changes through the environments by embedding GitOps into the way developers work – we did not replace the way they worked with GitOps.
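At its core, GitOps means Git holds the desired state and an agent continuously reconciles the live system towards it. A minimal, tool-agnostic sketch of that reconciliation idea (the state shapes and names here are illustrative, not Mettle’s actual implementation):

```python
# Illustrative GitOps-style reconciliation: compare the desired state
# recorded in Git with the live state and derive the actions needed.
# Keys are application names; values are deployed versions.

def reconcile(desired: dict, live: dict) -> list:
    """Return the actions needed to bring the live state in line with
    the desired state held in Git."""
    actions = []
    for app, version in desired.items():
        if live.get(app) != version:
            actions.append(f"deploy {app}@{version}")
    for app in live:
        if app not in desired:
            actions.append(f"remove {app}")
    return actions

desired = {"payments": "v2", "ledger": "v1"}
live = {"payments": "v1", "legacy-batch": "v3"}
print(reconcile(desired, live))
# -> ['deploy payments@v2', 'deploy ledger@v1', 'remove legacy-batch']
```

In a real GitOps setup a tool runs this loop continuously, which is why commits and PR merges become the complete record of every change made to the system.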

We established a cookie-cutter environment for developers to deploy any application they wanted, to any Kubernetes platform, in a consistent fashion. This consistency was achieved through a three-character environment prefix (for dev, UAT, staging and production) that flowed through everything we did at Mettle.
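A naming convention like this can be sketched in a few lines. The exact prefixes below (dev, uat, prd) are assumptions for illustration, not Mettle’s documented scheme:

```python
# Hypothetical three-character environment-prefix naming scheme.
# The prefixes here are examples, not the actual ones used at Mettle.

VALID_ENVS = {"dev", "uat", "prd"}

def resource_name(env: str, team: str, app: str) -> str:
    """Build a resource name that always leads with the environment
    prefix, so the environment is visible on every resource."""
    if env not in VALID_ENVS:
        raise ValueError(f"unknown environment: {env!r}")
    return f"{env}-{team}-{app}"

print(resource_name("dev", "payments", "api"))  # -> dev-payments-api
```

Rejecting unknown prefixes at the point of naming is what makes the convention enforceable rather than advisory.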

4. “Technology is easy, people are hard.” – Heather Downing.

The people part of the puzzle was the biggest challenge. Application engineers are not Kubernetes experts – their expertise is in application development. We had to work collaboratively with them on the migration and rollout, and leveraging the platform.

We had the stability of being backed by a large organisation but they put us very much under the microscope (particularly with regards to compliance and audit). Because we used Git as our central source of truth, our commits and our PR merges to master provided the most complete audit trail possible.

Our challenge throughout, was distilling something that – from the outside – looked ridiculously complicated, down to the bare minimum that people at Mettle needed to be able to leverage the platform and innovate for their customers while building better products.

5. The platform team’s customers are the developers. If the developers are happy, the platform team will surely be happy.

The most challenging part of the migration was getting everyone on the same page from an understanding point of view. There was a shared responsibility and ownership model but there was also a shared support model. So, we needed to make sure all the product teams understood the components they needed to deploy changes for their apps – but also understood the bigger picture of how everything was put together.

Our work meant that:

  • Production deployments increased by 50%.
  • Cluster MTTR was 20 minutes (excluding data).
  • Developers spent 75% less time on operations, and more time delivering value to customers.

There are now clear ownership boundaries at Mettle. The platform is efficient and enables self-service. Developers can get on with their product development and keep the product leads and product teams happy. In short: everybody wins.

For more information on what Steve and his team achieved at Mettle, watch his eSynergy Tech Series presentation here.
