Building out new products is often a case of combining custom code with SaaS services to deliver a unique solution to customers.
SaaS services include customer service tools such as ZenDesk, card payment solutions such as Stripe, and ecommerce fulfillment management tools such as ShipStation.
A common challenge faced by teams is integrating these products. For example, capturing when a product has been shipped in ShipStation and updating ZenDesk so that customer service teams are aware if a customer calls up, or capturing failed subscription payments in Stripe and triggering an automated process to let the customer know.
This is where iPaaS (Integration Platform as a Service) tools can step in. If you've never heard of the term before, you've probably heard of products like MuleSoft and SnapLogic, and you can see a list on the Gartner website [0] or read a write-up of the main points [1].
Tool vendors often claim that they simplify the integration of systems, and even allow people without software engineering skills to carry out work that would usually be done by developers.
I think this is untrue, and that they cause more problems than they're worth.
I think that cloud tooling, open source software packages and software engineering techniques provide all of the benefits, but with fewer downsides, and lower total cost of ownership than iPaaS platforms.
Here are some reasons to think before you buy into the idea of using an iPaaS solution.
Let's assume we've got a scenario where we have two SaaS products that "need to talk to each other". If the connection is standard and the SaaS products are popular, they'll probably already "talk to each other" via a ready-made connector. For example, Shopify and Stripe [2] or Salesforce and Docusign [3]. There's no need to introduce an iPaaS to connect them.
You might need to enrich data by connecting to other services, restructure data as it passes through, or redact sensitive data as it moves between systems. It's highly unlikely that the off-the-shelf connector will do this.
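As a rough illustration of the kind of transformation a connector rarely covers, here's a minimal sketch in Go that reshapes an incoming order and redacts a card number before passing it on. The systems, struct fields and mapping rules are all hypothetical.

```go
// A minimal sketch of a custom transformation between two systems:
// reshape an inbound order and redact the card number before it is
// forwarded on. The shapes and field names are hypothetical.
package transform

import "strings"

// InboundOrder is an example shape received from the source system.
type InboundOrder struct {
	OrderID      string `json:"order_id"`
	CustomerName string `json:"customer_name"`
	CardNumber   string `json:"card_number"`
	SKU          string `json:"sku"`
}

// OutboundTicket is an example shape expected by the destination system.
type OutboundTicket struct {
	Reference string `json:"reference"`
	Summary   string `json:"summary"`
	LastFour  string `json:"card_last_four"`
}

// Map reshapes the order and redacts the card number down to its last four digits.
func Map(o InboundOrder) OutboundTicket {
	lastFour := o.CardNumber
	if len(lastFour) > 4 {
		lastFour = lastFour[len(lastFour)-4:]
	}
	return OutboundTicket{
		Reference: o.OrderID,
		Summary:   strings.TrimSpace(o.CustomerName + " - " + o.SKU),
		LastFour:  lastFour,
	}
}
```

It's a few lines of ordinary code, but it's exactly the sort of requirement that turns a "no-code" connector into a custom development job.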
If you're buying into a platform based on the idea that some future integration will be easier, that's quite a gamble. You may also find that the connector is not very fully featured. It ticks a box in your RFP process, but when you come to use it, you may find it doesn't meet expectations.
If your needs are so basic that the connector does what you want, great, but it's unrealistic to think that you're going to be dragging boxes together to make everything work perfectly.
If you're using AWS or another cloud platform, you have everything you need to integrate systems.
You have low code. The JavaScript NPM repository has over 1M packages in it, and the .NET NuGet package repository has 250k packages. That's software your teams don't need to write.
You have low operations. Serverless platforms like AWS Lambda, EventBridge, and Step Functions give you very low operational overhead and incredibly low running costs. You're not managing operating systems, you're not dealing with servers running out of disk space, and you're not running your own logging infrastructure.
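To give a flavour of what that looks like, here's a minimal sketch of a Go Lambda function handling an EventBridge event - say, a failed payment forwarded from a SaaS webhook - before calling a downstream API. The event source and detail shape are assumptions for illustration, not any particular vendor's schema.

```go
// A minimal sketch of a serverless integration on AWS, assuming an
// EventBridge rule forwards failed-payment events to this Lambda.
// The event detail shape is hypothetical.
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

type paymentFailed struct {
	CustomerID string `json:"customer_id"`
	InvoiceID  string `json:"invoice_id"`
}

func handler(ctx context.Context, e events.CloudWatchEvent) error {
	var detail paymentFailed
	if err := json.Unmarshal(e.Detail, &detail); err != nil {
		return err
	}
	// In a real integration, this is where you'd call the downstream API
	// (e.g. create a customer service ticket). Logging stands in for that
	// call in this sketch.
	log.Printf("payment failed for customer %s, invoice %s", detail.CustomerID, detail.InvoiceID)
	return nil
}

func main() {
	lambda.Start(handler)
}
```

There's no server to patch, no workflow engine to operate, and the whole thing fits in a single file that your existing tooling can build, test and deploy.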
By buying in best-of-breed solutions, are you actually just adding an unnecessary platform?
Tools like SnapLogic show their drag-and-drop interfaces off in demos. I've been around for long enough to see several iterations of this idea play out.
Around 10 years ago, I led a team to build a system on Windows Workflow Foundation based on the idea that business users, and semi-specialist "Automation Engineers" would be able to change the workflow to suit customer needs.
The cost of doing this was huge, but the vast majority of customers used the default workflow with a few minor tweaks, and the ones that did need custom workflows needed custom software development anyway.
The workflow visualisations that we were so impressed with in the early stages weren't information dense enough for expert users. We replaced all the drag-and-drop stuff with dense tables, driven by web sockets.
Are you buying a product that solves a problem that you really have, or are you buying a bag of features? Don't make the same mistake I did.
I've built several systems on top of workflow engines, and I can't think of a time when we didn't end up removing the workflow engine and replacing it with something custom, after wasting a lot of effort trying to understand how to operate it.
The key things to understand were the failure modes.
How does buying a platform help you with these problems?
The recent-ish improvements in software engineering, for me, have been in the following areas.
We have accepted that configuring a system carries the same risk as making a code change, and we push those changes through a process that uses automated testing, deployment and infrastructure as code techniques to catch issues early.
As an industry, we've developed significant knowledge around testing approaches and built a large quantity of open source programming languages, frameworks and tooling.
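To make that concrete, the hypothetical mapping function sketched earlier can be covered by a plain Go test using the standard library's testing package, run in any CI pipeline - no vendor-specific test runner required.

```go
// A minimal sketch of testing the hypothetical Map function from the
// earlier transform package, using only Go's standard testing tooling.
package transform

import "testing"

func TestMapRedactsCardNumber(t *testing.T) {
	got := Map(InboundOrder{
		OrderID:      "ord_123",
		CustomerName: "Jane Doe",
		CardNumber:   "4111111111111111",
		SKU:          "SKU-42",
	})
	if got.LastFour != "1111" {
		t.Errorf("expected card number redacted to last four digits, got %q", got.LastFour)
	}
	if got.Reference != "ord_123" {
		t.Errorf("expected reference to carry the order id, got %q", got.Reference)
	}
}
```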
If I search for the free and open source Jest testing framework with `+jest +test` I get 400M results. If I search for the proprietary unit testing tool embedded into a vendor's product, I get 1M hits.
I don't think it's a good tradeoff to swap a vast marketplace of tools and techniques for a walled garden. It's like swapping a car for a railway ticket - great between major cities (at certain times of day), but if you need to get somewhere outside the city, you might be in for a hard time.
Take a look on LinkedIn at how many people there are with MuleSoft skills in your area. Then look at how many people there are with JavaScript or C# skills.
At the time of writing, LinkedIn shows only 2,700 results for people with MuleSoft as a skill in the UK, compared to 95,000 with AWS (35x more).
If the argument is that you can train people up, then the counterargument is that there are already countless coding schools set up to do the same thing - I've hired from them.
If you hire for general developer skills, you can use that person for all sorts of different projects and tools.
The "semi-skilled" people you think you need to build your mission critical platform in a low code or integration system need developer level skills anyway. If you look at what they're producing, it's a computer program. It has if statements, switch statements, variables and all the rest.
They've had to understand complex requirements, manage data formats, write SQL statements, design data mappings, enter regular expressions, talk to stakeholders, deal with testing, and understand integrations. If they can do all that, it's surely not helping them to drag items around a screen, or to write program code as XML fragments.
I'd expect a high turnover of staff, as people leave to work at companies that give them marketable skills. Unless the plan is to create a team of people who can't leave because they don't have marketable skills, of course...
I've watched my son learn programming from a young age. First building up complex games and projects using the Scratch programming language, and now moving between Node.js, Python, C, C#, and Go. I asked him if he would use tools like Scratch and Microsoft Blocks again, instead of symbolic programming languages. He said that it's just much too slow, and that keyboard-driven autocomplete makes it much faster to work. He's 11 years old.
I think it's a mistake to believe that people can't learn to code, and learn it quickly. Today's engineers have books, blogs, tutorials, Stack Overflow, YouTube, video training, conferences, bootcamps - the list goes on.
By funneling people into a platform that limits their options, you're stopping them from making use of the full range of tools and support that programmers have at their disposal. Documentation is less useful, you can't benefit from the open source ecosystem, you're reliant on fewer vendors.
Avoiding or working around licensing restrictions and costs tends to drive a number of anti-patterns: shuffling or limiting developer licenses, per-vCPU licensing compromising the hosting design or encouraging a single point of failure, and using non-representative non-production environments to avoid paying for premium features such as high availability.
iPaaS seems to be a product category where the people buying it are buying it for someone else to use.
This makes it difficult for the buyers to assess it by carrying out hands-on comparisons against other techniques, and iPaaS products are likely to impress until the "edge cases" start arriving.
I'm all for using sharp, focused tools for delivering an outcome. Customer.io's workflow tools? Great. ZenDesk's issue management workflows? Great.
But using those types of tools as a general platform for API development and integration? I'd encourage thinking about whether the overall TCO stacks up.
Think about the integrations you're going to need. How many of these are actually custom integrations into existing or proprietary systems that will require software engineering anyway?
Think about how this new system fits within the wider environment. Will you be doing extra work to integrate it into your environment? With that work, is the value of the product still justifying its cost?
Think about available skills. Can you recruit? Will you have to train? How will this affect your operational model?
Think about how you'll run this. Are non-production environments available, and how are they licensed? How does this affect the costs? What are the licensing costs? How will this system be fault tolerant? If the performance is poor, what can you do? When a new version of the platform comes out, how much effort is it to upgrade? Will you need to take downtime?
Think about 3 years from now. How do you think it will have gone when you've got years of changes in the platform?
Think about the licensing costs. Will that affect how you use the system? What will you do if you need to double your licensing spend? Is your business seasonal, do you need to price for peak load? (I've seen plenty of retail systems that peak at over 10x normal volume).
Think about what happens when things go wrong. When the system is down, how much help are you really going to get? Does the support only cover the base platform software and built-in connectors, leaving your customisations untouched?
Think about how changes you might need will be handled. If something's wrong, how long will you be waiting for the next release? Will your change just sit on the backlog?