The 'Known Good' Podcast

Kubernetes w/ CPO Paul Farrington

December 01, 2021 Glasswall Season 1 Episode 2

88% of organisations today run Kubernetes to manage containerised software. Cloud-native technologies are now emphatically mainstream. In this podcast we discuss how teams can use the elastic nature of Kubernetes to secure files and remove threats from them at industrial scale.

David: ​Hi, and welcome to the Glasswall podcast. Today we're talking to Glasswall Chief Product Officer, Paul Farrington. Paul, hello and welcome.

 

Paul: ​Hi David, great to be with you.

 

David: The topic we're looking at today is delivering security at scale. The context for this is that as more applications, services and data move to the Cloud, security faces the challenge of keeping pace with that speed of change, and of doing so with Cloud-native technologies. So Paul, over the next few minutes, you're going to walk us through this important issue. You're also going to focus on one particular piece of Cloud-native technology, Kubernetes, and how it meets this need for security at scale. Before we get there, could I ask you to explain what is meant by Cloud-native? And why is it important in the cyber security context?

 

Paul: Yeah, sure. These technologies have really come to the fore because organisations deploying into Cloud environments are trying to come to terms with the challenges of deploying infrastructure at scale, moving beyond the need to deploy physical boxes or even virtualised machines, and trying to break down or decompose monolithic applications. Think of spaghetti code: applications which are really hard to maintain, resistant to change and innovation.

 

When you think about that driving force, organisations having to compete in the market, to innovate faster, to be agile, the need to be able to effect change has really driven organisations to figure out how best to deploy computing services, particularly into Cloud environments that go with the grain, as it were, of needing to achieve agility.

 

So Cloud-native technologies really help organisations to build and run scalable applications in a really dynamic environment. That could be public, private or hybrid Clouds. Some of the attributes you would see in Cloud-native technologies would be the use of containers. You've heard, I'm sure, of Docker, which is one species, as it were, of how containers can be managed. There are open standards now for how containers can be used in production, so it doesn't necessarily need to be Docker as a technology.
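
As a tiny, hedged illustration of the container idea, here is a Python snippet that launches a throwaway container via the Docker CLI, assuming Docker (or a compatible OCI runtime CLI) is installed; the image and command are placeholders chosen purely for illustration.

```python
import subprocess

# Run a short-lived, isolated process inside a container, then remove it.
# 'python:3.11-slim' is a public image used here purely as an example.
subprocess.run(
    ["docker", "run", "--rm", "python:3.11-slim",
     "python", "-c", "print('hello from inside a container')"],
    check=True,
)
```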

 

You also see functions as a service, so-called serverless computing. Actually, there are servers behind them, but for the average user, or the developer creating functionality with functions as a service, it basically abstracts away all the cares around the infrastructure. You can just focus on building discrete functions, and then compose those together into a system which is highly functional and gives you lots of agility.
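
To make that concrete, here is a minimal sketch of what one such discrete function might look like, loosely in the style of an AWS Lambda handler written in Python; the event fields and the work it does are hypothetical, for illustration only.

```python
import json


def handler(event, context):
    """A discrete, self-contained function: no servers to provision or patch.

    The platform invokes it on demand and scales it automatically.
    The 'document' field is a hypothetical input, just for illustration.
    """
    document_name = event.get("document", "unknown")

    # Do one small, discrete piece of work...
    result = {"document": document_name, "status": "processed"}

    # ...and return, leaving the infrastructure concerns to the platform.
    return {"statusCode": 200, "body": json.dumps(result)}
```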

 

Cloud-native technologies, including Kubernetes, are really the modern approach to how you deploy and then run your applications, but they also have change, or the ability to change, built into how we think about maintaining an application.

 

David: ​So this is contributing to an environment where organisations really need to deliver security at scale, but why is that important? Where does the Cloud fit in? Perhaps why can't traditional security technologies meet those needs?

 

Paul: So I think these Kubernetes technologies bring both opportunities and challenges for organisations. Some of the opportunities they afford organisations are to be, as I mentioned, far more agile, to be able to decompose and to separate concerns in how you actually construct your applications and deploy them into your production environment. And that's great, that really helps teams to move faster, with greater velocity.

 

Also for the operations people to have greater confidence, to have that certainty and predictability about how the systems are going to run at scale. But what that also demands is a change in how we think about security, because some of these technologies don't come secure out of the box. They need to be hardened. We need to think about how to observe security at scale across these Cloud-native environments. In particular, if you're using containers, microservices and Kubernetes, we need to think about how we're running those deployments at scale in a secure fashion.
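
To give a flavour of what "hardening" can mean in practice, here is a minimal sketch of a Kubernetes pod manifest, expressed as a Python dictionary, with a locked-down security context. The names and image are hypothetical, and a real deployment would add much more (network policies, resource limits, admission controls).

```python
import json

# Illustrative pod manifest with a restrictive securityContext.
# The image and names are hypothetical placeholders.
hardened_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "file-processor"},
    "spec": {
        "containers": [
            {
                "name": "engine",
                "image": "example.registry.local/file-engine:1.0",  # hypothetical image
                "securityContext": {
                    "runAsNonRoot": True,                # never run as root
                    "allowPrivilegeEscalation": False,   # block privilege escalation
                    "readOnlyRootFilesystem": True,      # immutable container filesystem
                    "capabilities": {"drop": ["ALL"]},   # drop all Linux capabilities
                },
            }
        ]
    },
}

# kubectl accepts JSON manifests as well as YAML, so this could be
# written to a file and applied with 'kubectl apply -f'.
print(json.dumps(hardened_pod, indent=2))
```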

 

Also, among the other opportunities now afforded to companies is the ability to assess their security posture in a way which is far more efficient and scalable. And it's not just about the software that's running; think about the data which is flowing between these different services.

 

So think about files, for example: being able to understand at any moment in time, whether the file is at rest or in transit, what the security posture is for that object, that data object, the file, the business document, and to be able to remove those threats. Having the ability to insert, to inject security into different processes, because it's more discrete, is far more maintainable.

 

This is a fantastic opportunity for organisations to use these contemporary technologies and to really address security in a far more elegant and scalable way.

 

David: Right, we've mentioned Kubernetes a couple of times in relation to being Cloud-native and to scalable security. But there is a bigger background here, isn't there? Because Kubernetes is having a much bigger impact than just on security, isn't it?

 

Paul: Yeah, I think Kubernetes is really helping organisations to drive down the cost of infrastructure, and it's not just limited to Kubernetes. There's serverless computing, functions as a service. You may have heard of Lambda functions, for example, with AWS: the ability to run code in very discrete pieces, without necessarily having to care about spinning up servers to run those functions. So serverless computing is definitely one pattern that's core to Cloud-native technologies, which answers the scalability question. It answers the ability to execute discrete pieces of code very rapidly and at scale.

 

And then Kubernetes is, I guess, a complementary technology, but it's perhaps more concerned with being able to execute any type of software. You can put it into a container and you can make it run, and Kubernetes lends itself very well to that type of design pattern.

 

So arguably -- and there are people who perhaps have a bias for particular types of technologies, but arguably Kubernetes is more tolerant of the hybrid approach of -- certainly tolerant of the heterogeneity of how you want to compose your systems. 

 

That gives great utility to organisations in how they think about responding to the market, being able to decouple the concerns of one system from another and still interact with other systems, and to do that at scale. What we're trying to achieve in these deployments is observability: to understand, at any moment in time, the system's health, and then be able to react to problems in real time.

 

And as load is presented to the environment -- so let's say the organisation is a book shop, it's approaching Christmas, the latest top-selling book has come onto the market -- you need to be able to scale to the potential demand that may present itself on any particular day, maybe because of a sales campaign.

 

So the burstability, the near-infinite scale that Kubernetes, and for that matter serverless computing, can provide is highly attractive. Then, when the campaign or that season of promotions has waned, you can scale your infrastructure back down. That is really quite something, and that's one of the reasons why we are seeing so much demand in some of these Cloud-native technologies.
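
As a sketch of how that burst-and-scale-back behaviour is typically expressed in Kubernetes, here is a HorizontalPodAutoscaler manifest written as a Python dictionary. The deployment name and the thresholds are hypothetical, chosen only to illustrate scaling out under load and back down when demand wanes.

```python
# Illustrative autoscaling/v1 HorizontalPodAutoscaler: scale a hypothetical
# "file-engine" Deployment between 2 and 50 replicas based on CPU load.
hpa_manifest = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "file-engine-hpa"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "file-engine",        # hypothetical deployment name
        },
        "minReplicas": 2,                  # quiet periods: scale back down
        "maxReplicas": 50,                 # sales campaign: burst out
        "targetCPUUtilizationPercentage": 70,
    },
}
```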

 

David: ​So, focusing on security specifically, how does Glasswall use Kubernetes in its product strategy?

 

Paul: So we have an open-architecture approach to how we allow our customers to deploy Kubernetes. That might be as part of a managed Kubernetes service, say AKS from Azure, deploying what we call the Glasswall CDR Platform, which is really just a Kubernetes-based environment that runs our core engine in pods across various nodes within a cluster. That provides industrial scale to our customers, so they can remove threats from files at a breathtaking scale.
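
As a rough sketch of what "running the core engine in pods across nodes in a cluster" looks like mechanically, here is an illustrative Kubernetes Deployment manifest built as a Python dictionary. The names, image and resource figures are hypothetical and are not the actual Glasswall charts.

```python
# Illustrative Deployment: several replicas of a processing engine that the
# Kubernetes scheduler spreads across the nodes of the cluster.
engine_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "cdr-engine"},                     # hypothetical name
    "spec": {
        "replicas": 6,                                       # pods spread over the cluster's nodes
        "selector": {"matchLabels": {"app": "cdr-engine"}},
        "template": {
            "metadata": {"labels": {"app": "cdr-engine"}},
            "spec": {
                "containers": [
                    {
                        "name": "engine",
                        "image": "example.registry.local/cdr-engine:1.0",  # hypothetical image
                        "resources": {
                            "requests": {"cpu": "500m", "memory": "512Mi"},
                            "limits": {"cpu": "1", "memory": "1Gi"},
                        },
                    }
                ]
            },
        },
    },
}
```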

 

For example, you may want to go from having a hundred files an hour processed and their threats removed, to bursting up to, say, a hundred thousand files in the same hour, because of some particular need that presents itself.

 

That's the way Glasswall thinks about the deployment of our core technology, which is essentially the engine SDK, running, as I say, in a pod. And that's really the goodness that we provide to our users: helping them to analyse the threats that potentially exist in those files, then removing those threats and returning a completely safe, rebuilt file back to the end-user.

 

And really, Kubernetes is just a means to an end in being able to provide that utility at incredible scale. The flexibility which the architecture affords our customers and users is really quite impressive, because you can integrate and bolt on additional plug-ins to that environment. If you want to [00:11:23 –inaudible] some kind of user interface, say a Filedrop, so that users can do everything I've just spoken about but in a visual way, that's something which is easy to configure.

 

Or you might want to be more programmatic in how you deploy the technology, say linking it to a software application. To paint a picture here: you've got an application where you're receiving, say, passport photos or resumes, CVs from candidates that are perhaps applying for a job, or want some help in looking for a job. Potentially, some of those files may contain threats; they may be malicious in nature. You need to be able to hand those files over to a service that has greater efficacy than just an antivirus scanner, one that will remove those threats and return the file back to the software application, so it can move forward and actually deal with the data it is receiving from the user.
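
As a sketch of what that programmatic hand-off might look like, here is a small Python snippet that posts a file to a hypothetical file-sanitisation REST endpoint and saves the rebuilt file. The URL, parameters and response handling are invented for illustration and are not the actual Glasswall API.

```python
import requests  # third-party HTTP client

# Hypothetical endpoint of a file-sanitisation service; not the real Glasswall API.
SANITISE_URL = "https://cdr.example.internal/api/rebuild"


def sanitise_file(path: str, output_path: str) -> None:
    """Send one received file (e.g. a CV upload) for threat removal,
    then write the rebuilt, safe version back to disk."""
    with open(path, "rb") as f:
        response = requests.post(SANITISE_URL, files={"file": f}, timeout=60)
    response.raise_for_status()
    with open(output_path, "wb") as out:
        out.write(response.content)  # the rebuilt file returned by the service


# Example: sanitise an incoming CV before the application processes it.
# sanitise_file("uploads/candidate_cv.docx", "clean/candidate_cv.docx")
```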

 

So there are lots of different ways in which we can deploy the technology. As I mentioned, it can be set up as part of a managed Kubernetes service from, say, AWS, Google Cloud or Azure, or you can run it in your own data centre, or in your own private or public Cloud, and manage, say, the Helm charts; we provide Terraform scripts which allow you to run the service in the way that you want it to be configured. So, because it's an open architecture, you have ultimate control as to how that operates within your environment. And that's really leveraging the power that Kubernetes provides -- the utility to the user.
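
To illustrate that style of deployment, here is a hedged sketch of driving a Helm install from Python. The release name, chart path, namespace and values file are hypothetical placeholders, not the actual Glasswall charts.

```python
import subprocess

# Hypothetical release, chart and values; substitute the charts your vendor provides.
release = "cdr-platform"
chart = "./charts/cdr-platform"       # e.g. a chart shipped with the product
values = "values.production.yaml"     # your own configuration choices

# 'helm upgrade --install' is idempotent: it installs the release if it is
# missing and upgrades it in place if it already exists.
subprocess.run(
    ["helm", "upgrade", "--install", release, chart,
     "--namespace", "cdr", "--create-namespace",
     "-f", values],
    check=True,
)
```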

 

David: ​When we were talking about this, offline, you kind of used a really useful analogy to help me kind of understand those points. Can you share that with us again?

 

Paul: Yes, there's a constant debate about whether or not you should be treating your Kubernetes deployments as pets or as cattle. I don't think there's really a right or wrong in this; there's probably a balance that needs to be struck.

 

Let me first explain what I mean by the analogy. You think of pets as something that you love and nurture every day, and spend a lot of time with. You are affectionate towards each other, and you are totally invested in that relationship.

 

Whereas, arguably with cattle, whilst you're very respectful to the animal, you sort of care about that animal, but you are ultimately-- there is a utility there. At the end of the day, that relationship won't survive forever. The animal will move on, and that's just the nature of that particular relationship. There's perhaps less emotional investment, there’s less time being spent with that animal.

 

That's the frame of the analogy. Bringing it back to Kubernetes and how you think about deploying your systems, you really need to decide where you want to be on that continuum. Because it matters.

 

You might invest lots and lots of time in building up a system, tweaking it, patching it, but doing it in a way which isn't necessarily repeatable. That is, you're not doing it as infrastructure as code, to use the jargon, which really means codifying everything, every change you ever make, and writing that down into a script, essentially, that can be run to recreate any changes you make.

 

If you don't have that utility, that ability to recreate what you have, then that investment of time and effort, and maybe the love that you have for that system, may be misguided. Because when disaster strikes, if you can't recreate that environment very quickly, and with certainty that it will be back up in a short amount of time, that can be a challenge.

 

There's a school of thought which encourages people to think about their deployments, in Kubernetes and for that matter any type of system, with less affection for how long that environment might exist. If the worst comes to the worst, if the internet weather or the network weather goes against you and you can't figure out how to return a deployment to a good, healthy state, you stand up a new one very quickly using your infrastructure as code, and then you say goodbye to that cluster, you terminate its existence. You are colder, less caring in some regards, about how that cluster or that deployment in Kubernetes is functioning; you take more of a utilitarian approach to how you interact with those deployments.
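
As a sketch of that "cattle" mindset in practice, here is a small Python wrapper around Terraform that tears a broken environment down and stands a fresh one up from the same infrastructure-as-code definition. The working directory is hypothetical, and in reality you would usually do this from a CI pipeline rather than ad hoc.

```python
import subprocess

# Hypothetical directory holding the cluster's infrastructure-as-code definition.
IAC_DIR = "infrastructure/cluster"


def run(*args: str) -> None:
    """Run a Terraform command in the IaC directory, failing loudly on error."""
    subprocess.run(["terraform", *args], cwd=IAC_DIR, check=True)


# The 'cattle' approach: don't nurse a sick cluster back to health by hand.
# Destroy it and recreate it from the same, version-controlled definition.
run("init")
run("destroy", "-auto-approve")
run("apply", "-auto-approve")
```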

 

The good news is that Glasswall really is agnostic in how we think about, and how we service, those types of design preferences. As you look along that continuum, there are merits to lots of different approaches; it's really for the customer to draw the line as to how they want to configure their systems and their Glasswall solutions. Because of the way we approach our deployments, we can provide that flexibility and utility to customers.

 

And that's, again, coming back to our philosophy of having an open architecture, which fundamentally puts the power into the user's, into the customer's hands to decide how they want to deploy and maintain their systems. We are completely agnostic to that, but it's something that we are very deliberate and cognisant about, making sure we are able to support it.

 

David: ​Okay. And that also illustrates a difference in the way Glasswall approaches scalability from others in the sector?

 

Paul: Arguably. I mean, we take the approach of trying to meet the customer where we find them, and there's going to be incredible heterogeneity in terms of how organisations think about their deployments and what their design preferences will be. Some might just want to receive a machine image, an AMI, for example, for AWS, and that's what they want to stand up, and that's good enough. In some cases it will be deploying into a managed service, as I mentioned, Azure's AKS, so the managed service is taking care of the patching and, arguably, the security posture of that Kubernetes environment.

 

In some cases, customers will want to deploy effectively a Kubernetes-native deployment onto their own bare metal, and that's something which we can support as well, to provide that flexibility.

 

Of course, there are lots of vendors that have similar approaches, but what we're trying to address here is the need for flexibility, and to avoid customers crucially being locked in to any particular paradigm. Or any particular commercial service which then becomes resistant to change, or something which is potentially going to cost them more money than they would choose to spend for a particular type of deployment. We are trying to work with the grain of the customer's organisation, and give them that flexibility as to how they can move forward with Glasswall. 

 

David: Great. So just to bring this to life for us, Paul, can you pick out a typical use case?

 

Paul: So the way that Glasswall offers our services is probably very similar to many other organisations. As I mentioned, at the heart of what we do is a software engine which performs an analysis of files and helps the customer remove the threats from those files. It's quite simple in nature; it's complex in terms of the analysis and the rigour which is applied within that processing unit.

 

But beyond that, the main concern is being able to do that at scale, and to have that scalability, the resilience, to be able to observe what is going right and what's not working, and to be able to self-heal. And these are all features you get out of the box, effectively, from a Kubernetes environment.
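
For readers who haven't seen it, that self-healing behaviour is usually driven by probes on the containers. Here is an illustrative snippet of a container spec, as a Python dictionary, with hypothetical health endpoints: if the liveness probe fails, Kubernetes restarts the container; until the readiness probe passes, the pod receives no traffic.

```python
# Illustrative container spec with health probes; endpoint paths are hypothetical.
engine_container = {
    "name": "engine",
    "image": "example.registry.local/cdr-engine:1.0",
    "livenessProbe": {                    # failing -> Kubernetes restarts the container
        "httpGet": {"path": "/healthz", "port": 8080},
        "initialDelaySeconds": 10,
        "periodSeconds": 15,
    },
    "readinessProbe": {                   # failing -> no traffic is routed to this pod
        "httpGet": {"path": "/ready", "port": 8080},
        "periodSeconds": 5,
    },
}
```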

 

In many ways, we are right in the middle of the road in terms of how we think about the use of this technology. We have a RESTful API which exposes, from a programmatic perspective, how integrators and development teams can actually interact with Glasswall: not having to click mouse buttons to avail themselves of the service we provide, but actually writing their own code, or using code which we provide them, so they can integrate Glasswall into their enterprise solutions.

 

That's very much a normal pattern of how Glasswall thinks about deploying Kubernetes within the customer's environment or within our own hosted Cloud environment. As for examples of use cases that we see quite frequently: we will have customers that, unfortunately, have had some kind of infection, some breach within their environment, and they need to redeploy the infrastructure -- so we are talking about servers, desktop machines, everything they need within that corporate environment to run and maintain their organisation.

 

They're pulling files back from backup, and before they deploy those files back into that clean, safe environment that has just been set back up again following the infection, they want to make sure that any threats that may exist on the backup tape, or whatever the backup storage was, are gone: that those files are now clean of the original malware, maybe ransomware, which had originally infected those files.

 

We see lots of organisations needing to have the capability via partners of Glasswall, to be able to remove those threats from those files. And to do that at scale, and to do that with great urgency. 

 

And so Glasswall, in that example, provides a very scalable solution in a moment of time where there is great need and there are time pressures: to be able to do that, do it quickly, but to a very high standard of security.

 

And then, of course, there is the more everyday business use case, where you have users, employees within, say, an office environment, or working remotely, touching files. They're perhaps unsure about the provenance of a file. Where did it come from? Who's touched it previously? Who's written to that file, potentially creating malicious code that might exist in, say, a Word document with a macro contained in it?

 

What you really want, as an organisation, is to ensure that employees have the ability and the confidence to interact with data, and to do that at speed, to perform their job without security getting in the way of what they do. But in order for that to happen, they need to have the tools and the services at their fingertips to ensure that data files are safe to interact with.

 

And one example that we might cite there is having the ability to drag files into a drop zone, which then automatically removes the threats from the files and puts them back into a location that the user can interact with, so they can go about their business. And not just for one file, but maybe, say, for a thousand files, and to do that in a couple of seconds, or a couple of minutes, so they can have the confidence to move on and go about achieving their objective for the day.
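
To give a sense of what processing a thousand files in a couple of minutes might look like on the client side, here is a rough sketch that submits files concurrently to the same hypothetical sanitisation endpoint as the earlier snippet. The endpoint, folder names and concurrency level are illustrative assumptions, not the actual Glasswall drop-zone implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import requests

# Same hypothetical endpoint as the earlier sketch; not the real Glasswall API.
SANITISE_URL = "https://cdr.example.internal/api/rebuild"


def sanitise(path: Path, out_dir: Path) -> Path:
    """Submit one file for threat removal and write the rebuilt copy to out_dir."""
    with path.open("rb") as f:
        response = requests.post(SANITISE_URL, files={"file": f}, timeout=120)
    response.raise_for_status()
    clean_path = out_dir / path.name
    clean_path.write_bytes(response.content)
    return clean_path


def sanitise_folder(drop_zone: str, clean_folder: str, workers: int = 32) -> None:
    """Drain a 'drop zone' folder, sanitising its files concurrently."""
    out_dir = Path(clean_folder)
    out_dir.mkdir(parents=True, exist_ok=True)
    files = [p for p in Path(drop_zone).iterdir() if p.is_file()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for clean in pool.map(lambda p: sanitise(p, out_dir), files):
            print(f"rebuilt: {clean}")


# Example: sanitise_folder("dropzone/incoming", "dropzone/clean")
```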

 

And so Glasswall, in that setting, is really completely transparent to the user. We are not getting in the way; in fact, that is an example of security helping the employee to move faster, because they're not having to second-guess whether or not something is safe to open.

 

David: Excellent. And just finally for today, Paul, is it possible to look a few years ahead? Three to five years ahead, perhaps, and think about what impact Kubernetes might have on the delivery of cyber security at scale?

 

Paul: Yes, so I think Kubernetes and serverless computing will be the mainstay of IT deployments within organisations, so enterprise applications at scale will be leveraging Kubernetes. We haven't necessarily spoken about this at length, but the adoption of containerised code, software running in containers, typically these days as microservices, has been really revolutionary over the last, I guess, six or seven years. Kubernetes burst onto the scene in 2014, and I think, over the next five years, we are going to see that adoption continue and strengthen.

 

But we're also, obviously, going to see innovation. We are perhaps going to see more substitutes and innovations, so how we deploy this type of technology will certainly evolve, and I would expect the popularity of substitute services to come to the fore as well. But I think, for now, as far as we can see into the future, Cloud-native technologies, serverless computing, Kubernetes and containers are here to stay. And I think they are really helping organisations to move forward in thinking about how they adopt security patterns to achieve a security posture which fits their risk appetite.

 

As I mentioned before, you don't get security for nothing. It doesn't necessarily come for free out of the box with some of these Cloud-native technologies; you have to work at it, harden them, and make sure that you are being very deliberate about how you deploy these environments. And Glasswall is laser-focused on ensuring the security of our solutions is as bulletproof as it possibly can be.

 

But this is a huge opportunity for organisations to, I guess, re-tool and streamline how they think about deploying their software, to be as responsive as they possibly can be to the market and to the needs of users. As far as I'm concerned, I'm very excited about the future, and about how we can ensure that security is baked into the things we're trying to achieve within business.

 

David: ​Paul, always great to talk to you, and thanks for your time.

 

Paul: ​Thanks very much, David.