
RampUp 2020: RampUp for Developers Recap – Enabling Programmatic Integrations and Flexibility through APIs

  • 18 min read


Speaker:

  • Davin Chia – Sr. Software Engineer

The fifth RampUp for Developers session presented LiveRamp’s API strategy, the core capabilities of our platform, and the process of creating RESTful APIs. LiveRamp’s underlying technology platform is based on four subsystems: identity, connectivity, data stewardship, and data access. When decoupled, each of these subsystems can be made valuable to a wide set of customers and customer use cases. Our API strategy enables customers and partners to integrate with us and use each subsystem programmatically, similar to the automated methods of programmatic buying. Once these APIs are created, users will be able to select the areas of the platform that match their needs.

You can watch the video below and read the transcript below that. You can also download the slides from the presentation (which also appear throughout the video) here.



Davin Chia: All right, guys. Save the best for last, right? So, second to last means second best. I’m Davin. I lead the API team at LiveRamp, and today I’m going to talk about enabling programmatic integrations: basically, some of the lessons we’ve learned over the past year as we’ve built APIs and embarked on our programmatic, or platformization, journey. So, I’m going to start by talking a bit more about the background, so everyone appreciates where we’re coming from as we started this journey. Andrew already briefly covered this in his presentation. I’ll go into some detail, and then I’ll get to the meat of the presentation, which is how we’re tackling this, and some of the tooling that we have built to enable us to do this faster. I’m hoping to go into some technical detail on how that works, and I’m going to end with a small success story from our platformization journey thus far, and then interesting things we can look forward to.
Davin Chia: Oh, that was my title slide. Cool. Before we start, some numbers, so everyone has an idea of the scale we work at, at LiveRamp. We run a variable, high-throughput workload: about 100,000 jobs a day to over 500 partners through our edge systems. Our systems facing clients and users handle 50,000 to 400,000 requests a second at any one time. All of this is running on about 50,000 to 90,000 cores a day, and reading and writing about a hundred petabytes a day to get all this done. So, what we’re working with here is really big data, and this makes the platformization challenge difficult. We have to be very thoughtful about how we actually go about doing this.
Davin Chia: So, the background, as Andrew said, and as those of you who have sat through all the presentations are familiar with by now: we at LiveRamp run a pipeline architecture. This diagram, everyone has seen a thousand times now. A typical ETL platform. Files go in. We enrich those files. We use our identity graph to project into different identifier spaces, and then we deliver to different destinations, leveraging all our integrations. So, that’s where we’re coming from. About a year and a half ago, we started on what we like to call API foundations, and this really was a response to the market wanting access to different parts of the pipeline. So, what we did was put APIs on different parts of the pipeline, and enabled clients to programmatically configure and change different settings that would affect the final output of the pipeline. But this also meant that our internal systems were largely unchanged, and this actually was an intentional move. It was more of a tactical response to buy us some time to tackle the real problem, which is, as we all know now, platformization. Or rather: how do we construct APIs for the LiveRamp platform?
Davin Chia: So, the first thing we had to decide was what to build, right? What we think our platform’s ideal form looks like. Our ultimate goal is to go from this pipeline to something that more closely mirrors our concepts and our products, something that clients can use to combine and mix for their own use cases. So what we want is, again, this platformization diagram that we have seen many times. What we need is a north star that all teams can align against and point towards, and this needs to include the product org and their requirements as well. So, we took all the tech leads and product leads to an offsite for three days and generated a massive, complex architectural diagram, which we’re not showing here, because it’s complicated and not helpful. So instead we have this simplified diagram, and the gist of it is that we now have two layers: we’ve separated the architecture into an application layer and a platform layer, with the idea being that any sort of client-facing product has to go through the application teams, and application teams are then free to mix and match the platform layer to achieve their goals. In this case, onboarding is the product being enabled.
Davin Chia: That also allowed us to clarify some inter-team interfaces and assign ownership to what was previously shared. So, now that we have this idea of where we’re going, our aspirational platform architecture, as Andrew likes to call it, the next question we want to resolve is how we go about doing this. There are really two levels to this: an org-level how and a team-level how. The org-level how is standards, and the team-level how is tooling to help developers build against those standards.
Davin Chia: Oh, sorry. The slides are messing up. Cool. So, first let’s talk about the org-wide standards that we decided on. LiveRamp basically decided four things. Number one, we’re going to move from Thrift to HTTP/REST, so all new services internally need to use HTTP and be RESTful or better. We’re doing this because HTTP is ubiquitous. Everyone understands it. If internal APIs speak HTTP, they are also much easier to externalize, and REST because thinking in terms of resources leads to natural clarity of thought. The second thing we decided was to do all development spec first, or rather, contract first, using Swagger. What this means is everything is decided upon via a very well-defined interface that is written down, so all parties involved in the API development workflow understand what’s going on: what the API is supposed to do and how it’s meant to be consumed. And we decided to do so via Swagger. For people who aren’t familiar, Swagger is one of the more popular open-source standards for specifying HTTP and REST interfaces.
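To make the contract-first idea concrete, here is a minimal sketch of what a Swagger/OpenAPI 3.0 document looks like, written as a Python dict for readability. The segments resource and its field names are invented for illustration; LiveRamp’s actual specs were not shared in the talk.

```python
# A minimal, illustrative OpenAPI 3.0 contract for a hypothetical
# "segments" resource. In contract-first development, a document like
# this is reviewed and agreed on before any implementation is written.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Segments API (example)", "version": "1.0.0"},
    "paths": {
        "/v1/segments/{segmentId}": {
            "get": {
                "operationId": "getSegment",
                "parameters": [{
                    "name": "segmentId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {
                        "description": "The requested segment.",
                        "content": {"application/json": {"schema": {
                            "$ref": "#/components/schemas/Segment",
                        }}},
                    },
                },
            },
        },
    },
    "components": {"schemas": {"Segment": {
        "type": "object",
        "required": ["id", "name"],
        "properties": {
            "id": {"type": "string"},
            "name": {"type": "string"},
        },
    }}},
}
```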
Davin Chia: The third thing we decided on was internal API standards. This is really about giving LiveRamp developers a set of assumptions that they can rely on when they’re building APIs. This includes things such as namespacing, API versioning, paths, timestamps, and whether you return IDs or links. The idea is to minimize any sort of problems as you send your teams off to do the work, because the worst thing is people go off to do their work, and when they come back, their APIs don’t mesh together. Establishing these standards prevents that.
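As a rough illustration of what such standards pin down, here is a hypothetical resource representation. The field names and conventions shown (versioned paths, ISO 8601 timestamps, links to related resources) are invented for illustration and are not LiveRamp’s published standard.

```python
# Hypothetical resource representation showing the kinds of things internal
# API standards decide once, org-wide: versioned and namespaced paths,
# ISO 8601 timestamps, and links to related resources instead of bare IDs.
example_segment = {
    "id": "seg_1234",
    "name": "In-market auto intenders",
    "createdAt": "2020-02-25T17:03:00Z",  # ISO 8601, always UTC
    "links": {
        "self": "/v1/segments/seg_1234",  # versioned, namespaced path
        "distributions": "/v1/segments/seg_1234/distributions",
    },
}
```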
Davin Chia: The fourth thing we decided was to use Kong. LiveRamp uses Kong as an API gateway. Kong is what we host all our APIs on, and it handles all the operational overhead of running APIs: think authentication, rate limiting, monitoring, metrics, and so on. The point here is that, as an org, standardizing on API infrastructure helps your teams minimize the mental overhead when they’re developing APIs. Our teams no longer have to think about this, and they can focus on writing the business logic.
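For readers unfamiliar with Kong, here is a hedged sketch of what centralizing that overhead in a gateway looks like: registering a service and attaching a rate-limiting plugin through Kong’s Admin API. The service names, URLs, and limits are placeholders, and the exact options vary by Kong version.

```python
import requests

# Sketch: put an internal service behind Kong and let the gateway, not the
# service, handle rate limiting. Kong's Admin API listens on port 8001 by
# default; names and upstream URLs here are placeholders.
ADMIN = "http://localhost:8001"

# 1. Register the upstream service with the gateway.
requests.post(f"{ADMIN}/services", data={
    "name": "segments-api",
    "url": "http://segments-api.internal:8080",
}).raise_for_status()

# 2. Expose it on a route that clients can call.
requests.post(f"{ADMIN}/services/segments-api/routes", data={
    "paths[]": "/v1/segments",
}).raise_for_status()

# 3. Attach rate limiting so the owning team never writes that logic.
requests.post(f"{ADMIN}/services/segments-api/plugins", data={
    "name": "rate-limiting",
    "config.minute": 600,  # allow 600 requests per minute per consumer
}).raise_for_status()
```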
Davin Chia: And the approach we took is what I like to call seductive tooling. And this is really optimizing for the average experience. I think, as an engineer, we tend to favor edge cases and bad actors when we’re designing something. And we didn’t find that very helpful. What we find more helpful was focus on the average experience and then commit to being around to help out when edge cases form, right? And as a result, is your developers, internal developers also have much more confidence in using your tool. So, the two things that came out of this approach is code generation and a monorepo, which really is our end to end API development workflow at LiveRamp, and I’m going to talk more about that later.
Davin Chia: So, first, let’s talk about code generation. For those of you not familiar, code generation is taking your Swagger spec (remember, we want to do contract-first development) and producing code in whatever language the team wants the code to be in. The main benefit of code generation is that you are enforcing the spec, and how that is done is through the next three points: serialization models, easy clients, and server interfaces. We’re going to go through them briefly, one by one. Serialization models: this is the code that takes the data in an HTTP request and turns it into language-native types. This code is very tedious to write and is often error-prone, because it’s easy to make mistakes when you’re transcribing a spec into code. With code generation, we get this as easily as a single import.
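Here is a hand-rolled sketch of what such a serialization model does; a real generated model (from a tool like openapi-generator) would also enforce required fields, types, and formats exactly as the spec declares them. The Segment type is the hypothetical resource from the earlier sketch.

```python
from dataclasses import dataclass

# Sketch of what a generated serialization model does: turn the JSON body
# of an HTTP request into a language-native type, and back. Generated code
# does this for every schema in the spec, so nobody transcribes it by hand.
@dataclass
class Segment:
    id: str
    name: str

    @classmethod
    def from_dict(cls, data: dict) -> "Segment":
        # A generated model would raise a descriptive validation error if a
        # required field were missing or had the wrong type.
        return cls(id=str(data["id"]), name=str(data["name"]))

    def to_dict(self) -> dict:
        return {"id": self.id, "name": self.name}
```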
Davin Chia: And also because the code is generated, there is guarantees that the spec is being followed. Number two; easy clients. So, this refers to generator clients for consuming the API, instead of having to muck around with if a language is standard HTP libraries. Now, teams can import and generate a client, and start consuming API just like that. And this is also very helpful with testing, because the teams implementing the API spec can use the same clients to test what they implemented. And again, because this is generated from the spec, the clients are guaranteed, or with a high confidence, to consume the API the way it’s meant to be consumed.
Davin Chia: The last thing is server interfaces, and this refers to the interfaces that the team will be implementing, right? And these are the bare interfaces. So, before you implement just the routes, the naming, and the types they are accepting. So, with code generation, this is all generated, and again, the spec is enforced because it’s generated, right? And now, teams are able to just focus on implementing the individual routes, instead of having to transcribe this spec into their code. The one caveat for this is that it’s not out of the box. We have to do some work to figure out the best tooling that will work for an internal ecosystem, and that’s because support for standards vary from language to language and from tool to tool, and at LiveRamp, we’re using open API 3.0 generator.
Davin Chia: So, there’s a clear example on the benefits of code generation. We had an API that we initially wrote by hand, and then later incorporated generator models. And in doing so we were able to reduce the lines of code of this API by a thousand, even though we added more tests to make sure that the generator code was working. So, this is something we do recommend and we do like. All right. So, the other part of this seductive tooling approach is a workflow that integrates all part of the API developer workflow at LiveRamp, and this is in a monorepo, and how we came up with this is we sat down and we thought to ourselves, “How API work happens?” And there are really three phases to API work, right? Or any work, really. Phase one is planning. So, this is thinking about the form of the routes, right? How they integrate with other APIs, what they do. And in this stage, you’re also most likely to involve multiple parties. So, you have your engineers, your designers, your product people, various teams all agreeing on what this API is.
Davin Chia: Phase two: implementation. This is thinking about what language the API is going to be in and what framework you’re going to use, and this is where code generation, which we just spoke about, comes into play. Phase three: consumption of the API. This is when the API is completed and ready for consumption. So, three phases, and like I said, our solution to all three phases was a monorepo. I’m going to go into some detail on how the workflow actually works, but what this is, is a centralized repository for all the specs at LiveRamp. All the implemented API specs, as well as the proposed API specs, live here, which means there is one central place where a developer can go to keep abreast of all API development, which is very nice.
Davin Chia: The other thing about this approach is that now there’s shared tooling and documentation all in one place, which makes it easy to maintain and upgrade. Some people ask me, “Why a monorepo?” And again, that’s because we want to be contract first. With a monorepo, all your specifications are put at the forefront. They become first-class citizens in your API development workflow, and it’s very clear to people what has been built and how to consume it. All right, so let’s go through the workflow. I’m going to go through the happy path, because there are many different edge cases and we don’t have time. The first thing to note is that this monorepo has two branches: the master branch and the master proposal branch. The master branch represents everything that is implemented and can be consumed, and the master proposal branch represents all the specs that we’ve committed to and want to implement. Cool.
Davin Chia: So, all right. Step one: let’s say we have a new API that’s proposed. What we do is branch off the master proposal branch and create a new API proposal branch. Step two: on this branch, the proposal is discussed and reviewed by all the stakeholders. This is where all your PMs, tech leads, and engineers discuss what this API is supposed to do and how it fits in with existing APIs.
Davin Chia: Step three: we approve, or accept, the proposal and commit to making it. This is really phase one, planning. As a result, we merge the spec into the master proposal branch to signify we’ve committed to it, and we also generate all the clients, models, and server stubs we were just talking about. So, this is where code generation comes into play.
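As a rough picture of that generation step, the sketch below shows what a CI job triggered by such a merge might do, assuming the open-source openapi-generator CLI is available. The paths, spec layout, and target languages are illustrative, not LiveRamp’s actual configuration.

```python
import subprocess
from pathlib import Path

# Sketch of a CI step that regenerates clients, models, and server stubs
# for every spec in the monorepo after a merge. Assumes the open-source
# openapi-generator CLI is on PATH; paths and generators are placeholders.
SPECS = Path("specs")       # all proposed/implemented specs live here
OUT = Path("generated")

for spec in SPECS.glob("**/*.yaml"):
    for generator in ("python", "java"):  # one target per supported language
        subprocess.run(
            [
                "openapi-generator-cli", "generate",
                "-i", str(spec),                        # input spec
                "-g", generator,                        # target generator
                "-o", str(OUT / generator / spec.stem), # output directory
            ],
            check=True,  # fail the CI job if generation fails
        )
```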
Davin Chia: Step four: we actually implement the API and do the work that we’ve committed to doing. So, phase two, implementation. And lastly, step five: the API is implemented, and we want to release it for consumption, so we merge the same spec that we’ve implemented into the master branch. As a result, we also generate clients; these are now stable clients that all teams can use. We also update a documentation portal, and this documentation portal is an internal, beautiful, human-friendly way of browsing the specs instead of looking at code, because no developer wants to read someone else’s code.
Davin Chia: Yeah, so here’s a screenshot of our documentation portal. This is actually ReDoc, an open-source tool that we have modified to support multiple Swagger specs. There is a nice dropdown, so a developer can choose whichever spec they are interested in, and finally, the spec itself. You can see the standard, classic three-panel API documentation layout that a lot of big companies like Stripe, Yelp, and Microsoft use. All right. So, that was the API developer workflow, and now I want to talk about a small success story that we have had thus far with platformization. This is the direct-to-distribution API, our first true platform API, which allows a user to distribute only. What that means is, going back to this diagram, instead of having to go through the whole onboarding flow as one previously would, a user can now use this API to hit our distribution systems directly, and thus leverage all our integrations just like that. This is very useful for users that don’t necessarily need all the other features of the pipeline, such as our identity graph, our matching, or our data management.
Davin Chia: This has been very successful. We have seen delivery time go down from twenty hours to one, so our clients are very happy. The interesting thing about this API is that it was originally developed by hand; then we noted down all the problems, built the tooling, consumed the tooling that we built, and dogfooded our way into the development process that I just presented. Now this workflow is being used by teams at LiveRamp to build more APIs, as well as by the distribution team themselves to build more features into this API.
Davin Chia: So, what’s next? We’re hoping to use this workflow to build more APIs. We’re currently in the process of breaking up the pipeline and becoming a platform, and I think we have a couple of interesting APIs along the way; a segmentation API and data marketplace buy APIs are now in the works. Within a year or so, we’re hoping to expose all of these to partners such as you guys, so you can build apps on these APIs, either for your own use cases or your clients’ use cases. Yeah, so that’s very exciting, and please watch out for any news coming. And lastly, I would be curious to know what kind of use cases you guys would like to see supported. Yeah, that’s it. Any questions? I think we have about five minutes. Okay, well, I’ll be around if people have any questions. Oh, yeah?
Audience Question: So, in the direct-to-distribution example you provided, the customer uploads already-known identifiers, since they bypass RampID?
Davin Chia: Yeah, so that’s an API that’s useful if a customer knows what they want to deliver. They have their own identifiers, they have their own segments, and all they want to do is leverage our integrations: delivering to Facebook, MediaMath, Google, and whatnot. Yes.
Audience Question: My second question is: would the GitHub repository list all available APIs today? Where would that information live?
Davin Chia: Ah, so right now this is internal only, but yes, for internal developers it does list all the APIs that we have.
Audience Question: Is there a plan to publish the list?
Davin Chia: Not yet, but soon, so watch out.
Audience Question: How would you change the API spec vetting and proposal process that you described, if you could do it all over again? Are there things you would want to do differently?
Davin Chia: So, the question was how we would change the API process, the vetting and approval…
Audience Question: For a new company that wants to build, or for a team like yours. What would you do?
Davin Chia: That’s a good question. I kind of like this workflow, but I came up with it, or I was part of coming up with it. I think probably making it… or picking better existing tooling. Hopefully, by then, there is better tooling available to make this easier. We had to do quite a lot of stuff in-house, adapting existing open-source things. So, if I were a new company, and I didn’t necessarily have three or four engineers that I could devote to developing this, I would try to really leverage what Google or Amazon already has to push us forward. But I’d still focus on doing contract-first development, because we have found that, in the process, it really does help clear up any sort of miscommunication.
Audience Question: How are you leveraging Swagger and OpenAPI for validation?
Davin Chia: Yeah, that’s a good question. So, that really goes back to the code-
Audience Question: What was the question?
Davin Chia: Oh, sorry, the question was how we’re leveraging Swagger and OpenAPI to validate all of this. So, I actually had a slide that was an example of Swagger, but I think that was taken out. Swagger really is a complete specification of the HTTP interface that you’re trying to build, so it’s very detailed. Generally, you can think of it as some metadata for versioning of the API, and then the routes you’re trying to build: the base name, all the HTTP operations you’re trying to support, your GETs and PUTs and whatnot, and even the request and response shapes. So, it’s very detailed, which means it’s verbose, but its completeness means that just from the spec, if you read through it, you know everything that you need to do.
Davin Chia: Right? So, that’s step one. Step two, the validation really comes from the code generation part of it. We are leveraging open-source tools, which generate code off that spec. And yes, nothing is foolproof, but because it’s open source and used by thousands of companies, we have very high confidence that a lot of the major bugs have been caught. There are some here and there; when we started this process, we had to do our own vetting, so you’re right, there’s some manual work. But the generators also have validators to check that the spec you’re given is a valid spec, and the act of generating and having all that generated code really helps validate the baseline implementation, especially the clients and the server interfaces you’re trying to implement. Long-winded answer. Yeah.
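For a sense of what that spec-validation step looks like, here is a minimal sketch using the open-source openapi-spec-validator package; the spec path is hypothetical, and newer versions of the package expose a slightly different API.

```python
import yaml  # pip install pyyaml openapi-spec-validator
from openapi_spec_validator import validate_spec

# Sketch: check that a spec is a valid OpenAPI document before generating
# code from it. Generators typically run a check like this themselves.
with open("specs/segments-api.yaml") as f:  # hypothetical spec path
    spec = yaml.safe_load(f)

validate_spec(spec)  # raises a validation error on an invalid document
```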
Audience Question: What’s the level of automation testing coverage for your APIs?
Davin Chia: That’s a good question.
Audience Question: What was the question?
Davin Chia: The question was, what’s the level of automated testing coverage? I would say there is some testing coverage, and the automation varies from API to API; it depends on the team, actually. You can broadly break our APIs into two categories: real-time APIs, and APIs that kick off batch workflows. For the first category, the automated test coverage is much better, because these are easier to test; they return in real time, so it’s easy to set up a staging environment, and there are fewer dependencies. For the second class of APIs, the ones kicking off batch workflows, it’s not where we want it to be. And again, it’s the pipeline that makes setting up all the staging environments difficult, so platformization should help. But yeah, it touches too much. There’s a lot of overhead with that, and we haven’t really found a good way to tackle it. I don’t think any company really has, because there’s also the cost, right? There’s both correctness testing and scale testing that you need to test for. Yeah. Yes?
Audience Question: How do you test the robustness of the API, like external capabilities…
Davin Chia: So, the question was how do we test the robustness of the API?
Audience Question: …to things like…
Davin Chia: Yeah. So, I think that depends on the performance requirements of the APIs. I was briefly on pixel serving, so I can speak to how we do that. Most of our APIs don’t have such strict performance requirements; we see maybe a hundred or a thousand requests per second, but pixel serving sees 40,000 to 400,000, like we said. A lot of that is very good integration testing and, essentially, doing canary deploys. At that scale, unless one is willing to spend a lot of money, you can’t really test the full scalability of the system. What we actually do, or what I used to do a year and a half ago, was canary deploys: monitor over one or two days whether that specific part has any sort of correctness issues or any sort of scalability issues, like maybe we introduced a memory leak, or CPU usage is going up, and all that. Yeah. Okay. Well, thank you, guys.

Interested in more content from RampUp?

Clicking on the links below (to be posted and updated on an ongoing basis) will take you to the individual posts for each of the sessions, where you can watch the videos, read the full transcript of each session, and download the slides presented.

RampUp for Developers’ inaugural run was a great success and drew a wide variety of attendees. Many interactions and open discussions were spurred by the conference tracks, and we are looking forward to making a greater impact with engineers and developers at future events, including during our RampUp on the Road series (which takes place throughout the year, virtually and at a variety of locations), as well as during next year’s RampUp 2021 in San Francisco. If you are interested in more information or would like to get involved as a sponsor or speaker at a future event, please reach out to Randall Grilli, Tech Evangelist at LiveRamp, by email: [email protected].