Hosted by Charles Lowell and Taras Mankovski
October 3rd, 2019.
In this episode, Charles and Taras discuss "big ideas" and all the things they hope to accomplish at The Frontside over the next decade.
Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrades, and the data layer.
This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.
Transcript:
CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right. Today, we're going to talk about big ideas and the future of Frontside.
TARAS: Yeah, starting with: is Frontside a good idea?
CHARLES: No, we're just going to talk about: how do you know that an idea is good? We've touched on it a couple of times before. How do you go about validating a big idea? How do you discover a big idea? What do you do?
TARAS: And then, even when you have big ideas like BigTest, what does that mean for the world? How do you make a big idea into an idea that a lot of people like, agree with, and actually use day to day?
CHARLES: Yeah. It turns out that it's not easy. There's a lot of work involved with that. A lot of it crystallized around the conversations we're having about what exactly BigTest is, and recognizing that BigTest isn't really a code base. It's not a toolkit. Even though it does have aspects of those things, it really is an idea. It's an approach. It's a way of going about your business, right?
TARAS: Yeah. Especially when you put BigTest functionality in place, when you start doing big testing and you put together things like Mocha and Karma, BigTest in that kind of test suite is really just interactors and the idea of big testing. There's nothing else. All the interactors do is give you an easy way to create composable [inaudible] objects, so you don't have to write selectors for each element in a component, especially when components get composed. But that's a very small piece of functionality that does a very specific thing. BigTest itself takes a lot of work. We had firsthand experience of this on the project we're working on right now. We are essentially introducing Ember's acceptance testing, but for React, in a React project, and having to explain to people what it is about this that makes it a really good idea, and having people in the React world see that this is actually a really good idea. It's kind of incredible. When you try to sell something to somebody and convince them that this is a good idea is when you realize how inadequate your understanding of the idea really is. You really have to start to break it down and understand what it is about this that is a really big idea.
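To make the interactor idea concrete, here is a minimal sketch in the style of the @bigtest/interactor package from around this time; the login form component and its selectors are invented for illustration, and the decorator syntax assumes a Babel setup:

```js
// A sketch, not from a real app: an interactor for a hypothetical login form.
import { interactor, clickable, fillable, text, isPresent } from '@bigtest/interactor';

@interactor
class LoginFormInteractor {
  fillEmail = fillable('input[name="email"]');
  fillPassword = fillable('input[name="password"]');
  submit = clickable('button[type="submit"]');
  error = text('[data-test-error]');
  hasError = isPresent('[data-test-error]');
}

// The interactor is scoped to the component's root element, so when the
// component is composed into a bigger page, tests reuse it instead of
// rewriting selectors for every nested element.
const loginForm = new LoginFormInteractor('[data-test-login-form]');

// In a test, actions chain and are awaitable:
// await loginForm.fillEmail('me@example.com').fillPassword('secret').submit();
```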
CHARLES: Yeah, I completely agree 100%. Because to be clear, we've actually been doing this now for almost two years. So, this is not the first React project where we've put these ideas in place. But I think in prior examples, we just kind of moved in and said, we're going to do this because this is what we do. And we have firsthand knowledge of this working because we've operated in this community where it's just taken on faith that this is the way you go about your business: you have a very robust acceptance test suite. And because of that, you can experience incredible things. When you and I were talking before the show, we were commenting on how, inside the Ember community, you can do impossible things because of the testing framework. You can upgrade from Ember 1 to Ember 3, which is basically a completely and totally separate framework. You're completely and totally changing the underlying architecture of your application, and you can do it in a deterministic way. That's actually incredible.
TARAS: And what's interesting too is that the React core team kind of hinted at this in their blog post about moving to Fiber, because one of the things they talked about there is that knowing how the system is supposed to behave on the outside allowed them to change the internals of the framework. The test suite for the React framework treats the system as a black box, and that allowed them to change the internals while the test suite essentially stayed the same. So this idea of acceptance testing your thing is really fundamental to how the Ember community operates, but other communities have this as a big idea as well. It's just applied in different areas.
CHARLES: Right. And so applying it to your actual application, this is accepted practice in one place, but not in other places. You can make your argument and say, "I have experienced this and it's awesome. And you can do architectural upgrades if you follow the patterns laid out by this idea." That works if you have a very, very high-trust relationship, but you don't always have that, nor should you. Not everybody is going to trust you out of the box. So, you're going to have to lay out the arguments and really be able to illustrate, conceptually and practically, how this is a good idea, just so that people will actually give it a try.
TARAS: And it takes a lot. One of the reasons why it was easier for us to introduce big testing on the current project we're working on is that we had already done the implementation in a previous project. So when we were convincing people this is a good idea, we could say, "Look, your test suite can be really good. It's going to be really fast. And look, we've done it before." The actual process of convincing people that a big idea is a really good idea is a complicated process that requires a lot of work, and it partially requires experimentation. You have to actually put an implementation in place, and you have to build on your successes to get to a point where you can convince people that this is actually a really good idea -- people who have not heard about this idea, and especially people whose communities have figures of authority with counter views. For example, quite often when you bring up BigTest, people will reference a blog post by a Google engineer that talks about how functional testing or acceptance testing is terrible. And for a lot of people, a Google engineer means a lot, and the person makes really good points. But that's not the complete idea. The complete idea is not just about having an acceptance test suite. It's a certain kind of acceptance test suite. It's an acceptance test suite that mocks out at boundaries, so you don't make API requests to the server; you make an API request to a [inaudible] server using something like Mirage or whatever that might be. So, the big idea has nuance that makes it functional. But getting people who are completely unaware, who don't necessarily look up to you as an authority, to believe you -- "Yes, that actually sounds like a really good idea" -- is not a trivial task.
CHARLES: No, it's not. Because the first time you try and explain it, you're arguing based on your own assumptions. So, you're coming to it safe in the knowledge that this is a really, really good idea based on your firsthand knowledge. But that means you're assuming a lot of things. You're assuming a lot of context that you have that someone else doesn't and they're going to be asking questions. Why this way, why this way, why this way? And so, you have to generate a framework for thinking about the entire problem in order to explain the value of the idea. And that's something that you don't get when it's just something that's accepted as a practice.
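For readers who haven't seen the boundary-mocking Taras described, here is roughly what it looks like with Mirage; the route, model, and seed data are illustrative, not from any real project:

```js
// A rough sketch of "mocking out at the boundary" with Mirage (miragejs).
import { Server, Model } from 'miragejs';

new Server({
  models: {
    user: Model,
  },

  seeds(server) {
    server.create('user', { name: 'Alice' });
    server.create('user', { name: 'Bob' });
  },

  routes() {
    // The app's fetch('/api/users') never reaches a real server; Mirage
    // intercepts it in the browser and answers from an in-memory database.
    this.get('/api/users', (schema) => schema.users.all());

    // Failure scenarios can be configured per test, too, e.g. answering
    // this route with a 500 to see how the UI copes.
  },
});
```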
TARAS: I think simulation is actually a really good example of that. If you haven't had experience with Mirage, if you don't know what having a configurable server in your tests does for you, you will probably not realize that similar ideas apply to, for example, Bluetooth: when you're writing tests for your Bluetooth devices, or for an application that interacts with Bluetooth devices, you actually want to have a simulation layer for Bluetooth so that you can configure it per test, similar to the way you would configure Mirage for a specific test. You want to be able to say, "This Bluetooth device is going to exist," or, "I have these kinds of Bluetooth devices around me, they have the following attributes. They might disconnect after a little while." There are all kinds of scenarios that you want to be able to set up so that you can see how your application is going to respond. But if you haven't seen simulation with something like Mirage, you're going to be going, "I don't know -- why would this be helpful?"
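A purely hypothetical sketch of what a Mirage-style Bluetooth simulator could look like; none of these names come from a real library:

```js
// Hypothetical: mimics the shape of Mirage's per-test configuration,
// applied to Bluetooth scanning.
function createBluetoothSimulator() {
  const devices = [];
  return {
    // Declare a device the app will "discover" in this test.
    addDevice(device) {
      devices.push(device);
    },
    // The app's Bluetooth layer scans against this list instead of a radio.
    scan() {
      return devices;
    },
  };
}

const simulator = createBluetoothSimulator();

// A healthy device with the attributes the test cares about...
simulator.addDevice({ name: 'Heart Rate Monitor', services: ['heart_rate'], rssi: -40 });

// ...and a failure scenario: a device that drops its connection mid-test.
simulator.addDevice({
  name: 'Flaky Thermometer',
  services: ['health_thermometer'],
  disconnectAfterMs: 5000,
});

console.log(simulator.scan().map((d) => d.name));
// => ['Heart Rate Monitor', 'Flaky Thermometer']
```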
CHARLES: It seems like a lot of work. One of the things that we've been working on, as I said, is trying to come up with a framework for thinking about why this is a good idea, because we can't just assume it. It's not common knowledge. For example, one of the things that we've been developing over the last year, and more recently in the last few months, is trying to understand what makes a test valuable. At its essence, what are the measures that you can hold up to a test and say this test has X value? Obviously, something like that is very, very difficult to quantify. But if you can show that and say, "This test has these qualities and this test has these qualities," then we can actually measure them. Then people will accept it a lot more readily and try it a lot more readily.

So, the first axis we're playing with right now is speed. Tests that are fast have an intrinsic value; or rather, when a test is slow, the upside that you gain from it is very quickly bled away or offset. I can have a very comprehensive test that tests a lot of high-value stuff, but if it takes three days to run, it's going to be basically worthless.

Another axis on which you can evaluate a test is coverage. I'm not talking about coverage of lines of code. I'm talking about use cases and units of assemblage. There's the module. Those modules are stitched together into components. Those components are stitched together into applications. Those applications are downloaded onto browsers, and I would consider each of those a different unit of assemblage. Your application running on Firefox is a different assemblage than your application running on Chrome, and so on. So you have this axis, which is the coverage of your units of assemblage. That's another way you can evaluate your tests. If I have a test that runs only on Node in a simulated DOM, there's an absolute cap on the value of that test, and it cannot rise above a certain point.

And another axis that we've identified is isolatability: the ability to run a test on its own. If I have a test suite comprising 1,500 tests where, if one fails in the middle, I have to restart the suite from the beginning, that's going to decrease the value. Maybe it's related to speed -- being able to run the tests without having to install a bunch of different dependencies. So that's another axis. To really understand the variables there, you have to be very systematic about thinking about the tests, so that you can take your idea and explain it to someone who's not coming at it from first principles. Imagine you have to explain an if statement to somebody who's never programmed with an if statement before.
TARAS: That is going to be very difficult.
CHARLES: It's going to be difficult, right?
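To make Charles's three axes concrete, here is one purely illustrative way to picture them; the formula and weights are invented, not a real model -- the point is only that each axis caps or scales a test's value:

```js
// Purely illustrative scoring sketch for the three axes described above.
function testValue({ runtimeSeconds, assemblageCoverage, isolable }) {
  // Speed: the slower the feedback loop, the faster the value bleeds away.
  const speedFactor = 1 / (1 + runtimeSeconds);

  // Coverage of units of assemblage, 0..1: a module-only test in a simulated
  // DOM caps out lower than the assembled app running in a real browser.
  const coverageFactor = assemblageCoverage;

  // Isolatability: a test you can run on its own, without restarting a
  // 1,500-test suite or installing a pile of dependencies, is worth more.
  const isolationFactor = isolable ? 1 : 0.5;

  return speedFactor * coverageFactor * isolationFactor;
}

// A fast jsdom unit test vs. a slower full-browser acceptance test:
console.log(testValue({ runtimeSeconds: 0.1, assemblageCoverage: 0.2, isolable: true }));
console.log(testValue({ runtimeSeconds: 2, assemblageCoverage: 0.9, isolable: true }));
```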
TARAS: Yeah. In general, it's very challenging to go beyond an experience that somebody had -- overriding somebody's experience by conveying your own personal experience is very difficult. And getting someone to experience something themselves is very difficult. So a lot of the work that we've been doing over the last little while has been breaking down these problems into a way of understanding them, so that we can actually explain why these things are important. We've had to do this recently with the Bluetooth work that we've been doing. We have a partner that's implementing a Bluetooth abstraction for mobile devices, and we're trying to convey to them the value of being able to simulate devices. That's something that we saw with Mirage. We knew that being able to simulate devices in tests in different environments is extremely valuable. We know this firsthand from our experience. But trying to justify to them why we think this is important, and why they should rejig all of their thinking about how they're going to architect this middle layer between the native APIs and the application APIs -- why they should rejig this layer so that it will allow for simulation -- convincing very technical, very knowledgeable and experienced people that these ideas are important requires a fundamental understanding of the value that they provide.
And I think this touches on what Frontside is really going to be doing going forward: creating conditions for this kind of thought to be cultivated, creating a business environment where our clients gain benefit from this kind of very deep insight -- insight that transcends the source of those ideas. Because there are certain ideas that are really fundamental, and they're beautiful ideas. For example, in the React world, a functional component is a really beautiful idea. It's a really simple concept. And going along with what you were saying, these kinds of fundamental ideas will drive you to the next point you couldn't even imagine. Think about the work that the React core team has done. They set out the functional component as a primitive, but it wasn't possible for a long time to really make the functional component a reality until they'd gone through the process of actually understanding what connects all the dots, so that you could eventually get to a point where, "Oh, actually this [inaudible] API is a way for us to enable the fundamental concept of having a component." Not only is it simple, but it actually works. You can write a performant application using this fundamental concept. So, the work that we're going to be doing is, first of all, making it possible to have conversations at this level. The people we're going to be working with at Frontside -- clients who work with us, companies that partner with us -- it's all going to support creating this environment where we can have these kinds of ideas, and then extracting these great ideas from the frameworks and actually making them available across frameworks. You don't have to be locked into a specific community to be able to benefit from them.
One of the first steps we're working on now: we have an understanding of BigTest, we believe it's a good idea, we have ways to justify why it's a good idea, and we have clients who have benefited from it. Now we're in a position to fill in the gaps that keep BigTest from being something that can be used in any framework. We want it to be really easy for people to say, "I really love acceptance testing, but I have to work in a React project, or a Vue project, or an Angular project." It's like, "No problem. I have BigTest. BigTest is going to give me what I want." BigTest the idea, but it's going to come with all the little pieces you need to assemble the functional test suite that gives you the benefits that Ember's acceptance test suite provides for Ember projects.
CHARLES: I mean, why should you have to compromise on these things? If it is a good idea and it is a fundamental idea -- whether it's how you manage concurrent processes, how you manage testing, or how you manage routing -- why should you have to compromise? Why should you have to say, "You know what? I love..." I don't know, what's a comparable system? "I love air conditioning." Why should I have to go into a car that doesn't have air conditioning, just because every single car has a different air conditioning system, or every single house has a different air conditioning system? I'm showing the fact that I live in Texas here by using this example. But we've developed air conditioning systems that are modular so that they can be snapped onto pretty much any house, and you don't have to build a custom thing for the circulation and refrigeration of air every single time, or have it just be part of the house, like a house that ships with ducts. It's like, no, we want to separate out that system.
TARAS: You don't have to commit to living in a specific area to get the benefit of being comfortable. The good idea of air conditioning is not restricted to just one area. You can experience it wherever you go and have it be available wherever you go. If you're moving to a new house, you can install an air conditioner and now you're going to have cool air. That's the theme of what we're going to be doing. But there's so much work to do, because we're only partway there. If we were to look at whether BigTest as a big idea is used and well known in our industry, we're probably at maybe 5% on that. If we add what it takes for us to convey the idea and convince people it's a good idea, that probably adds another 15%. So the next step is creating some tooling around it that makes it really easy for people to consume. Because right now, what's really unfortunate is that BigTest is available, but setting it up in a React project -- doing it with create-react-app right now -- is kind of difficult. So we're going to make that easier. But then companies still need to make a choice between the ergonomics [inaudible] Cypress, or the speed and the comfort and the control that you have with BigTest. We have to basically eliminate the need to make that choice by taking the tooling that you get in Cypress and making that tooling available for BigTest projects.
So that is going to bring us up to maybe 60%. And then the remaining part is taking the software that enables the big idea and making it work across all the different frameworks, so that it's really easy to install. On Angular, it's just going to be: add a plugin, and it gives you big testing. You don't have to rely on their unit testing, and you don't have to rely on end-to-end testing using Selenium. You just have BigTest, and it works just as well as it works in React and just as well as it works in Ember, so you get the benefits of that. To go through the whole process and bring this into the world, we have a lot of work to do, because every part of the architecture of building modern frontend applications requires this level of effort. If you look at routing, routing is extremely inconsistent across frameworks. There are different opinions for every framework; there are different opinions on how to implement it. And that's problematic both on an individual level and on a specific project level, because it's so hard to know what you should use. And in many cases, it's not complete. What constitutes a routing system in React looks different than what it would be in Ember, and it looks different than what it is in Angular. And each one of them has its own trade-offs. So if you find yourself in a situation where you've been using React and now you have to use Angular, you have a steep learning curve. But this also has an interesting effect: we have a world now where micro frontends are a thing. There are really good reasons why companies might want to use multiple frameworks for a single platform, because different frameworks have different benefits, and different teams, based on their location, might prefer a different framework. There are lots of different reasons why companies choose, and we want developers to be able to choose, a specific framework. But how do you do that? When you have a micro frontend architecture, you need to provide an SDK that provides a consistent developer experience, and that developer experience then needs to enable a consistent user experience, regardless of what framework the team decides to use. So now routing needs to be more robust. Routing without strong concurrency primitives is going to be very limited, and it's not going to be as portable across frameworks as routing that has very robust concurrency primitives.
So now we start looking at concurrency. We've got concurrency primitives that are different for each framework. We've got ember-concurrency, we have redux-saga, we have observables, and React is introducing Suspense and hooks. Each one of those things is different. So we need a concurrency primitive that is framework agnostic, that is externalized, that we can consume to build stuff. That's a piece of the routing system. There are other things, like state machines. State machines are an important part. If you look at an authentication system, the core of an authentication system is a state machine. How do you express a state machine in a way that is framework agnostic, in a way that works in every framework? That's an area that we need to explore.

So, there's a lot of work for us to do to take these big ideas and externalize them from the frameworks, so that they can be consumed by frameworks and by the applications that use those frameworks. That is a lot of work. And that's essentially what Frontside is setting out to do: hold these good ideas, extract the pieces that are actually core to them, make them available independently, and then see what that provides. In a world where we have all of these pieces in place -- the granular little pieces we need for a really strong concurrency primitive, a great routing system that uses those concurrency primitives, a state machine mechanism that works with any framework and has very comfortable APIs -- what kind of collaboration will that allow? What is the world that that will create? For those of us in the Ember community, we saw firsthand what it means to stand on the shoulders of giants. We have wonderful, brilliant people creating really awesome tools. Now, if that same level of collaboration was happening in a way that was framework agnostic, and we were all standing on the shoulders of giants and helping each other create something like this, what kind of world would that create? That's what Frontside is setting out to find out.
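Here is a sketch of what a framework-agnostic state machine for authentication might look like; the states, events, and transition table are invented for illustration, and nothing in it depends on React, Ember, Angular, or Vue:

```js
// An illustrative, framework-agnostic state machine for authentication.
const authMachine = {
  initial: 'unauthenticated',
  states: {
    unauthenticated: { LOGIN: 'authenticating' },
    authenticating: { SUCCESS: 'authenticated', FAILURE: 'unauthenticated' },
    authenticated: { LOGOUT: 'unauthenticated', TOKEN_EXPIRED: 'authenticating' },
  },
};

function transition(machine, state, event) {
  const next = machine.states[state][event];
  if (!next) {
    throw new Error(`Invalid transition: '${event}' in state '${state}'`);
  }
  return next;
}

// Any framework can drive this: a React hook, an Ember service, or a Vue
// store just holds the current state and calls transition().
let state = authMachine.initial; // 'unauthenticated'
state = transition(authMachine, state, 'LOGIN'); // 'authenticating'
state = transition(authMachine, state, 'SUCCESS'); // 'authenticated'
```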
CHARLES: Yeah, and it's going to be fantastic. I should say it's going to be really, really fun. It's going to be challenging. Like you said, there is a lot of work to do.
TARAS: A lot of conversations to have. I mean, it's difficult to have these conversations, because we all have good reasons for thinking certain things, and a lot of the time it requires understanding each other to find common ground. Core teams do this. It's getting everybody to align on where we want to go, providing leadership, and doing all of that work in a way that actually makes this possible. Because creating an environment where this kind of research happens requires a lot of money. Charles and I cannot do this alone. A small group of people probably cannot do this either, because we need real use cases. We need real projects to make sure that these ideas are not dream implementations or vaporware. We need the pressure of real projects and real client relationships to create software that is going to make a huge difference for their developers. So we need those relationships with clients so that we can create the conditions for this kind of thing to emerge. That also needs to happen to make this kind of situation possible. So, there's a lot of work to do, but it's a really worthwhile cause.
CHARLES: And I think that's one of the reasons we start with testing as the most important use case, because you don't want to just be saying, "Hey, do you want to come help us fund research to make everything better?" There is real pressure -- there's financial pressure -- and there's a duty to make sure that if a project has a deadline, that deadline is met. You want to come in on cost and on time. That's the goal of every project, and you want to strive to do your best to achieve that. And testing is an area where the research almost always pays off -- I mean, where it clearly has. You can invest a lot of money and get a lot of reward in return, because you're losing so much money by not testing, or by focusing your testing investment on lower-value targets.
TARAS: For people who are familiar with acceptance testing, I think that's just a no-brainer; acceptance testing just enables that. By putting BigTest as a first objective, and acceptance testing as a first milestone, it sets us up to benefit the way the Ember community has benefited, where acceptance tests became the mechanism that made it possible for the framework to change. And so, we're setting up the same conditions here. When you start working with us, or with any of the companies that work with us, the first benefit you get is acceptance testing. Now developers are more productive, because they have certainty that if they broke something, they will know. But it also gives us a way of introducing change in a systematic way. So, if we come up with a new routing mechanism, we can start to refactor our React Router setup to use it, and we can do it safely, because we have our acceptance tests in place. So in many ways, it's the first step. And it's a great first step for business and a great first step for technology. That's a really awesome overlap.
CHARLES: In the podcast with Dylan Collins, we talked about how step one is to make experimentation cheap, because experimentation can be wildly expensive, and you can throw away a lot of money if you're experimenting without constraints. And there is nothing like a great application/acceptance test suite to put those constraints in place in a completely and totally unambiguous way. So I think that's actually a great summary of our next decade.
TARAS: Yeah, I think so too. I think it's a good step that shows us the perfect next step. And I think it's going to show us a future we can't really imagine right now. But I do know one thing: this future is going to include a lot of people. Fundamentally, what we're setting out to do is a big idea. And big ideas are brought to the world by a lot of people. So I think one of the things that's important is that from day one, we include everyone in the process. For anyone who's interested in these big ideas, we want to make it possible for you to participate. That's the reason why Frontside is going to have an RFC process. On the Frontside website, we're actually going to start writing and publishing RFCs. The first RFC is going to be for BigTest, laying out why BigTest is a good idea for the frontend and for our industry. And as we start talking to people, there are actually lots of applications beyond the frontend. So, we're going to focus on writing these RFCs and sharing them with the world so that we can include as many people as possible, so everyone can participate. And ultimately, hopefully, that will have an influence on our industry and allow us to leverage these big ideas in a way that is not tied to any particular source of these brilliant ideas.
CHARLES: Yeah, hat tip to the Ember community for the whole RFC thing, which I guess they didn't invent either.
TARAS: Yeah, they borrowed it also, sorry. But I mean, it's been adopted everywhere. And I think it is a really interesting way of making people part of your process -- making it our process, "our" meaning everyone who works with us. Whether you work for Frontside, you're a client of Frontside, or you work with a company that is partnered with Frontside, whatever that might look like, we want to start to create places where we can have these conversations and together surface these great ideas and make them part of our tool sets.
CHARLES: I think that's a perfect way to wrap up. Like I said, you pretty much described our next decade. Maybe if we get more people, it's going to take significantly less time and we'll move on to the next thing even more rapidly. I am fully pumped. I'm ready to see this. There's a lot of work to do, and I feel great about getting to it.
TARAS: I'm very excited about all the work that we're going to be doing, so let's get to it.
CHARLES: Yup. All right. We'll see everybody next time.
Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at contact@frontside.io. Thanks. See you next time.