Transcript: How Smell Could Transform Robotics: The Next Frontier in Sensing

Table of Contents

Interview

[00:00:00] Start

[00:00:00] Audrow Nash: Is there a sense we've been missing in robotics? What about smell? I talked with Kordel France, who's the founder of Scentience AI, where they're working to make smell as accessible to robots as vision is today. But why smell? I hadn't thought about it much before this interview. A lot of information is communicated through smell. Is the fruit ripe? Does the bag have an explosive in it? Does someone have lung cancer or other health problems? It's amazing what you can infer about the world if you have a good nose, and I think we'll see a wave of startups in the next few years building robot companies around the sense of smell, from agriculture to security to health care and more. Enabling that is exactly what Kordel France is working on with Scentience AI. You'll like this interview if you're curious about a new type of sensor and the big impact it could have on robotics, and especially if you want to be part of the first wave of robotics startups built around applications leveraging a good sense of smell. So grab a snack, get comfortable, and here's my conversation with Kordel France.

[00:01:22] Kordel France & Scentience Overview

[00:01:23] Audrow Nash: Hi. Would you introduce yourself?

[00:01:24] Kordel France: Hello. I'm Kordel France. I'm a roboticist, AI engineer, and founder of Scentience.

[00:01:32] Audrow Nash: And Scentience is a new thing for you. Will you tell me about where you were just a little bit ago, and then how you came to Scentience?

[00:01:41] Kordel France: Sure. A few years ago I started a machine learning company focused on olfaction, the sense of smell. You can think of it this way: a camera is a digital vision sensor and a microphone is a digital audio sensor, and we were trying to build a digital scent sensor, geared particularly toward medical applications. That company was called Data Diagnostics. I was one of the founders, and we built a product that could detect different chemicals on the human body indicative of different medical conditions. Coronavirus was one of them, pneumonia was another, lung cancer was another. Since then I've moved outside of the medical field to the company Scentience, working to build the sense of smell for robots.

[00:02:39] Smell’s Role in Robotics

[00:02:39] Audrow Nash: I like the name, so it's like "Sentience", but it's "Scentience" with the "scent" part. Oh, yeah. That's great. So tell me about smell more generally. How does it work? And why is it such an unappreciated sense, especially in robotics, it seems?

[00:03:01] Kordel France: So the sense of smell is really nothing new to the world as far as sensors go. We all have carbon monoxide detectors and smoke alarms in our homes, and you can think of those as electronic noses that look at one chemical in particular. However, they're really not super sensitive; you need a lot of smoke for them to go off. And if you look at other creatures around us in nature, like moths, dogs, insects, even cellular organisms, they all have olfactory senses that help them navigate. Especially dogs: their primary sense is olfaction. We don't really have mechanisms to replicate the accuracy and capability that dogs have to navigate and interpret the world and the air around us. And olfaction is a really interesting sense because you can observe the chemicals in the air around you in three ways: you can see them optically, you can detect them acoustically, and you can react them with chemical reactions. Now, there are different theories for how humans smell, but most argue that a chemical reaction occurs in the nose that sends a signal to the brain, which helps you understand that you're smelling baked bread or strawberries or something like that. With Scentience, we're trying to build sensors that allow you to quantify the air around you in a very, very sensitive manner, at parts-per-trillion or even parts-per-quadrillion resolution. To give you a sense of scale, humans can detect at around a part per thousand or a part per million, depending on the compound, so we're going much, much deeper. And we're trying to make it very rapid, so we can instantly observe what the air around us is telling us; there's so much data in the air around us. A lot of sensors right now are not super fast. They take seconds, minutes, or sometimes even hours to get a response back from a stimulus, and we're trying to fix that problem with our sensors.

[00:05:19] Smell Sensor Problem Solving

[00:05:19] Audrow Nash: That's awesome. Okay, I feel like there's a lot to unpack there. But why are we interested in this? What kinds of problems can you solve if you had a good sense of smell? In theory, I would think all the things dogs are trained for, like bomb detection or drugs or whatever it might be. But what are some of the problems you imagine being able to go after if you have really good scent sensors?

[00:05:47] Kordel France: Sure. You laid a perfect foundation for it. A lot of the things that dogs are used for, detecting explosives or drugs, border security, that's pretty low-hanging fruit for us. That's one thing we're going after. If you can tell a drone or a robot, "go find the explosive substance," it could clear a soccer stadium or a school or whatever without endangering a dog or a human, especially if you can swarm them together. Some other exciting applications are in aerospace. A lot of the ways to determine signs of life: they send rovers or samplers to try to collect soil and bring it back to observe the chemical composition, or they try to analyze it right on site. And the current instruments are very heavy, they take a lot of power, and they're very sensitive; space is not the best environment for them, because they've been designed for the lab. With our sensors, you can shrink things down into a much smaller size, weight, and power package that allows you to get data back very quickly. Every kilogram that goes up on a rocket is literally millions of dollars. So if you send something up that's the payload of a fridge and we can compress it down to something that's the payload of an energy drink can or a pop can, that's much better, right?

[00:07:20] Audrow Nash: Yeah.

[00:07:21] Kordel France: And then additionally, another exciting domain is agriculture, because crops, food, drinks, plants, and so on all emit chemicals that indicate their relative state and their general health status. For example, flowers will emit a certain chemical indicating they're approaching their end of life; they're lacking chlorophyll or other chemicals that make them healthy. If we can detect that, you can detect food spoilage more accurately and get better quality control on crops in general and on groceries. And then similarly with the cosmetic industry: quality control on perfumes, cosmetic products, etc. So, yeah, there are a lot of different applications there, but I can zoom in on any one of those.

[00:08:14] Smell Identification Mechanisms

[00:08:14] Audrow Nash: Cool. I really like the idea of quality control, because a lot of quality control, when I think of it, is visual. And with this you get a very rich additional sensor, which for produce or something would be unbelievable. And I imagine also for skin products and the like: were they made correctly, that kind of thing. That's so cool. Okay. So those areas to zoom in on are very cool, but I want to keep going deeper to understand the whole thing. When you keep saying chemicals, when we're talking about scents, how does detection work? I understand we have the carbon monoxide sensor in the house and it's looking for one compound. But how do you make it general, so that you can smell multiple different things? And how do these detection mechanisms identify, like, one part in a trillion, as you were saying? How does all that work?

[00:09:22] Kordel France: So certain chemicals favor different methods of detection, and I mentioned three of them: optical detection, acoustic detection, and chemically reactive detection. To talk about optical for a moment: certain chemicals in the air will, in more or less terms, refract varying wavelengths of light. If you can find that specific wavelength and you get a signal back from that refraction, it's indicative of what the chemical might be. Alcohols might refract in one very specific way; organic compounds in general might respond at another very specific wavelength. So spectroscopy and these infrared techniques are one way to detect certain chemicals. Now, some chemicals favor optical detection, others favor chemical detection. As an example of chemical detection, think of a battery: there's an anode and a cathode and a charged electrolyte. The electrolyte is designed in a certain way such that it will react with a specific group of compounds, for example alcohols or aldehydes. As the chemicals interface with the electrolyte, they send a charge back depending on how the electrons interface with the electrolyte, and that gives an indication of what chemicals are around you. And then similarly with acoustic sensors: chemicals vibrate at a quantum level, and you can detect the frequencies of those vibrations; those different frequencies determine what chemical you're looking at.
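
To make those three detection modalities concrete, here is a minimal sketch of how readings from optical, electrochemical, and acoustic channels might be matched against per-compound signatures. The compound names and signature values are hypothetical placeholders, not real calibration data; in practice each channel would be a spectrum or time series rather than a single number, but the matching idea is the same.

```python
import math

# Hypothetical reference signatures: how strongly each compound responds
# on the (optical, electrochemical, acoustic) channels, normalized 0..1.
SIGNATURES = {
    "ethanol": (0.9, 0.7, 0.2),
    "benzene": (0.8, 0.1, 0.5),
    "acetone": (0.4, 0.8, 0.3),
}

def cosine(a, b):
    # Cosine similarity between two equal-length channel vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify(reading):
    """Rank candidate compounds by similarity of a 3-channel reading
    to the reference signatures, best match first."""
    return sorted(((cosine(reading, sig), name)
                   for name, sig in SIGNATURES.items()), reverse=True)

print(identify((0.85, 0.65, 0.25)))  # ethanol should rank first
```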

[00:11:10] Audrow Nash: That's cool. Yeah.

[00:11:11] Kordel France: So the mediums are all there; the methods of detection are all there. The problem is really: how do we compress these lab instruments that have been used to demonstrate these techniques down to such a small form factor that they can be used on handheld devices or robots?

[00:11:32] Audrow Nash: Yeah. So how do you go about that? Because that does sound like the big problem. You're saying it's well established to use sight, acoustics, and chemical reactions to infer what's in the air, and from that you can infer something like scent. But these are fridge-sized things, or large devices. How are you going about combining and miniaturizing them? I imagine that's a huge challenge you've been working on with the previous company and now the current company.

[00:12:16] Kordel France: Yeah, a lot of my PhD research went into it as well. You have to understand the current state of the art and how these systems work. There have been a lot of great engineers who have worked on these things, and they've optimized them really well. So it's a matter of: can we optimize further? And if we can't, what constraints can we relax in order to get a smaller size, weight, and power factor? For example, there's an instrument called the GC-MS, which stands for gas chromatography-mass spectrometry. It's about the size of a fridge, and it takes hours to run an air sample. Now, it's very accurate and it gives you very detailed results. But if you imagine something like a dog, which has roughly a 100 Hz refresh capability in its olfactory senses, that's not very competitive with something that takes hours to run, right? So maybe if we relax the accuracy and the number of compounds we're looking at, we can get a smaller size, weight, and power factor.

[00:13:20] Audrow Nash: Okay. And so you did that for those three domains of sensing. How do you make it work, because I imagine it depends on the use case you want? I imagine it's quite specific to the use case. Tell me about that a little bit.

[00:13:38] Kordel France: Yeah. If you want just passive monitoring, for example quality control of food or drink or produce, you don't need a 100 Hz refresh capability, because the problem doesn't demand that type of solution. So you can get away with a longer refresh rate, a longer detection time, and have more accurate responses. Now, something like olfactory navigation, how a moth or a dog actually navigates the world, that's where a really fast refresh rate comes in handy, because you need to be able to update your world state much more quickly and respond with your actuators. If I need to drive left because the scent plume is moving left, I don't want to wait an hour to do that, because the scent plume changes...
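
A toy illustration of why refresh rate matters for olfactory navigation: a simple chemotaxis loop that re-reads the sensor at a fixed rate and casts side to side when the signal weakens. `read_concentration()` here is a hypothetical stand-in for a real sensor read; a real controller would also fuse wind estimates and filter noise.

```python
import random
import time

def read_concentration():
    # Stand-in for a real sensor read; returns a concentration value.
    return random.uniform(0.0, 10.0)

def follow_plume(steps=100, hz=100):
    """Toy chemotaxis: keep heading if the signal is strengthening,
    otherwise cast side to side, re-evaluating at `hz` per second."""
    heading_deg = 0.0
    prev = read_concentration()
    for _ in range(steps):
        time.sleep(1.0 / hz)            # the fast refresh discussed above
        cur = read_concentration()
        if cur < prev:                  # weaker signal: cast left or right
            heading_deg += random.choice([-15.0, 15.0])
        prev = cur
        # in a real robot, send `heading_deg` to the actuators here
    return heading_deg
```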

[00:14:29] Audrow Nash: Every...

[00:14:29] Kordel France: ...second. So you've got to be able to react in a very, very quick way. Those types of things are taken into consideration for the products that we build. We have two products: one that is fast-responding, with very low size, weight, and power, really tuned for olfactory navigation; and one that's a little larger, about the size of a coffee maker, that allows you to profile the air around you, or an air sample, and, at the expense of a longer detection time, get much higher accuracy and a much larger array of chemical compounds that can be analyzed.

[00:15:08] Use Cases

[00:15:08] Audrow Nash: Okay, very cool. I would love to hear about the use cases you're targeting for both of these, the small one and then the large one. Do you have the small one nearby? You held up something earlier; I don't know if that was it.

[00:15:19] Kordel France: I do, yes. So we've got one of them here. This is the smaller one. It will shrink in size as the company grows, but we've built it to be used by robots, so they can integrate it with a very low size, weight, and power factor, have an API that they can talk to very easily, and really navigate the world. If you're familiar with what's going on with Figure or Tesla's robots or Apptronik or Fourier, any of these really big robotics companies that have gotten billions of dollars in funding: they all have vision, they have audio, they have speech capabilities. They have haptic and tactile capabilities, since they can walk around. But they're missing the sense of smell. And when you mention that to people, they're like, okay, well, why do I need that? Well, if you think about the applications, you really expand what that robot can do by giving it the sense of olfaction. Being able to detect when there are high traces of carbon monoxide in the home, or being able to quality-control food and move produce around in your grocery store: those are things a robot currently won't be able to do unless it's given a sense of smell.

[00:16:42] Near-Future Innovations

[00:16:42] Audrow Nash: Yeah, it's so funny imagining a robot personal shopper picking out fruit in the grocery store. With that one, what do you imagine are some of the low-hanging-fruit applications? Because my impression is humanoids are pretty far from being in homes. They might be very useful in manufacturing and in particular niches first, but there are a lot of robots that are not humanoids. What startups would you imagine we'll see using this kind of sensor in the next, say, three years?

[00:17:26] Kordel France: So aerial robotics is an interesting one to start with. Imagine a drone that navigates by vision and GPS. If you go into a tunnel, or somewhere you can't get a good GPS signal, and it's dark, your options for navigation become limited. There are ultrasonic sensors, there's radar, there are other ways to do it, but radar is very power-consuming. If you put an olfactory sensor on the robot, you can tell it to get as close to the ethanol, or whatever compound, as it can, and it literally acts like a dog tracking a scent, trying to find its way through. So there are a lot of olfactory navigation capabilities that I feel robots can do, between aerial robotics, ground robotics like Spot, and underwater as well, because surface-dwelling and ocean-floor-dwelling creatures partially navigate by scent. Some crabs navigate by chemical traces that they observe in currents in the water. And GPS is not an option for submarines; they currently use sonar, which is very useful and works, but it would be helpful to have another modality for redundancy in navigation. Imagine a scenario where you have to track a current, or track a certain chemical that you know is emitted from a specific landmark: you can track that chemical the whole way there and know where you are. So there are a lot of interesting applications in olfactory navigation, which is, to be honest, the north star of the company. We want to make these sensors as small and lightweight as possible. But in order to do that, there are a lot of interesting applications we can address along the way that aren't necessarily navigation-based.

[00:19:40] Audrow Nash: Yeah, for sure. And just for completeness, with the larger, coffee-maker-sized one, what are some applications you imagine for that one?

[00:19:53] Kordel France: Agriculture is a really interesting application there, because there are a lot of compounds emitted whether you're growing crops in a field or in a flower shop or something like that. Being able to monitor chemicals there is pretty interesting, because they give a very good read on the health of the actual produce or crop being grown, and can give you an idea of what diseases might be affecting it, what its longevity will be, what its yield will be. There are a lot of interesting insights that can be gathered there. And then similarly, within the cosmetic industry, and I'm not kidding you, they have trained human professionals who quality-control, for example, Chanel No. 5 or other famous fragrances. They've been trained to observe the smell and say, yes, this matches what the standard should be: a quality control test. Now, I don't know about you, but I lost my sense of smell during the pandemic. And if you have a cold, it really hinders your sense of smell. So humans can be a hard standard to measure against, and there's not really a great machine out there that can observe a wide profile of chemical compounds in a very fast manner. So that coffee-maker-sized machine could be used for quality control of cosmetics, perfumes, and the like, even paints and automotive compounds.

[00:21:35] Audrow Nash: Really?

[00:21:35] Kordel France: Yeah. In automotive or aerospace, when you paint or anodize specific materials, the paint and the primer need to be at a certain chemical setting, and there's a shelf life. If the humidity is too high, or the temperature is too high, or there's a slight imbalance of benzene or something within the paint, the bonding will be off and it won't be as effective as it was designed to be. So being able to quality-control that is extremely helpful as well.

[00:22:08] Audrow Nash: Yeah, it's really cool. I suspect that for a lot of the things you're talking about, there are very specialized, probably large and expensive devices that can do this kind of thing, or people who are trained, and I bet there are labor shortages there. I've gone and tried to smell candles or perfumes or whatever, and after a few they all smell the same, so I imagine there's heavy fatigue, where testers have to do something to clear their nose so they can smell again. So it strikes me that there are a lot of maybe specialized things that exist to serve the use cases you're describing. And I'm sure this isn't all of what you could do, but it sounds like what you're going after is a much more general approach that can be leveraged to do a lot of these tasks. Is that fair, or how are you thinking about it?

[00:23:10] Kordel France: Yeah. I think olfactory sensing in general is largely an untapped market. I don't think we've nearly hit the potential that we can. So, like I said, the north star, the general application we're building for, would be olfactory-based navigation. But if we open people's eyes to the fact that there's a whole other domain of olfaction that should be introduced and observed, that would be fantastic. Because if you look at AI right now, the modalities receiving a lot of investment are vision and language and speech. ChatGPT is language and vision, and now speech; they have all of these sensing capabilities, but they're missing the sense of olfaction. Now, one might argue: why would I want ChatGPT to smell? Right?

[00:24:09] Audrow Nash: Yeah, it could be like: is this fruit bad? Is this food bad? And a lot of other things. Yeah.

[00:24:14] Kordel France: Yeah, that's an excellent example. If you send it a picture right now, maybe it can describe: well, the apple doesn't look super great, maybe it's okay to eat, maybe not. But maybe it looks okay and it's off-gassing certain alcohols, which come from fruit. If the olfactory sensors can pick that up, it can say: actually, it's beyond the healthy level, it's too ripe, you shouldn't eat it. Which is really interesting. So we have language models, and we have vision-language models, which combine both of those senses. I'd like to see more investment in olfaction-vision-language models, where you combine all the senses together and fuse multiple modalities, vision, language, olfaction, into a single model that can be used to observe the world in a better manner.

[00:25:09] Application Development

[00:25:09] Audrow Nash: Yeah, I think that's so cool. One thing that strikes me, thinking as an application developer for this kind of thing: I don't know how to think in smell. In the past I've used a thermal camera. We put it on a drone, and I was using it for detection of different things, and I thought it was really cool because there are all sorts of new ways you can perceive with it. For example, you can look at the ground where someone was standing and see little warm footprints. You can tell if a mug is hot, or whether appliances are running from a distance, all sorts of things I wouldn't think of until I'd looked through one for an extended period of time and seen how it perceives the world. I imagine smell is very similar, in that it's hard to understand without going and trying it a lot. And I know we have our own sense of smell, so maybe that's a good analog, but if you're picking up one particle in a trillion, our sense of smell may not be a good analog. Tell me how to think in smell. A weird thing to say, but how would you go about developing an application, or trying to understand what this would be good for?

[00:26:47] Kordel France: It's a terrific question, and in my opinion it's probably one of the biggest hindrances keeping olfaction from moving at the pace of the other modalities: vision, language, audio. If you imagine an image, every pixel can be quantified with red, green, and blue; three colors represent all the colors in an image. And the standards for images are PNG, JPEG, etc. If I'm an AI or machine learning engineer and I want to build a machine learning model, I can go online and download literally terabytes of open-source datasets in these standardized vision formats, PNG, JPEG, and so on, that all root down to RGB pixels. A similar method applies with different frequencies in audio. So there are standards for all these data modalities that we train AI on. But if I want to build an olfaction model, there are no open-source... there aren't terabytes of datasets online that I can pull.

[00:27:52] Audrow Nash: None or almost none.

[00:27:53] Scent Technology Standardization

[00:27:53] Kordel France: Right. And while there are different datasets, they're not standardized. Some people think olfaction should be viewed as a multi-spectral time series. Others think it should be viewed as a graph, or as a spectrogram. So there's not even agreement on how we should represent the sense of smell. Part of what we're trying to do at Scentience is develop a common consensus among industry leaders: what should that representation be? What's the state of the art? For people who have been working in the industry for decades: why do you think it should be represented by a graph, or by a multispectral time series? As a researcher, I can argue for any one of those, but we need common consensus before we can move forward and develop something standardized like vision or language. That's part of what we're trying to do with our data; we've had to standardize in multiple formats. But I think that as we progress, we'll be able to standardize how people should think about the sense of smell. And we aren't going to be able to quality-control the sense of smell with humans as we get down to a part-per-trillion level, because humans can't detect at that level, similar to images or sounds with frequencies beyond what we can perceive. Dogs can hear above certain frequencies that humans can't, and it's the same with olfaction. So we have to have sensors that can validate whether or not a chemical compound is present. There are these different foundational pillars that need to be developed before developers can really springboard and make all of these olfactory applications in the world, and I hope we're moving in a direction where we can define that for the state of olfaction going forward.

[00:29:49] Audrow Nash: Oh yeah. It seems like, with the Robot Operating System, ROS, and I wasn't involved at the time, a lot of the benefit ROS has brought to the robotics community is that we've standardized things in some respects. We've made it so there's a robot description language, and we have a bunch of common messages that are used for all sorts of different things. That has been really valuable to me: from when I started in robotics to now, it's much easier to share data and software and all sorts of things. So I imagine it will be a somewhat similar journey with scent. But I wonder, because of where robotics is and all the interest in AI, whether there might be a much faster convergence for scent. And from my naive perspective, it's a smaller dimension of data to represent, though maybe that's not true at all. You have different classes to represent different smells, and I guess there's a lot of discussion on how to do that, because a scent is complex. But I wonder if it will go faster, I guess is my point. And it would be exciting if you were the big pusher on this.

[00:31:20] Kordel France: Yeah, I agree. I think it will go faster, because we've standardized so many different things within the last few years; we have that fluency now as an engineering and research community. One thing I'd like to see, and it seems like we're moving in that direction, is an analogy to the RGB image format. All images can be represented by three different colors; perhaps olfactory senses can be interpreted as an array of specific values. So maybe alcohols can be quantified as one value, aldehydes another, aromatic compounds another. If you have levels of these, maybe that indicates the level of benzene that's present, or the level of ethanol. And so instead of having to build out these large, exotic sensing applications, we'd have an abbreviation for what every chemical around us looks like, in a much simpler format.
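
A sketch of what that RGB-style abbreviation could look like as a data structure: a fixed-length vector of chemical-family intensities, plus the environmental context that comes up later in the interview. The field names and normalization are hypothetical, since no such standard exists yet.

```python
from dataclasses import dataclass

@dataclass
class ScentSample:
    """A hypothetical fixed-length 'olfactory pixel', one value per
    chemical family, analogous to the R, G, B channels of an image."""
    alcohols: float       # e.g. ethanol-like response, normalized 0..1
    aldehydes: float
    aromatics: float      # benzene rings and similar compounds
    ketones: float
    humidity: float       # environmental context travels with the reading
    temperature_c: float

    def as_vector(self):
        # Flatten to a plain list, ready for a dataset or a model input.
        return [self.alcohols, self.aldehydes, self.aromatics,
                self.ketones, self.humidity, self.temperature_c]

sample = ScentSample(0.42, 0.05, 0.31, 0.02,
                     humidity=0.55, temperature_c=21.0)
print(sample.as_vector())
```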

[00:32:19] Audrow Nash: That would be so cool. Okay.

[00:32:22] Kordel France: And then if you move forward with that, you can publish these arrays into a dataset that others can download and use to train large-scale machine learning models. Just as we have vision datasets, snapshots of vision, which are images, we can have snapshots of chemicals, which might be graphs or, like I said, multispectral time-series snippets, that can be used to represent a compound and its related image, etc. So I think you're right. I think we'll see a much faster acceleration with olfaction, because we have the fluency from the other modalities.

[00:33:02] Scent Recognition Dataset

[00:33:02] Audrow Nash: Oh yeah. If you break it down into something that looks kind of like RGB, where you have the different compounds, could you imagine seeing something like ImageNet, which is a big dataset they use for accuracy comparisons? Could you imagine having something similar for scent?

[00:33:33] Industry vs. Academia

[00:33:33] Kordel France: Absolutely. One thing we are working to develop, because we have so many different sensing instruments and different chemical compounds that we're using to develop our sensors, is, well, it's not actually called ScentNet, but let's say it's called ScentNet. It's an analogy to ImageNet: a vast array of olfaction data that can be used to train a large model, a foundation model, on par with a ChatGPT-scale application. Hundred-billion- to trillion-parameter models need extremely large datasets, and if we can build that foundation dataset, that ScentNet, if you will, it gives researchers the ability to infuse that modality into these really large-scale models.

[00:34:17] Audrow Nash: For all of this, it sounds like there's more of an industry focus than an academic focus. How are you balancing the different communities that you might interface with and provide value to?

[00:34:33] Kordel France: That's where I really appreciate the skills I was taught in my PhD. There's a very niche community for olfaction. It exists, but it's a few researchers who have literally devoted their lives to advancing the state of the art, which is amazing, and the amount of knowledge they have is bonkers. In fact, a lot of the research papers I used to cite in my own work were very difficult to find and have very low citation counts, but they are literally foundational for the field of olfaction. I feel they're lowly cited because there's not a huge appreciation for the sense of smell yet; people don't quite understand the applications of observing the air around us. But the work is extremely foundational. I mean, literally, I think we'll look back on some of these papers in a few years and they'll be viewed like "Attention Is All You Need"...

[00:35:28] Audrow Nash: Seminal.

[00:35:29] Kordel France: ...was for Transformers, or the lottery ticket hypothesis for pruning neural networks, something like that. So yeah, the community is very niche. It's growing, but I'd like to see a larger research effort and a larger investment within olfaction, because there are a lot of startups in language models, a lot of startups in vision and Transformers and all these things. But I can name maybe two startups off the top of my head that I would say are really advancing the state of the art in olfaction. There's us, and another one called Osmo. The interesting thing about Osmo is they have a heavy research backing as well, and Geoffrey Hinton recently joined their board, the Nobel Prize winner in AI. So that's pretty impressive. I feel we have an equal passion in the mission we're pursuing, but they're looking at scent generation and we're looking at scent detection. They have a really compelling AI product where they can generate different scents based on what you describe, visually and verbally, and they're working with a lot of perfume companies to develop the next Chanel No. 5, the next really compelling cologne or perfume that people are going to love. They have a really good team that's receiving a lot of investment, and I'm really excited to see where they go.

[00:36:56] Startup Landscape in Smell Robotics

[00:36:56] Audrow Nash: Super cool. Yeah, it's funny that it's such a small field. From my perspective, looking at scent as a new sensing modality for robots, it seems like there will be a wave of startups, because mobility in robotics, indoor and outdoor, is becoming, if it's not already, solved for a lot of things. So it's like: okay, what can we strap onto these robots to make them useful? And the ability to smell has so many applications I can imagine. When do you think there will be a wave of startups around this? I know you probably hope as soon as possible, but what would you guess: two years? Five years? What are we looking at?

[00:37:49] Kordel France: I think it's on the order of five years. And I think in ten years the wave will start to, not collapse, but relax. I think years five through ten are what we'll see. Is that...

[00:38:00] Audrow Nash: An S-curve? Yeah, you get to the top of the S at the ten-year mark, kind of thing.

[00:38:04] Kordel France: Exactly, yeah. That's the way the current landscape looks. And I really think in ten years we'll look back and be like: we have cameras everywhere, how did we not observe the air around us more? There's so much data. Even from a medical or human perspective, there's so much data in breath, and it's not being explored to its full potential, let alone environmental air monitoring or olfactory navigation capabilities.

[00:38:31] Audrow Nash: And.

[00:38:33] Kordel France: I think there will be a lot of health care and medical startups that look at olfactory capabilities.

[00:38:37] Audrow Nash: Really.

[00:38:38] Smell Sensing API

[00:38:38] Kordel France: In breath. Yeah. And then in robotic applications as well.

[00:38:44] Audrow Nash: Yeah, so interesting. I think it will be a big wave. So you said earlier that you want an API for smell. I'm imagining that if you can break things down into something like RGB for the compounds you identify, that would be very helpful. But I'm wondering how it would work. Say I smell something and it gives me some compound values; I don't really know how to map those to anything in the real world. I can go collect a bunch of data, but labeling that data becomes a bit of a challenge. And I imagine for many things, like, say, fruit detection, you could have a public library that does this well and tells you: is that apple too ripe or not? How would you imagine having that? I guess, tell me about your API ideas, and how high up in the logic you think you'll go. Maybe it's incremental: keep improving, keep getting higher, kind of like a Luxonis but for smell. Tell me about your API and how you expose things to users.

[00:40:00] Kordel France: So one thing we're trying to do, to make it as easy and compelling to use as possible, is focus on fusion with other modalities. To your point, olfactory senses by themselves maybe aren't super valuable. If someone is blind and can't hear, or an animal is navigating only by scent, it might have a harder time trying to observe the world around it. But if you add in vision and audio, and the ability to touch and move around the world, you get a much different perspective. So with the olfaction-vision-language models we're building, we're trying to make it so that the API gives you the olfaction data back, so I know what the smell is. But now, if I want to send a request and interact with a camera or audio or whatever, I'm able to fuse that data together and say: okay, I detect carbon monoxide in the scene. There's nothing I can see right now that gives off carbon monoxide, but I do see a garage. Let's instruct the robot to move left or right. If I move left, I start to see a vehicle. Okay, that makes sense: there's a carbon-monoxide-emitting exhaust pipe on the vehicle that's running right now. So you start to interact with the mobility capabilities of the robot and the vision capabilities. Right now, in our experience, if you prompt some vision-language models and say, "I see carbon monoxide in this image, where is it?", they will force-fit something in the image and say carbon monoxide is here, whether or not it is; they can't actually detect it. They'll say: oh, I'm looking at a garage, maybe the floor is emitting carbon monoxide, which is nonsensical. But if you incorporate spatial and olfactory intelligence, you can say: look, I definitely detect carbon monoxide here, but nothing in view looks like it's emitting a strong source of it; let's move around and see if there's something close by. That logic, I see a garage, there might be a vehicle nearby, because I know vehicles emit carbon monoxide, let's look around and find one, is a very rudimentary example. But you can see how olfaction by itself isn't that interesting, yet when you fuse it with other modalities, you get a much higher sense of intelligence. And so if we can make our API able to fuse with other modalities and interact with other APIs on the market, we can get the level of intelligence we really want to achieve.
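
The garage-and-vehicle logic Kordel describes can be sketched as a tiny rule-based fusion step between an olfactory reading and vision detections. The compound-to-source table and thresholds here are made up for illustration; a real system would use learned models rather than hand-written rules.

```python
def fuse_and_act(scent, detections):
    """Toy fusion of an olfactory reading with vision detections,
    mirroring the garage/vehicle example above. `scent` maps compound
    names to concentrations; `detections` is a list of object labels
    from a vision model."""
    # Hypothetical knowledge of which visible objects explain a compound.
    KNOWN_SOURCES = {"carbon_monoxide": {"vehicle", "generator", "furnace"}}
    for compound, level in scent.items():
        if level <= 0:
            continue
        sources = KNOWN_SOURCES.get(compound, set())
        if sources & set(detections):
            return f"{compound} explained by visible source"
        return f"{compound} detected but no visible source: search nearby"
    return "no compounds above threshold"

print(fuse_and_act({"carbon_monoxide": 35.0}, ["garage", "door"]))
# -> carbon_monoxide detected but no visible source: search nearby
print(fuse_and_act({"carbon_monoxide": 35.0}, ["garage", "vehicle"]))
# -> carbon_monoxide explained by visible source
```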

[00:42:40] Sensor Integration

[00:42:40] Audrow Nash: Yeah. I feel like saying it's not interesting is kind of the wrong framing; it's that without context it's not as meaningful. But the way I'm thinking of Scentience in my head is similar to Luxonis. Do you know Luxonis, the camera company?

[00:43:07] Kordel France: I do not.

[00:43:09] Audrow Nash: So they do a lot of vision-related work and a lot of depth cameras. I've talked to Brandon, their CEO, a number of times, and it seems like you're very well aligned, just in different modalities. But it sounds like your challenge becomes more multimodal, and that sounds to me like a much harder problem than just exposing scent information. I wonder if a partnership with a company like Luxonis could be very valuable to help with the heavy lift of integrating with cameras or something like that. Or maybe you just abstract it and say: we can send an image to anything like ChatGPT and give our data with context. How are you thinking about it? Because now that you're saying multimodal, it sounds much harder to me, which might be the undertaking in a sense.

[00:44:10] Kordel France: Yeah, I fully agree. What we're trying to do with the API is adopt conventional software interfacing standards, like RESTful requests or WebSockets, that allow us to be easily adopted by, for example, camera companies, because we don't want to become another camera company.

[00:44:35] Audrow Nash: Competitive.

[00:44:36] Kordel France: That's right. We'd all starve trying to build the next best camera with no competitive advantage. So if we were to...

[00:44:44] Audrow Nash: Two hard things at once.

[00:44:46] Kordel France: Right, right. So if we can partner with the actual state of the art in vision, in audio, in speech, and they can interface with our API, we all become better together; the vector sum of all our efforts is much better.
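
Here is a minimal sketch of the kind of REST-style integration being described: pull the latest reading from a scent-sensor endpoint and attach it to a camera frame's metadata, so a downstream fusion model sees both modalities under one timestamp. The URL and payload shape are hypothetical; the actual Scentience API is not described in detail in the interview.

```python
import json
from urllib import request

# Hypothetical endpoint and payload; purely an illustration of how a
# scent sensor could expose readings over a conventional REST interface.
SENSOR_URL = "http://localhost:8080/v1/scent/latest"

def get_scent_reading():
    # Fetch the most recent olfactory reading as a JSON document,
    # e.g. {"ethanol_ppb": 12.4, "humidity": 0.48, "temperature_c": 21.0}.
    with request.urlopen(SENSOR_URL, timeout=1.0) as resp:
        return json.load(resp)

def annotate_frame(frame_metadata):
    """Attach the latest olfactory reading to a camera frame's metadata,
    so vision and smell arrive at the fusion model together."""
    frame_metadata["scent"] = get_scent_reading()
    return frame_metadata
```

A WebSocket variant of the same idea would push readings continuously instead of polling, which fits the high-refresh navigation use case better.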

[00:45:04] Audrow Nash: Hell yeah. Yeah, that sounds really cool. And do you think transformer-based models are the way to go with all of this? Is that where you're betting? To me, it seems like there's a lot of value before that type of integration, but it does seem very exciting in the long term. How are you thinking about it?

[00:45:30] Kordel France: There are a lot of things. I'm very passionate about this question, because you can get a long way with math. My background is all in math, or at the beginning, in my undergrad, it was all math, and the fundamentals are all math. Before you even need to look at a transformer, there's a lot you can do with just conventional signal processing techniques.

[00:45:54] Audrow Nash: Oh, totally. And some machine learning on your labeling: you label your data, and then it's supervised machine learning, where you can tell if that fruit is rotten in a very narrow case, or whether the product is bad, whatever it might be.

[00:46:11] Kordel France: Yes, I totally agree, and a lot of the efforts originally started in that format. Now, transformers become extremely compelling when you have a lot of data and a lot of compute. All of the large language models are stacked transformers, but they're trained on the whole internet; companies have hundreds of millions of dollars and a lot of GPUs to train these models. We don't have the budget to do that, and we don't have the data to do that, because we don't even have a data standard. So to solve that problem, we're trying to construct the data according to the most commonly accepted standards and then build out the most intelligent model we can. Now, the interesting thing with olfaction is that we have to use a lot of active learning. We can't just build a model and freeze it; that doesn't work super well. Here's why: if you have a chemical in one location and you change locations, say we have Chanel No. 5 in Texas, then we take it to Berlin, and then to Dubai, it's going to look slightly different at the most granular level. There are different air quality standards, maybe a higher level of carbon monoxide in Berlin than in Texas, a different humidity and temperature. All these things change the fundamental way the chemical is observed. The Chanel No. 5 hasn't changed, but the chemical scent has changed in the way it's interpreted. So we have to use a lot of active learning to say: okay, I was trained to look at carbon monoxide at this humidity, this temperature, this pressure, but how does it change with variations in environmental factors? And it's different per compound. If you imagine the equation you can map to each compound, each compound will have a different equation. Carbon dioxide will vary differently than acetone, than benzene, when you introduce humidity, temperature, pressure.

[00:48:20] Audrow Nash: Super, super complex mappings, basically, between compounds and what you sense, because of environmental factors and all sorts of complex things. Okay.

[00:48:32] Kordel France: So we have to build this active learning capability that basically allows the machine learning model to say: okay, I have a set of priors, a set of knowledge, but I can update my knowledge as I interact with the world. Now, you can imagine that gets pretty dicey, because you don't want the machine learning model to become a runaway train. So how do you quality-control it? There are a lot of mechanisms out there, which we can get into if you like, but you really want to actively learn while making sure the model is learning the right things and not learning the wrong signals.

[00:49:04] Audrow Nash: Not overfitting.

[00:49:05] Kordel France: Exactly, yeah, not overfitting. And that can become very difficult, especially if you think of it from a privacy perspective: people don't want models that are always learning, and if you're always sending data to a cloud to update a model, that can make people feel uncomfortable. So we're trying to harmonize several problems: building a balanced, well-rounded dataset that encompasses chemical compounds in all different scenarios; data privacy, how we protect people's privacy while not constantly pushing model updates to edge devices; and how we scale this to the point where we don't need to actively learn anymore, because we have such a good standard, like ImageNet or something like that.
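
A minimal sketch of the per-compound active-learning idea: a small correction model per compound that learns how humidity and temperature shift its reading, with a bounded update step as a crude guard against the runaway-train problem above. The learning rule and coefficients are illustrative assumptions, not Scentience's actual method.

```python
class CompoundCalibrator:
    """Toy per-compound correction: learn how humidity and temperature
    shift a compound's raw reading, updating online but clipping each
    step so the model can't drift away unchecked."""

    def __init__(self, lr=0.01, max_step=0.05):
        self.w = [0.0, 0.0]           # humidity, temperature coefficients
        self.lr, self.max_step = lr, max_step

    def correct(self, raw, humidity, temp_c):
        # Remove the learned environmental component from the raw reading.
        return raw - (self.w[0] * humidity + self.w[1] * temp_c)

    def update(self, raw, humidity, temp_c, reference):
        """Online update against a trusted reference measurement,
        with a bounded gradient step as a simple safety rail."""
        err = self.correct(raw, humidity, temp_c) - reference
        for i, x in enumerate((humidity, temp_c)):
            step = self.lr * err * x
            self.w[i] += max(-self.max_step, min(self.max_step, step))

# Each compound gets its own calibrator, since (as noted above) ethanol,
# acetone, and benzene respond differently to the same conditions.
calibrators = {c: CompoundCalibrator()
               for c in ("ethanol", "acetone", "benzene")}
```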

[00:49:58] Weather Data’s Impact on Smell Sensors

[00:49:58] Audrow Nash: Yeah. So do you have any sort of weather sensor on your devices? You do? Because I'm thinking that would be really cool: temperature, humidity, and so on go in with your vector of data, and then you use them to interpret whatever you're sensing and make more accurate assessments, because it's so dependent on those environmental factors.

[00:50:25] Kordel France: Yes. In fact, I'd argue most chemical sensors are worthless unless you condition on temperature, humidity, and pressure. You can get away without it sometimes, but...

[00:50:34] Audrow Nash: And it varies so much, too. I mean, one company I talked to was trying to use a barometer, which measures pressure, to identify what floor they were on in a multi-floor building, and it didn't work at all, which was crazy. Because of the air conditioning systems and things in the buildings, it was just meaningless data, which was so interesting. That was a big surprise to me.

[00:51:01] Kordel France: That's a really good point you bring up, because one thing I definitely underestimated as we approached these problems was how to model the air around you and how to model plumes. From an olfactory navigation perspective, you have to be able to model the plume and understand how the air around you is changing. So computational fluid dynamics, which is used to model, say, how airplanes behave, actually comes hugely into play with what we're doing, because we have to understand how the air around us is moving and responding to its environment. If we're outside, we need to know the wind speed and wind direction: can we detect that in real time, and how is it changing? It's not as simple as pulling from weather APIs; at a local level, you need to know exactly what's going on and how it's changing. And from an indoor perspective, the problem with olfactory navigation is that you don't know if there's an HVAC system causing the draft, or an open door. That's where vision comes in handy: how do I contextualize what's around me to understand how the plume might be behaving, to then understand how the chemical signatures are changing? That's a very hard problem to solve.
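
For reference, the classic steady-state Gaussian plume model is the usual starting point for the kind of plume modeling being described. This sketch uses placeholder dispersion coefficients; real ones grow with downwind distance and depend on atmospheric stability, and indoor flows with HVAC drafts need far richer CFD or learned models.

```python
import math

def gaussian_plume(x, y, q=1.0, u=2.0, sigma_y=0.9, sigma_z=0.9, h=1.0):
    """Ground-level concentration from a steady point source emitting
    q (g/s) into a wind of speed u (m/s) blowing along +x. x is downwind
    distance (m), y is crosswind offset (m), h is source height (m).
    The sigma_* dispersion growth here is a crude placeholder."""
    if x <= 0:
        return 0.0
    sy = sigma_y * x ** 0.9   # plume widens with downwind distance
    sz = sigma_z * x ** 0.9
    return (q / (math.pi * u * sy * sz)
            * math.exp(-y * y / (2 * sy * sy))
            * math.exp(-h * h / (2 * sz * sz)))

# Concentration 20 m downwind, 2 m off the plume centerline:
print(gaussian_plume(20.0, 2.0))
```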

[00:52:20] Audrow Nash: Yeah. It seems like a direct modeling approach would be completely intractable, because you'd need perfect sensing of your environment. And even if you had that, say a perfect model of the building and all the airflows and everything else, the fluid dynamics of the whole air volume would probably be very challenging and time-consuming to model in any real-time sense.

[00:52:46] Kordel France: Yes, it's extremely challenging.

[00:52:49] Audrow Nash: And.

[00:52:51] Kordel France: Yeah, there's a bunch of great work out there. Honestly, there's a lot of really great work that happened in the '60s and '70s, from very old DARPA research papers and AFRL, Air Force Research Lab, papers on plume modeling and how to model turbulence, that has been really influential. And ultimately, I think it's going to be all neural-network based.

[00:53:18] Audrow Nash: Learned.

[00:53:18] Kordel France: Yeah, learned. But there's been a lot of really great work published decades ago that's been influential.

[00:53:26] Audrow Nash: Yeah. I imagine with that kind of thing, you almost need an embodiment, in a sense: something in the physical world that moves around and learns. Or maybe you could even gather some information from simulation, if you were simulating these plumes. But it feels hard from the perspective of a company that makes the sensor. The first interview for this new podcast was with a company called Electric Sheep. They're awesome. Their whole thing is that it's embodied and providing value, and it's clever, because over time they're getting lots of data on a few platforms, and it gets more and more sophisticated, because they have a justification for having more robots out there getting more data: it's already providing value in what it's currently doing. And I feel like it's hard to scale without an embodiment, or without a financial justification for scaling. What are your thoughts on this?

[00:54:52] Kordel France: Yeah, I fully agree, because there's a scalability question around the embodiment, and around compute too, definitely. If we want to build true foundation models for olfaction, we need a lot of compute and a lot of data. I heard one metric that GPT-4 was trained for something like $100 million.

[00:55:14] Audrow Nash: Oh, my God.

[00:55:14] Kordel France: Which, like, no startup in the world can afford, right? You literally have to be a big tech player to build that type of model. So yeah, go ahead.

[00:55:24] Audrow Nash: It's probably changing for that kind of thing. I haven't been following it that closely, but I think a lot of open-source models have been trained much, much cheaper, and that's exciting. So maybe getting from 0 to 1 is very expensive, but getting from, like, 10 to 11 might only be a hundredth of the cost, or even less. Who knows. I wonder if it could be similar with scent. But you're probably doing the 0 to 1, so who knows.

[00:55:59] Kordel France: Well, yeah, it's a terrific point. The pruning and distillation of these large models is starting to occur, which is great, so you're right, the costs will come down. We're trying to solve the problem with open-source vision-language models combined with custom olfaction models: how do we build a custom fusion solution? So instead of retraining these large vision-language models from scratch, how do we take the open-source state of the art, so we don't have to retrain it, use those weights, and tune them in relation to our olfaction models? That saves a lot of money, and I think that's probably ultimately the way to go. But I guess we'll see as we move forward.

[00:56:43] Scaling With User Data

[00:56:43] Audrow Nash: Yeah. My impression would be, if there's a low-hanging fruit that provides a lot of value and lets you get out there, then that would be awesome. I'm thinking also of Tesla with their self-driving, where they just have all their users feeding them millions or billions of miles of road data that they then get to leverage, as opposed to, say, Cruise or other companies that had to hire people to go drive the cars. It's very different scales of data, and I think Tesla benefited tremendously from all that data. If there were some justification for having lots of these sensors out there getting lots of data, that would be absolutely awesome.

[00:57:28] Kordel France: I agree, right? Tesla did a really amazing thing by having all these data gatherers on the road, autonomously. It's just so clever.

[00:57:36] Audrow Nash: And you pay for it, too. It was just brilliant. I mean, it provides real value to people, but you pay to be the additional thing sending back data for them, which is just, I don't know, wonderful and crazy, isn't it?

[00:57:52] Kordel France: Yeah. I was like, if you were to propose this in a pitch meeting 20 years ago, people would be like, there's no way that'll work.

[00:57:59] Audrow Nash: Yeah, totally.

[00:58:01] Kordel France: There's something you commented on a moment ago that made me think, when you mentioned embodiment and plume modeling. When I first started my PhD, I was trying to build olfactory navigation for UAVs, and I had some success, but what became very difficult to work through was the rotor wash of the UAV.

[00:58:26] Audrow Nash: Oh, yeah.

[00:58:27] Kordel France: Because that can help you, since you're kind of pulling the air in to some degree, but you're also adding a bunch of noise to the signal and drowning it out. It's ultimately a negative thing. And so the way I tried to solve it was actually putting little antennae that were forward.

[00:58:44] Audrow Nash: Motion.

[00:58:45] Kordel France: Ahead of the UAV rotors, so they were ahead of the rotor wash, and that seemed to help. But there's a research group at a technical university in the Netherlands, I think it's Delft, with a partnership with Carnegie Mellon, and they were able to solve it by modeling the plume. I wasn't modeling the plume; I was using as low compute as possible and figured I'd just get ahead of the plume with the antennae. But they modeled the plume, and then they stuck the sensor on the quadcopter itself. There's a small pocket of air that's undisturbed on the top of the quadcopter, in the center, so they stuck the sensor there, and they performed olfactory navigation that way. And then they had better plume modeling: this noise is due to rotor wash, this noise is due to whatever. So it was an interesting perspective to see the way that they solved it. And it's another tip of the hat to how difficult olfactory detection is, because there are all these different things that can be imposed on the olfactory signal by the robot itself. So it's interesting, really.
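
(For a sense of what "modeling the plume" can mean in the simplest case, here is a minimal sketch of a textbook steady-state Gaussian plume model; it is far simpler than whatever the group above used, and purely illustrative.)

```python
import numpy as np

def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
    """Textbook steady-state Gaussian plume concentration (g/m^3).

    y, z     : crosswind and vertical coordinates (m)
    Q        : source emission rate (g/s)
    u        : mean wind speed along the downwind axis (m/s)
    H        : effective source height (m)
    sigma_y, sigma_z : dispersion widths at the downwind distance of
                       interest; they grow with distance from the source
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    # The second term reflects the plume off the ground (z = 0).
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: concentration at nose height, on the plume centerline,
# with a source 2 m off the ground.
c = gaussian_plume(y=0.0, z=1.5, Q=0.1, u=3.0, H=2.0,
                   sigma_y=4.0, sigma_z=2.0)
print(f"{c:.2e} g/m^3")
```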

[00:59:54] Audrow Nash: Yeah. Everything in the environment. I don't know, so we have a dog, a cute little medium-sized white guy, and it's awesome watching him smell in the yard. He'll catch a scent, you'll see him tilt his head, and he'll have gotten a sniff of something, and he'll follow it, like you can watch him follow a smell. It looks effortless for him. But I imagine having a robot move in such a way, because I'm imagining a trail of the smell coming from the source, with little breaks throughout, and he's following it and moving along it. You need sophisticated movement, and the trail is weirdly shaped and all sorts of things. It's a tricky problem, probably, with all of that.

[01:00:46] Kordel France: Absolutely. Yeah. Dogs honestly are underestimated in how they can track scents for kilometers, and in the sensitivity of their noses. There's an incredibly higher proportion of compute in their brain attributed to the olfactory bulb in comparison to humans. And I was reading a paper that was talking about how, in humans, smell and taste are integrated in the brain, and in dogs, smell and sound are integrated. I'm not sure if this is true, but as I was reading the paper, I started to think about how you can detect chemical signatures through the frequencies they emit at a quantum level. So I'm wondering if that's how dogs ultimately detect things, and that's why sound and smell are integrated in the same part of their processing. I don't know if that's true, but it's interesting to think about: is it just more compute in their brain for olfactory processing, or is it fundamentally a different mechanism? I'm not sure. But it's interesting to think about.

[01:01:53] Audrow Nash: That is really interesting. It'd be cool if, in ten or twenty years or something like that, the theory is proved correct, and it was documented here, with your theory of how that works. That would be so cool.

[01:02:08] Kordel France: Be cited in all these research papers.

[01:02:10] Origins of Smell Technology Interest

[01:02:10] Audrow Nash: Yeah. How did you get interested in smell to begin with, to the point of doing your PhD in it? I agree with you that it's a completely underserved sense, but how did you come to it as a big interest? It's clearly something you've pursued for many years and are very interested in, and I think there's a lot of potential in it.

[01:02:33] Kordel France: Yeah. So actually, in 2017, my co-founder and I started a company called Secret Technologies, and what we specialized in was, hey.

[01:02:47] Audrow Nash: That's a great name.

[01:02:48] Kordel France: Yeah. So we were building edge applications for AI. Back then it was much harder to pack neural networks onto, like, a mobile phone or a ROS application, so we specialized in condensing these big neural nets down into something that could be used in more edge applications. And we were contracted by a company called So Tech that wanted to build an AI model for their sensors that could detect coronavirus and pneumonia. So we were originally just contracted by them, but we aced the job so well, and our AI became such a foundational part of their product, that they ended up acquiring us. So we kind of became the technical team of that startup, and then.

[01:03:37] Audrow Nash: So cool. Congrats.

[01:03:38] Kordel France: Yeah, yeah. The original intent was medical and industrial, so we were supposed to focus on industrial applications like explosive detection too, but when the pandemic hit, our investors just went all in on the medical domain, and they never really came out of that. And so I ended up leaving that company, just because it's very difficult to innovate in the health care field. But that's really how we got started: we had no visibility into the world of olfaction until we ended up taking that job. And from that point forward, I was hooked. I was so fascinated by it, because I saw so much more than what we were doing. A lot of the problems that I assumed were solved had not been solved, so I just thought the field was way more mature than it is. And I don't think it's a lack of talent or a lack of knowledge; it's just a lack of awareness about what you can do with the sense of olfaction and how much data there is in the air around us. So yeah, from that point on, I was hooked.

[01:04:53] Audrow Nash: Oh, yeah. It does seem like something where, once you see it, you can't unsee it, how powerful it is to understand the scents around us. So, with all of this talk: say I want to start building something that has some sort of sensing-for-smell modality. How would people get started? I know you guys at Scentience are building sensors for this kind of thing, but what does the path to building something with scent look like? What does the timeline look like for you guys with your devices? And how does it work for makers who want to do stuff with all of this?

[01:05:46] Kordel France: Honestly, I think it comes down to the first step, the most fundamental step for anyone trying to work in olfaction, which is to standardize the data medium. Like vision has PNG and JPEG, how do we build that type of standard for our data and get a common consensus from the entire community that this is what we're going to do moving forward? I think if we can do that, then everything else is enabled, because Scentience itself is at risk of going in the wrong direction if we choose the wrong data standard and someone with more authority, more money, and more power comes in and says, we're not going that way, we're going this way. If all of our processing is optimized for one data standard, or we build our hardware to be optimized for a specific olfactory processing technique, there's always that risk. So I think standardizing the data medium for olfaction is the first step. And the second step, I think, would be to get extremely fluent in the state of the art, and not just what's within the last five years, but what's been done in the last 40 or 50, because there have been some incredible problems solved, maybe introduced in the 70s or 80s, that never went anywhere because there just wasn't strong enough interest, or maybe there wasn't enough compute. If you throw a bunch more compute at the problems they were trying to solve, you either get a more scalable solution or you get a solution that's much more accurate. Olfaction, and Scentience itself, is built on the shoulders of giants. The GC-MS, the gas chromatography mass spectrometry machine I was telling you about, a machine the size of a fridge, very accurate, takes very long to get a result, has been around for 50 or 60 years, and it's still being used because it works. But there hasn't been a whole lot of optimization on it. I mean, there has been a little bit, but not at the rate of the rest of the computing industry. So I think a thorough understanding of the state of the art will be hugely valuable for anyone that wants to enter olfaction, because it'll save you from spending a lot of money in the wrong direction, and from ingesting a lot of noise that doesn't have much signal about what's important for moving the industry forward, which AI model should be used, et cetera. I think those would be the two enablers. I wish I would have paid more attention to what had been done in the past for olfaction, because I assumed a lot of problems had been solved that hadn't, and I started solving problems that were already solved, which made me spin my wheels for a lot of the first few years. So I stopped building and just started reading, observing, and listening for probably the next year after that, to get an idea of what had been happening. I wish I would have done that from the outset.
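
(To make the "PNG for smell" idea concrete, here is a purely hypothetical sketch of what a standardized olfaction data record might contain; none of these field names come from Scentience or any existing standard.)

```python
from dataclasses import dataclass, field

@dataclass
class OlfactionFrame:
    """Hypothetical standardized record for one olfactory 'frame',
    analogous to a single image file, but for chemical sensing."""
    timestamp_ns: int                    # capture time
    sensor_id: str                       # which device produced it
    channels: dict[str, float]           # compound name -> concentration
    units: str = "ppt"                   # parts per trillion
    temperature_c: float | None = None   # ambient conditions matter a lot
    humidity_pct: float | None = None    # for chemical sensor response
    metadata: dict[str, str] = field(default_factory=dict)

frame = OlfactionFrame(
    timestamp_ns=1_700_000_000_000_000_000,
    sensor_id="nose-01",
    channels={"ethanol": 120.0, "benzene": 0.3},
    temperature_c=21.5,
    humidity_pct=40.0,
)
```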

[01:08:58] Audrow Nash: Yeah. I think that's quite cool. From your experience, it seems like sharing your knowledge on this kind of thing could be very valuable to guide people, because I think there are going to be some people who listen to this that want to go start a scent-related startup, now or in the near future, and making the path as clear as you can will benefit them. I don't know if you guys are intending to do blog posts or any community outreach kind of thing, but that could be really cool, to kind of educate an army of people that are going to go do startups for this kind of thing. That seems like a very valuable thing from my perspective.

[01:09:52] Kordel France: Yeah, I fully agree. We have a blog that's in development right now, so that's something to look forward to. And what we hope is, I mean, with Scentience, we really don't want to become an application-based company. We don't want to focus on one industry in particular. Think of Nvidia: they build GPUs, and they don't care what you use them for, whether you train an AI model, build a self-driving car, or work in the video game industry. So we want to be the olfactory processing unit company and build the set of APIs and the hardware that enable all the developers moving forward. Because if we tried to attack every application right now, it wouldn't work; we'd die of indigestion rather than starvation.

[01:10:34] Audrow Nash: Yeah, that's a good way to put it.

[01:10:36] Kordel France: Yeah. We want to go horizontal. If we can build a very cutting-edge, very accurate set of hardware, of sensors, of olfactory processing units, and then a set of APIs that are so easy to use that it's just cake for developers to integrate them with other sensing modalities, then there are so many startups that can be started just based off of that. Because the key enabler right now is: if you want to build an olfaction startup that does quality control on food or agriculture, or detects organic compounds on the surface of Mars, you need better sensors. So it starts with better sensors. And the people that use Scentience products to build those applications, I feel like, will be extremely wealthy in the future.
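
(Purely as illustration of the "APIs that are cake to integrate" idea, a hypothetical sketch follows; the class and method names are invented and are not a real Scentience API.)

```python
# Hypothetical sketch of a developer-friendly olfaction API.
# None of these classes or methods are real Scentience APIs.
import time

class ScentSensor:
    """Imaginary driver exposing chemical readings as a simple stream."""

    def __init__(self, port: str):
        self.port = port  # e.g. a serial port; connection is stubbed here

    def read(self) -> dict[str, float]:
        # A real driver would poll hardware; we return fixed demo values.
        return {"ethanol": 85.0, "acetone": 12.5}  # ppt

sensor = ScentSensor(port="/dev/ttyUSB0")
for _ in range(3):
    readings = sensor.read()
    # Fuse with any other modality you like: camera frame, mic buffer...
    print(readings)
    time.sleep(1.0)
```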

[01:11:31] Adoption Strategies

[01:11:31] Audrow Nash: Probably true. So, one thing with you talking about standards, and nailing the standard first and related things: did you ever read the essay on Lisp called "Worse Is Better"? Do you know it? It makes me think of that. The idea was that C beat Lisp because it was easy to implement on lots of different machines, so it wasn't necessarily better. Lisp is far better, and I love programming in Lisp; if there were a good typed Lisp with a good ecosystem, I would be programming in it all the time. But C won out, and Linux and all sorts of operating systems use it, and all sorts of software was written in it, much, much bigger than Lisp to my knowledge. I wonder if there's a first-mover advantage for this kind of thing, if it's a portable format and you just have data already, because someone interested in this area could bootstrap something with what you've already provided. Like, OpenCV still gets a lot of community use, and their software is pretty painful to use, in my experience, but no one else has taken the time to implement those algorithms because OpenCV already did, so it's not a big differentiator. I know SciPy does some of this stuff. But what do you think of that kind of comment, that perhaps there's value in just getting something out there to everyone so that it gets adopted and provides some data, versus nailing it down perfectly? Because getting something out there gets buy-in and makes it sticky. What do you think?

[01:13:38] Kordel France: I fully agree, and that's the direction we've taken with Scentience. We've got those two products that we pushed out, and while they're in a closed sale, they're still for sale. So we have customers that are using them, and we're trying to gather as much data as possible, as quickly as possible, so that we can start to build out a large library of chemical compounds. Sure, the products could be optimized even further, but that would take more resources and more money, and if we can just gather feedback to improve that product development cycle, we can mature the product much faster. We're trying to make data-driven decisions with, I would say, products that could still be optimized by iterating, with the hope that we can get sensors out there that people can then use to build out all the other applications they're thinking of. But yeah, I'm in full embrace of pushing things out before they're perfect to get as much data as possible, because I feel like if you wait until it's perfect, you can get overcome by competitors, or that last 10%, which is always the hardest part, could be in the wrong direction. And I'm a huge believer in talking to customers and letting customers drive some of the decisions you make, not all of them, but some of them, to inform what's going to sell faster. If you get the right customer that's going to give you feedback on how things can be improved, you can use that to build a product that they love, that they will go buy and spread the word on, and they become kind of your own marketing team to some degree. So I fully agree with your thoughts.

[01:15:25] Audrow Nash: Okay. That's really cool. I really like that you're heading in that direction. Tell me about data privacy for customers that are using your products to help you get data. Tell me a bit about that whole pipeline of how you're absorbing data from other people using it. It sounds very valuable once you have the data. And it's also interesting how you would label it, to understand the context of what it is, because otherwise it's just kind of arbitrary numbers in your schema of RGB-like things for smell. So tell me about your whole data harvesting from customers, or data harvesting from devices that are out there.

[01:16:11] Kordel France: I suppose so. Well, there's the Scentience app that connects to the devices to stream data and give you more insight into what's going on and what the device observes. On that app, there are two models. There's the edge model, which requires no internet connection: it never talks to the database, and that's in our agreement, so nothing is saved. The intent of that edge model is basically to gain trusted customers by saying, we're not harvesting your data; we're not doing anything with it. But there's a way to opt in to a more sophisticated olfaction vision language model. That model is larger, so it lives in the cloud and it makes requests, and it does relay data back to a database to be able to better predict and detect what's going on around it. So we're trying to give customers the ability to work both ways. Now, the one that lives on the device gets slightly personalized over time. We have to constrain it a bit, because as it learns, we don't want it to become a runaway train, but we honor the promise made to customers that none of their data will touch our databases; we're not saving anything from it. So, for example, if the device does not detect a whole lot of benzene, and for several days there's no benzene detected, maybe we adjust the weights a bit to focus more on other compounds besides benzene.
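
(A minimal sketch of the kind of on-device sensitivity adjustment described above; the decay rate, floor, and update rule are invented for illustration and are not Scentience's actual scheme.)

```python
# Hypothetical on-device sensitivity adjustment: compounds that go
# undetected for a while get down-weighted, but never below a floor,
# so they stay recognizable (per the benzene example above).
DECAY = 0.95       # multiplicative decay per day without a detection
FLOOR = 0.10       # minimum sensitivity; a compound is never forgotten

def update_sensitivities(sensitivity: dict[str, float],
                         detected_today: set[str]) -> dict[str, float]:
    updated = {}
    for compound, s in sensitivity.items():
        if compound in detected_today:
            updated[compound] = 1.0                 # reset on detection
        else:
            updated[compound] = max(FLOOR, s * DECAY)
    return updated

weights = {"benzene": 1.0, "ethanol": 1.0}
for day in range(10):
    weights = update_sensitivities(weights, detected_today={"ethanol"})
print(weights)  # benzene decays toward the floor, ethanol stays at 1.0
```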

[01:17:50] Audrow Nash: So it prunes itself, in a sense. Right. Like, these aren't that important; we won't worry about them that much. Okay.

[01:17:56] Kordel France: They're still recognizable, but their sensitivity goes down, because the model adjusts itself to not attend to them as much. However, the one that lives in the cloud is taking in a bunch of data and learning from everyone, to try to make this universal model that can detect every compound in every condition. So there are different ways of handling it. We'll see which one ends up maturing faster, or I guess gathers trusted customers faster. Maybe people don't care that we're saving olfaction data; it's a new kind of data. But perhaps at the end of the day they will. I mean, dogs authenticate humans by scent, right? They first recognize that you're their owner by smelling you. So in the future, that may become personally identifiable information. Is that something we'll run into, your olfactory signature being PII? Maybe. I don't know, but,

[01:19:00] Audrow Nash: We'll cross that bridge later.

[01:19:02] Kordel France: Yeah. We're trying to put the necessary guardrails in place to be ahead of that, so that if the future does go that way, people look back and say, wow, Scentience was thinking about our privacy from the very beginning, even though it wasn't a law or a standard yet.

[01:19:18] Audrow Nash: I like that. Hell yeah. Okay. So how do you gather the data? How do you label it? How do you make sense out of it? It strikes me that you probably need a good bit of context about what's going on to make it useful. Or do the customers tell you? How does that work?

[01:19:40] Kordel France: With the vision capabilities of the app. The device interacts with different audio and vision APIs for different cameras and microphones, and if there's no camera tied to the API, no data attached to it, then in the app you have the option to turn on the camera, and audio too. So we're contextualizing based off of the scene that the camera sees and the audio around it. Now it kind of makes more sense why that data might not be saved, right? We don't want to film people's homes and save that data wherever.

[01:20:12] Audrow Nash: They're using it. Yeah.

[01:20:14] Kordel France: Yeah. So it's contextualized based on what it sees in the image, and it tries to relate: okay, I see these compounds, how do I map them back to the objects in the image with high enough confidence? And then those mappings are saved. If it's not the edge model, those mappings are saved to the database, where we can then analyze and say, okay, was this correct? Did we predict that the alcohol scent was coming from the fruit across the room, or that the carbon monoxide was probably in an outside environment where there's a lot of vehicle traffic? Does it make sense, contextualized based on the image and the text that was sent with the model? That process gets fine-tuned over time. The way we build it is: we try to make the model as intelligent as possible, then we prune it down, and that becomes an edge model that lives on the device and is intelligent enough. And that cycle recurs: we build another big model, prune it down, and it just updates over time.
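
(A toy sketch of the compound-to-object association step described above; the prior table and confidence threshold are invented, and a real system would use learned models rather than a lookup table.)

```python
# Toy version of mapping detected compounds to objects seen by a camera.
# The prior table and confidence threshold are invented for illustration.
COMPOUND_OBJECT_PRIOR = {
    ("ethanol", "fruit"): 0.8,    # ripe or fermenting fruit emits ethanol
    ("ethanol", "person"): 0.2,
    ("carbon_monoxide", "car"): 0.9,
    ("carbon_monoxide", "fruit"): 0.01,
}
THRESHOLD = 0.5

def associate(compounds: list[str], objects: list[str]) -> dict[str, str]:
    """Return compound -> most plausible object, if confident enough."""
    mappings = {}
    for c in compounds:
        scores = {o: COMPOUND_OBJECT_PRIOR.get((c, o), 0.0) for o in objects}
        best = max(scores, key=scores.get)
        if scores[best] >= THRESHOLD:
            mappings[c] = best   # only save high-confidence mappings
    return mappings

print(associate(["ethanol", "carbon_monoxide"], ["fruit", "person"]))
# {'ethanol': 'fruit'}  (CO has no confident match in this scene)
```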

[01:21:18] Audrow Nash: Yeah. Hell, yeah. The customers that you have already, are they startups that are using this, or individuals, or companies that are trying to optimize some process? I'm just imagining how valuable it would be to have, like, your sensor on a stick with a camera, and someone goes and bops all the fruit in the grocery store kind of thing, or holds it over the fruit for two seconds, five seconds or whatever, I don't know, based on the refresh rate. What kinds of uses are they putting it to?

[01:21:52] Kordel France: So for the smaller device, there are a lot of agricultural companies using it, mostly agricultural, which was surprising but makes total sense now that I see a lot of their use cases.

[01:22:04] Audrow Nash: That's one I would have guessed, too. Yeah.

[01:22:06] Kordel France: Yeah. And then for the coffee-maker-sized one, research institutions are expected to be the biggest customers, because so far, with the conversations we've had, they want to be able to profile the air for various experiments. And while that application is not necessarily helpful to Scentience itself, the data we get is, because research institutions have all these different chemical compounds that you need licenses for, which can be very difficult to get unless you have, you know.

[01:22:37] Audrow Nash: Super clearances of some sort.

[01:22:39] Kordel France: Exactly. So we get a bunch of data from them, with an agreement that allows us to improve our olfaction models over time. We would never be able to get, for example, fentanyl or explosives, right? How are we going to get that anywhere else other than a research institution, unless you have really great friends, I guess.

[01:22:59] Audrow Nash: But, what a thing.

[01:23:02] Kordel France: Yeah. So that helps us. I mean, if we want to be able to do things like border security and drug detection, we've got to have that target somehow. So being able to acquire that stimulus from the research institutions is going to be super helpful.

[01:23:18] Audrow Nash: That is super cool. Yeah, what a neat thing. I really like the idea of just having a stick that has a camera on it, and you go put it up to everything and see what it says. And then after a bit, it learns what the thing is, so you can identify the same object or something like that. Something like that would be so cool, because one of the things that was most enjoyable about working with a thermal camera at some point was that I had one of those little handheld things with a camera on top, so I'd just hold it and walk around looking through it. Having some sort of thing like that for scent would be amazing. I would absolutely love that. If you make one, please, I would love it. I'll spend a bunch of time walking around bopping stuff with it and trying to see what it thinks.

[01:24:09] Kordel France: I guess, since there's a Scentience app, right, if you put your iPhone on a selfie stick and you have the sensors close, you can kind of replicate the same thing.

[01:24:18] Audrow Nash: Yeah, yeah. So cool.

[01:24:20] Kordel France: And if you imagine, from a robot perspective, these humanoid robots, right: while their eyes aren't on a stick, the cameras are on their head. And as they walk around, if they have a sense of smell, they can start to learn, like, pick it.

[01:24:32] Audrow Nash: Up and.

[01:24:33] Kordel France: Yeah, associations between objects and chemicals and everything. There's a lot of opportunity there that I'm excited about.

[01:24:40] Future Predictions

[01:24:40] Audrow Nash: Yeah. Yeah. It seems like a super exciting space. So, looking out into the future, what do you expect the timeline of things to be? We said it'll kind of be bigger in five years and it'll asymptote at reaching everyone, basically, in ten years. What do you predict for years one, two, three, four, five? How do you think everything will go? What do you see for Scentience, looking out into the future?

[01:25:12] Kordel France: I think we're going to converge on equal adoption for olfactory sensors and olfactory data, in a similar manner to what we have for cameras and microphones and audio data. So in the future, instead of just seeing something like ChatGPT with vision language capabilities, there will be olfactory capabilities backed by some sort of sensor, right? My ultimate goal, and what I think will happen, is that those sensors will be integrated into wearables and our smartphones. Like we have a microphone as an audio sensor and four cameras on every smartphone, we'll have olfaction sensors that provide real-time air quality monitoring on the phone itself, and within wearables. The interesting thing with wearables is, there's air itself, there's your breath, but with the same technology you can look at sweat and all these other data points on your body: chemicals that are emitted from your skin, or chemicals in your sweat or your saliva. I feel like a lot of the smartwatches and everything in the future are going to have chemical sensors that give a much better indication of what's going on in your body. And I really hope that, with breath, someone makes it through the clinical trial process, because breathalyzers that can analyze the data in your breath in real time have so, so many applications. But clinical trials take a lot of money.

[01:26:44] Key Takeaways

[01:26:44] Audrow Nash: Yeah, and medical device companies take a long time. It sounds so exciting. What do you hope that our listeners and watchers take away from this whole conversation?

[01:26:59] Kordel France: I hope there's a lot more awareness about all the applications olfaction can contribute to, and all the possibilities we can unlock by utilizing the data in the air around us. And if there's motivation to move forward, I hope they'll consider using Scentience data and sensors to springboard their applications into whatever industry or use case they might decide on. And I hope there's just a much bigger embrace of olfaction AI moving forward, so that we can really build it on par with the rest of the modalities and give robots the sense of smell.

[01:27:42] Audrow Nash: That's awesome. Let's see. And then, are there any links or any other information you'd like to share with listeners and watchers?

[01:27:53] Kordel France: Yes. So the app is going to be available on the App Store next week. And there's our website, Scentience spelled with "scent", like the sense of smell. There's not a whole lot on the website right now, because we're still in stealth by agreement with our investors, but within the next few months you'll see a lot more there. And there's an option to sign up for our newsletter online as well, so that you can get a lot more insight into olfactory applications and everything you can do with Scentience olfaction sensors.

[01:28:31] Audrow Nash: Oh, yeah. Awesome. Well, this has been super interesting. This is something I don't consider often, and I think there is a lot of potential around this. I can't wait to see the giant wave of startups that come out of this.

[01:28:45] Kordel France: Likewise. Yeah, thank you so much for your time, and thank you to your audience. I'm a big fan of the podcast and a big fan of all the guests you bring on. So thanks for being an inadvertent teacher to me in every episode.

[01:29:00] Outro

[01:29:00] Audrow Nash: Oh, yeah. Thank you very much. All right. Bye, everyone. You made it. What an interesting idea to use smell for a whole number of things. What killer applications do you think smell could be used for? I'd love to know in the comments. I'm going to be thinking about this for a while. Anyways, that's all for now. See you next time.