Archive FM

Data Skeptic

Neuroscience from a Data Scientist's Perspective

Duration:
40m
Broadcast on:
20 Nov 2015
Audio Format:
other

... or should this have been called data science from a neuroscientist's perspective? Either way, I'm sure you'll enjoy this discussion with Laurie Skelly. Laurie earned a PhD in Integrative Neuroscience from the Department of Psychology at the University of Chicago. In her life as a social neuroscientist, using fMRI to study the neural processes behind empathy and psychopathy, she learned the ropes of zooming in and out between the macroscopic and the microscopic -- how millions of data points come together to tell us something meaningful about human nature. She's currently at Metis Data Science, an organization that helps people learn the skills of data science to transition into industry.

In this episode, we discuss fMRI technology, Laurie's research studying empathy and psychopathy, as well as the skills and tools used in common between neuroscientists and data scientists. For listeners interested in more on this subject, Laurie recommended the blogs Neuroskeptic, Neurocritic, and Neuroecology.

We conclude the episode with a mention of the upcoming Metis Data Science San Francisco cohort which Laurie will be teaching. If anyone is interested in applying to participate, they can do so here.

[ Music ]

>> Data Skeptic features interviews with experts on topics related to data science, all through the eye of scientific skepticism.

[ Music ]

>> Laurie Skelly earned a PhD in Integrative Neuroscience from the Department of Psychology at the University of Chicago. In her life as a social neuroscientist using fMRI to study the neural processes behind empathy and psychopathy, she learned the ropes of zooming in and out between the macroscopic and the microscopic, how millions of data points come together to tell us something meaningful about human nature. She's currently at Metis Data Science, an organization that helps people learn the skills of data science to transition into industry. Laurie, welcome to Data Skeptic.

>> Thanks, Kyle.

>> Awesome, really glad to have you here. So, I asked you to come on the show largely to discuss how your background in neuroscience overlaps with data science. I think it's a really interesting intersection we don't hear a lot about. I also have heard you described as a social neuroscientist. Maybe could you start by telling me what the social specialization is?

>> Yeah, absolutely. When you refer to a social neuroscientist or a cognitive neuroscientist, a lot of times it just refers to the particular area of study that you're looking at. So, in my lab at the University of Chicago, we were interested in the interactions between individuals of the same species. A lot of neuroscience is just looking at one organism as if it were in a vacuum, but we're interested in how organisms interact, which is a huge part of any organism's life. That can be anything from studying how ants in a colony create one giant collective mind, to studying, within a human brain, how people perceive each other, how attention is grabbed by social information, and things like that.

>> Oh, interesting. I would guess, and I'm coming at this as an outsider, but when I think about neuroscience, I'm drawn to the fMRI machine. So, first, before we get into that, because I'll ask you a lot about it, am I right in bringing that prejudice to the table? Is that a cornerstone piece of technology? Or are there other things that should be in the limelight a bit?

>> The fMRI machine also drew my attention, as we'll talk about, but it is only one tool that people use. Even in studying humans, there are other ways to map the brain, using EEG, for example. You might have seen people with electrodes all over their head. There's something called MEG, where you can measure the tiny magnetic changes inside of someone's head. People undergoing brain surgery can have an array of electrodes placed directly onto their brain and measured from, and then if you get into the animal world, there are all sorts of techniques where you can work with cells directly. You can record from living animals as they go around and do things. So, there's a huge toolbox available to people studying the brain and nervous system, and fMRI is just one of them.

>> I would guess it goes without saying that a tool like fMRI is producing a really high velocity of massive amounts of data. Can you provide some scope on the types of measurements these machines produce and how much data is generated?

>> Yeah, it is a lot of data, and I guess you could call it high velocity, but it comes in spurts usually, right?
So, in order to do an fMRI experiment, I have to get funding, probably from the government, plan the study, get approval, go over it with a fine-tooth comb, and then collect some data from my subjects, and the number of subjects is usually small. The biggest study I ever did myself was 80 people. In that time of data collection, it's pretty high velocity, but over time, it's not something like Twitter where the data just keeps coming and coming and coming. So, we don't really have such a big data problem that we can't use pretty normal equipment to deal with it, but just to put it in scope: if you do a functional MRI experiment, you're taking measurements from the brain and you're dividing the brain into little cubes called voxels, and a traditional size of a brain image could be, let's say, 64 voxels by 64 by 30. So, you're chopping the brain into this rectangle of rectangles, and that's between 100,000 and 150,000 voxels. Then you're taking those images every one and a half to two seconds, and your experiment might last five minutes. So, at the end of the day, I think I calculated that for that 80-subject study I ended up with about two billion data points, which is a lot of data, but it's not a lot, lot, lot of data. And further down the road, when you're analyzing the data, you might need to do some matrix transformations that would mean a whole lot of memory or some tricky computation. But for the most part, I would definitely call it medium data rather than big data, for sure.

Makes sense. Interesting, you described the grid layout, 64 by 64 by 30. There's a number that sticks in my head, could be wrong, so feel free to correct me if it is, but the number of neurons in the human brain is roughly 10 to the 10.

Yeah.

So, when I put these two metrics next to each other, it looks like you have an incredibly low-resolution tool. Is that accurate, or am I looking at it the wrong way?

That is accurate. We have medium resolution spatially and kind of low resolution temporally. Also, you know, an action potential is on the order of milliseconds, or like a couple of milliseconds, and I'm taking measurements in 1.5-second chunks. It's a crude tool, but it is a way to look inside of the mind of humans without cutting them open, so it's still quite valuable.
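To put Laurie's back-of-envelope figures in concrete terms, here is a minimal sketch of the data-volume arithmetic she describes, using round numbers from the conversation (a 64 x 64 x 30 voxel grid, one volume every roughly 1.75 seconds, five-minute runs, 80 subjects); these are illustrative assumptions, not the actual study protocol:

```python
# Rough fMRI data-volume arithmetic using the round figures quoted above.
# Illustrative numbers only, not the real study parameters.

voxels_per_volume = 64 * 64 * 30                      # ~123,000 voxels per brain image
tr_seconds = 1.75                                     # one volume every ~1.5-2 seconds
run_minutes = 5
volumes_per_run = int(run_minutes * 60 / tr_seconds)  # ~171 volumes per run
subjects = 80

data_points = voxels_per_volume * volumes_per_run * subjects
print(f"voxels per volume: {voxels_per_volume:,}")    # 122,880
print(f"volumes per run:   {volumes_per_run}")        # 171
print(f"total data points: {data_points:,}")          # ~1.7 billion
```

At four bytes per value that is on the order of 7 GB of raw numbers, which is why Laurie calls it medium data rather than big data: it still fits comfortably on a single machine.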
Definitely. So, could you tell us a little more about the 80-person study you were working on?

In the 80-person study, we were working with a group in New Mexico, and we actually had permission to go into prisons and scan the brains of people who may be criminal psychopaths. So, we were using a common battery to measure their level of psychopathy, and we were showing them very simple images related to empathy. For my studies, it was things like little slideshows, little animations of people hurting each other or shunning each other, and then simple videos of people expressing emotions. We were considering these as some of the building blocks of empathy, to see if the brains of psychopaths at that low level differed from normal humans, or if their deficits in empathy came somewhere down the line.

And what did you guys end up finding?

In my studies, I had the hypothesis that psychopaths actually should be pretty normal in those early stages of emotion perception, at least in some of the basic areas of perception, right? Because a lot of psychopaths are very skilled con men, and they're good manipulators, and it's my thought that in order to do that, you have to be able to read people pretty well. So, as far as paying attention, attention areas and perception areas should be really similar, and then maybe some areas involved in emotions should start to differ around that point. And that's basically what we found, especially in the ways that those different brain areas interact with each other. That's where we found the biggest differences.

So, in working with all that data, I'm curious if you're using some of the tools that'll be familiar to data scientists, whether it just be plain old SQL or Hadoop and Spark, or does the field of neuroscience need specialized tools?

Well, whether it needs specialized tools is an interesting question. I think the convention has come to be that there are a few pieces of software that we gravitate toward, just because there's a lot of development and active use. Of those, I was a MATLAB fan, and there's a package for MATLAB called Statistical Parametric Mapping, or SPM. There are also programs called FreeSurfer, which is open source, and BrainVoyager. I think those are the three most popular tools that people use. As far as storage, I know that some people are starting to do distributed processing, but again, even if I had a job that took a day to process, I rarely cared, because there was so much other stuff to do. I could work on my lit review and work on my papers while it was chugging, so I didn't really feel the pressure to learn new technologies at that point to work with the data. There are other projects that have giant collections, especially with the BRAIN Initiative, where they want to be rolling through data very, very quickly that they're collecting from multiple sites, and they probably use some more conventional big data technologies that would be familiar to data scientists, but in my experience, I never ventured into there during my time as a neuroscientist.

So I would suspect that, as you point out, fMRI is a noninvasive technique, and because of that, it's not as precise as if we were sticking needles and electrodes and whatnot into the brain, and as a result of that noninvasive nature, you're going to have a significant amount of noise and imprecision. So I guess my question is first, is that correct? And if so, what are some of the challenges for cleaning up that portion of the data?

I think that intuition is quite correct. There are so many sources of noise and imprecision, as you say. My guess is you'll never hear an fMRI researcher complain about data cleaning in messy, normal real-world data, because it's such a luxury compared to what we're used to. Just to run through a list of the various things you have to do to get your data into shape: the first, kind of the elephant in the room, is that when you're measuring using an fMRI sequence, you're actually measuring something called the BOLD signal, which is the blood-oxygenation-level-dependent signal. So you're measuring the difference between oxygenated and deoxygenated hemoglobin in the blood in the brain. So you're actually using vascular changes to infer neural activity, which is believed to be a really reliable substitute or marker of underlying regional neuronal activity. But that's just the first addition of noise to the system. Also, when you're taking those 1.5-second pictures of the brain, you're actually taking those slices of the brain in sequence.
So it takes me a second and a half to go through and take those, like, 30 slices of brain imaging, so they're not all taken at the same time. First, I need to do a slice timing correction to kind of make those all seem to be at the same time, or to correct for that in my model. Then, people are not perfect. They're going to fidget in the scanner. I need to do some rigid body motion correction to move everything back into the same place again. And because of the field inside of the scanner, if you remember an MRI scanner is a giant magnetic field, that can introduce a little bit of extra noise too, when I'm moving my signal back around within the space that it's inferred that it came from. From there, I take that whole corrected series and I sort of get it into a standard space, which involves another rigid transformation. Also, if I want to compare two brains to each other, I might need to warp them into a common space. Now, there are some techniques that allow you to get around that, but most fMRI studies will involve warping each person's brain onto a standard brain so that each voxel matches up to areas that you can make inferences about. Then you will want to do something like a high-pass filter to get rid of signal drift from the scanner. As the scanner works over the span of minutes, the signal starts to drift. And finally, you'll probably do some spatial smoothing. You can use a Gaussian kernel to smooth the signal. And after all that, talk about resolution, we actually usually discard a little bit of it to boost the signal and to sort of statistically prepare ourselves for some of the assumptions we'll be making later. So that's a lot of data cleaning to do.

Absolutely. Percentage-wise, what's it break down to in terms of cleaning versus analysis?

The good thing about cleaning is that a lot of times you can totally automate it. So you set up your pipeline and let it go and just check in with it. It gets dangerous if you're not doing some manual sanity checks to make sure everything is going properly. But once you have your preferences set up, you can make it do itself, which can be true in other kinds of data collection as well. But that's the nice thing: it takes a really long time, but it doesn't take a lot of my time after I've done it maybe once or twice.
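None of these cleaning steps is exotic from a data-science point of view. As a rough illustration only (not the SPM or FreeSurfer pipelines Laurie actually used, and skipping slice-timing, motion correction, and spatial normalization), two of the simpler steps she mentions, drift removal and Gaussian spatial smoothing, might look something like this with NumPy and SciPy:

```python
import numpy as np
from scipy.signal import detrend
from scipy.ndimage import gaussian_filter

# Toy 4D run: 16 x 16 x 8 voxels, 120 time points (a stand-in for real data).
rng = np.random.default_rng(0)
run = rng.normal(size=(16, 16, 8, 120))

# 1) Drift removal: subtract a slow linear trend from every voxel's time
#    series (a crude stand-in for the high-pass filter Laurie mentions).
detrended = detrend(run, axis=-1)

# 2) Spatial smoothing: convolve each volume with a 3D Gaussian kernel.
sigma_voxels = 1.5   # kernel width in voxels, roughly a few millimetres
smoothed = np.stack(
    [gaussian_filter(detrended[..., t], sigma=sigma_voxels)
     for t in range(detrended.shape[-1])],
    axis=-1,
)

print(smoothed.shape)  # (16, 16, 8, 120)
```

In practice each of these steps has many more knobs (filter cutoffs, kernel widths in millimetres, interpolation choices), which is exactly why the automated pipelines Laurie describes still need manual sanity checks.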
Do neuroscientists use machine learning techniques that will be familiar to most data scientists?

Yeah, they absolutely do. There's modeling and modeling and modeling some more. The interesting thing was, when I moved from neuroscience to data science, it took a while to figure out that some of the models I had used in the past were models that are being used in data science, because they don't always have the same names. So the most common tool that's used in the statistical analysis of fMRI is a general linear model. In this case, you sort of take each voxel's time series and, well, that's like your outcome variable. And then as your predictor, you create this other time series, where you take sort of the time series of what you showed the people in the scanner, or what they did, and you convolve that with the predicted shape of what a brain response would be. And then you kind of fit those together. So those are your predictors and your outcomes. You just have this great big, you know, linear algebra matrix and you fit the model to that. The different parameters in your model will be what you showed the people in the scanner. There might be different types. You convolve each of those with that hemodynamic response function, your predicted curve of what the brain response would look like, and you use those to predict the actual signal that you got from each voxel. So in that case, you're basically just doing a multiple linear regression for each voxel in the brain. There are other types of modeling that data scientists will be familiar with. With so much data, you can imagine you might want to do some feature reduction, so you can use PCA and ICA. Also, we're really interested not just in what each voxel is doing, which has a limited amount of interest, but also in how different brain areas are working together. So once you've gotten some sort of initial processing done, you can use different models to see how brain areas work together. You can use dynamic causal modeling and structural equation modeling, which I think are best thought of as types of graphical models. You can also use a model called psychophysiological interaction, which is not going to be a familiar term to machine learning specialists, but it's just another way that you can use the physiology to relate regions to each other. You can have a seed region as a predictor voxel and then see how its correlation with other brain regions changes as you control for what they're actually doing. So you can have a task-dependent correlation between brain regions.

So I feel a little bit obliged to ask, just because it's sort of, I don't know if it's meta or if it's inbred, but what about neural networks used as a technique for studying neuroscience?

Well, I think that's a really interesting question. And something that's been really surprising to me is how infrequently people are using some of the, you know, machine learning 101 best practices to test their models and make sure that they're good. So I don't see that many studies that are using training data and then testing on a held-out subset of their data. They sort of just fit a model and that's it. So I'd imagine that there's a good degree of overfitting going on. There are definitely exceptions to that, and some of those studies would be ones where neural networks might be appropriate. I can't call to mind right now any specific ones, but I'd imagine some of the sort of mind-reading experiments, some of the really cool tricks that people have been doing with training models to predict what people are doing based on their brain activity, I'd imagine that some of those might be using neural networks.

So a lot of your work studied the neural processes around empathy and psychopathy. Can you describe how you measure these functions in the brain?

They're incredibly difficult to measure. Empathy is a pretty high-level concept, and one of my favorite papers just talked about eight or nine definitions that they'd found in the literature of just what empathy was. So our lab had a model of components of empathy, or things that are necessary for empathy to occur. We just started at the lowest level of that and started to work our way up. So that's why I was looking at low-level emotion perception responses, or watching other people experience pain, to just start at that perceptual level. To actually be studying empathy, you'd really need to get to a point where you'd have either a measure of whether someone cared about it or whether someone was motivated to help the person who was in pain. And that makes it even more difficult, because the farther away you get from sheer perception, the more murky things get.
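The per-voxel general linear model Laurie describes above maps directly onto ordinary least squares. Here is a toy sketch with a single boxcar regressor and a crude gamma-shaped stand-in for the canonical hemodynamic response function; the numbers and the synthetic data are invented for illustration, not taken from her studies or from SPM's design-matrix machinery:

```python
import numpy as np

# Toy per-voxel GLM: convolve the stimulus timing with a hemodynamic
# response function (HRF) to build a predictor, then fit ordinary least
# squares at every voxel.

tr = 2.0                                   # seconds per volume
n_vols = 150
hrf_t = np.arange(0, 30, tr)

# Crude gamma-shaped HRF (illustrative shape, not the canonical SPM HRF).
hrf = (hrf_t ** 5) * np.exp(-hrf_t)
hrf = hrf / hrf.sum()

# Boxcar of stimulus presentation: "on" for 10 volumes out of every 30.
boxcar = ((np.arange(n_vols) % 30) < 10).astype(float)
regressor = np.convolve(boxcar, hrf)[:n_vols]

# Design matrix: intercept + convolved task regressor.
X = np.column_stack([np.ones(n_vols), regressor])

# Fake data: 500 voxels, the first 20 of which actually respond to the task.
rng = np.random.default_rng(42)
Y = rng.normal(size=(n_vols, 500))
Y[:, :20] += 3.0 * regressor[:, None]

# One least-squares fit per voxel (solved for all voxels at once).
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
task_betas = betas[1]                      # per-voxel effect of the task
print(task_betas[:20].mean(), task_betas[20:].mean())  # responders vs. rest
```

The estimated task coefficient at each voxel is what eventually gets thresholded into the activation maps discussed later in the conversation.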
So sometimes I was really envious of pure perceptual researchers, because their assumptions got to be so clean. That was really difficult to model, and we did the best job we could. But even the definition of psychopathy is really difficult to formalize, so there's a lot of noise going into that research from both of those sides as well.

I read, I don't know if you're familiar with it, but Jon Ronson's book, The Psychopath Test.

Yeah, I loved that book.

Yeah, yeah. Is that a way of measuring it that a neuroscientist would agree with, or is that sort of more of a soft approach to measuring this abstract concept?

He's describing the test that I used in my research, the PCL-R, or Psychopathy Checklist-Revised. That is the test that we use. It's administered in a bit more rigorous way when it's used for research. There's corroboration with someone's file, or hopefully with other people who are close to them and have observed their behavior. But that is sort of the state of the art in a lot of psychopathy research, even though there's ongoing debate, and even conferences, about how people can do a better job of measuring psychopathy. But yeah, I really liked that book because it was accessible, it was a fun read, and for books about things that I've studied, it had an extremely low cringe-per-page ratio.

Is that a metric?

Yeah, some things I read, I'm just like, oh no, but Jon Ronson's book was pretty good. Very few cringes.

So I'm curious a bit more about the experiments you were working on. You've pointed this out a little bit: different people have different reactions to things. So, let's say, a puppy being injured is going to affect some people more than others. Or, I don't know, if you picked a scene from a TV show that was in the sixth season, someone who's watched seasons one through five is probably going to be more empathetic to it than an outsider. How do you kind of universalize things to measure this abstract idea of empathy?

That's a really good question. I think that one strength is that the stimuli that I used should have been sort of universally boring. So there weren't any puppies, there weren't any back stories. It was just sort of, oh, here's a woman and then a man pushes her. Here's a man and he trips over a thing. So they're very simple and not designed to cause a great big emotional response. We're not trying to bring anyone to tears. But at some level, the brain is processing things from a very simplistic, kind of feature-based level, and those get passed forward and forward and forward into kind of our understanding of things. So from just processing the shapes and the edges and the colors of what you're looking at, up to recognizing them as people, recognizing actions happening, there should be some "hey, that's not expected" response, and then maybe emotional centers would start to get involved. So we weren't looking for giant emotional responses. Those areas should be responding even at low levels, before, say, it would even be visible on your face or to external observers.

It's fascinating that the fMRI is a precise enough instrument that it can pick up on these simple responses. And along similar lines, I've always been really impressed with the degree to which insight can be found from this relatively low-resolution device. It's just studying what I always thought were electrical signals, but I guess there's some capillary action you were mentioning as well. Do you have any insight into how these small connections and signals can be studied?
We're not able to study individual neurons, which are, I guess, at some atomic level, the source of emotional response. But we're able to study how these aggregate up. Do you have any insight into how that aggregation phenomenon gets generated?

Yeah. Well, I guess an analogy would be if you were standing in a server farm and you had racks and racks of servers and you had a detector that let you know when one server's processors were running, right? You wouldn't need to go down to the level of the binary switch to understand that that server was running. Even though I'm not looking at individual neurons, and I'm not sure that it's even correct to say that those are the atomic source of emotional reactions, they're really just processing elements in some sort of network that's producing that response. And if we use the assumption that those calculations are often being done in sort of a spatially contiguous area, then that starts to take away some of those issues. Now, that uses a fair amount of assumption, and that's a really interesting area as well. But the fact that I can't study one neuron at a time doesn't mean that I can't learn a lot about areas of the brain that are involved in processing of various activities.

I've seen a few headlines that probably have a very high, what did you call it, cringe-to-page-turn ratio, along the lines of being, you know, I guess we could label them psychic, whether or not they use that word. On the low end, there are claims about maybe measuring a subject's response or choice before they're consciously aware they made it. And then on the more extreme end, I've seen papers saying that by studying specific neural areas, they're able to reinterpret the actual letter that a subject saw, to actually read the mind, in some sense, of the subject. Could you share your perspective on these sorts of lines of inquiry?

Yeah, I think that the first study you were referring to, about making a decision before you actually do something, that's a pretty classic study. It wasn't fMRI, because fMRI doesn't have the temporal resolution to be able to do that. But I think that study was about whether or not you're going to hit a button. And just as information from your senses is being pushed forward and forward and forward into more abstract areas, information about how you're going to act in the world starts in abstract areas and moves, you know, kind of down and down and down until it's actually controlling your muscles. So you could be thinking about kicking your legs, right? And somewhere, if you really think about kicking your legs, like if you have an athlete's visual imagery of that, at some level a part of your brain representing your legs is firing away as if you were doing it. And then somewhere up the line from there is some brain area that's probably putting the brakes on that and saying, no, don't actually kick your legs. Then, if you decided to really kick your legs, there would be brain activity that would take the brakes off of that sort of flow from idea into action, and some other area would say, yeah, go kick your legs. And so it is feasible to catch that signal. If you know what kind of action someone's going to take, then you can pinpoint that area, measure that activity, and in that very contrived situation sort of know that they're going to do something before they do it. That one is low cringe. And it's fascinating that they actually did that.
But again, you can see sort of the amount of setup you need to get that to happen. And similarly with the mind reading experiments that do happen in fMRI: I think the first one I saw was whether people were looking at a face or a place. Then you had one where people were reading different words and they could predict which word the person had been reading. And then the coolest one, I think, was from Jack Gallant's lab at Berkeley. They had people watching YouTube movies, and then they were watching a new test set of movies, and they could sort of reconstruct really bizarre, ugly-looking versions of what they were looking at. And that was like the creepiest, coolest one. But the kicker is, for all of that, first off, you have to have someone in the scanner, and you have to have a whole lot of information about what kind of thing they're doing. It wasn't out of all possible activities; I know that you're watching a YouTube movie and it looks sort of like this. The second issue is one that's very familiar to machine learning practitioners, and that's labeled data, right? So each of these has a ton of labeled data. In the word reading experiment, they'd have them read the words over and over and over. In the faces and places one, they would have them look at lots of face and place scenes. And in the YouTube one, they watched, I think, hundreds of hours of YouTube videos to construct models of how their brains responded to shapes, colors, and edges, so that they could then show them a naive movie and guess what it was based on how their brain was responding. Now, that's not to say that that isn't just fantastically cool and a really huge leap in how machine learning and brain research are working together, but it is not time to put on your tin foil hat just yet, because I don't think that anyone was surprised that they did that after they volunteered to watch so many YouTube videos. And they actually scanned themselves in that study; the researchers were the subjects, because you have to have someone motivated enough to really pay attention to the movies to train the network for a long time.

Interesting. Yeah, so I know that you and I are obviously not the same person, but we both have brains that have similar regions, yet there are striking differences that make us different people. Would it be fair to say that in training on those hours and hours of YouTube videos, these are highly specialized models that apply to a single subject, or is there something generalizable there?

At this point, yes. I am sure the Holy Grail is to train a network on me and predict what you are looking at, and I don't theoretically see why that shouldn't be possible. It might need a few more layers of fanciness to make it feasible, but to my knowledge, no one's close to that yet.

What's a good red flag for someone who's, you know, an armchair neuroscientist and is reading studies that might be a bit above their head? Is there maybe a heuristic we could give that's on the plausibility scale of what could be possible or what's just science fiction?

Yeah, I think that there's this gradient from the paper. So the scientist writes a paper. A lot of times their institution will do a press release, and then journalists will read the press release and write stories. So at each level of that process, more baloney is added to what they actually found. Anytime you see the word "proves," that's a big red flag. Scientists almost never say that they've proven anything. It's a big no-no.
With the mind reading experiments, if they say they can predict things, everything is a little bit fuzzier currently than a lot of the claims that are being made. I wish there were a place you could just go. I've always daydreamed about this neuro-debunking forum where you could just, like, Snopes a story and get some, you know, people to help you out. There are a few really good blogs, but they don't have complete coverage of the field, obviously.

Any of those you want to give a shout out to?

Sure. Some of my favorites, and they all have kind of similar names, are Neuroskeptic, which is a blog on Discover; I also love the Neurocritic, which is on Blogspot; and then Adam Calhoun has one called Neuroecology on WordPress. Those are three favorites.

I should give a shout out to one. Maybe you know it, but it's NeuroLogica. Yeah. Yeah. Yeah. Cool. Yeah. Steven Novella is one of the main writers. Or maybe that's only his blog. He's a big name in the skeptics world. So I found that through him, and I always really enjoy what he has to say there. But getting back into some of your more, I guess, would you call it lab work, when you were doing experiments?

I called it that, yeah. Yeah, I didn't have a lab coat or anything, but...

Well, I would guess that when dealing with all your data, there's some need to control for multiple comparisons. I've been told that in genomics, many researchers rely on the Bonferroni correction. In fact, with one of my previous guests, I was kind of pushing on this issue, and her response was, well, yeah, the Bonferroni correction is pretty strict, but we just increased our sample size. And I guess you can do that in genetics pretty cheaply, at least compared to the studies you were working on. I myself am really interested in false discovery rates as an approach to controlling for multiple comparisons. I'm curious in what ways neuroscientists deal with this issue.

The Bonferroni correction is, yeah, it's really neat, and it takes away all of your findings. We also call it family-wise error. If any neuroscientists are listening and are not quite sure what we're talking about, let's just go back to our brain, and it has on the order of 100,000 voxels. The way that you do a Bonferroni correction is you just multiply your p-value, or divide it, actually, by the number of observations you're doing. So you'd need, you know, 0.0000005 confidence to be able to say that that's beyond the reach of a false discovery, right? As you can imagine, that stings, and it is oftentimes too conservative. A false discovery rate is sort of the idea that once you've done your initial statistical thresholding, you take those voxels that have survived the threshold and you say, okay, well, I bet that 5% of these are due to chance, and then you would remove the upper percentage of those based on that. So that's a less conservative way to go about it. But there are other means as well. Especially in fMRI, we have the benefit of the spatial aspect of the data that we're looking at. So if you have one voxel that's sitting all by itself that's active, that's survived a statistical cutoff, that's fishy, but if you have a hundred voxels that are all sitting next to each other, that are spatially contiguous, that's a lot less likely to happen due to chance. So you can use some spatial extent cutoffs to help with your false discoveries, or your false positives. You can also use masking to reduce the number of observations that you're actually using in your model in the first place. Because if you have just that square that was 64 by 64 by 30, a lot of those voxels will be air or skull, or they'll be muscle. You might even take out white matter, you might take out the cerebrospinal fluid, and try to keep it down to just the gray matter voxels. So you can get rid of, a lot of times, three quarters of those observations and help yourself out a lot with that too.

That makes sense.
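To make the contrast Laurie draws between Bonferroni and false-discovery-rate control concrete, here is a minimal sketch on simulated per-voxel p-values; the voxel count and the fraction of truly active voxels are invented for illustration:

```python
import numpy as np

# Simulated per-voxel p-values: most voxels are null, a small fraction
# carry real signal. Figures are illustrative, not from the actual study.
rng = np.random.default_rng(7)
n_voxels = 100_000
p_null = rng.uniform(size=n_voxels - 2_000)
p_signal = rng.uniform(high=1e-4, size=2_000)
p = np.concatenate([p_null, p_signal])

alpha = 0.05

# Bonferroni / family-wise error: divide the threshold by the number of tests.
bonferroni_hits = np.sum(p < alpha / n_voxels)

# Benjamini-Hochberg false discovery rate: sort the p-values and find the
# largest rank k with p_(k) <= (k / n) * alpha; everything up to k is kept.
order = np.argsort(p)
ranked = p[order]
thresholds = alpha * np.arange(1, n_voxels + 1) / n_voxels
passing = np.nonzero(ranked <= thresholds)[0]
fdr_hits = passing[-1] + 1 if passing.size else 0

print(f"Bonferroni survivors: {bonferroni_hits}")
print(f"FDR (BH) survivors:   {fdr_hits}")
```

On data like this, the Bonferroni threshold keeps only a handful of the strongest voxels, while the Benjamini-Hochberg procedure retains most of the truly active ones at the cost of a controlled fraction of false positives, which matches Laurie's "it takes away all of your findings" complaint.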
So I'm curious how your work as a neuroscientist prepared you to become a data scientist.

Well, I like what you said in the intro, from a bio of mine, that it's that zooming out from people-level questions into very nerdy, technical, meticulous models of what's going on underneath the surface, and then on the other side taking those findings and turning them into stories or recommendations or predictions. So all of that was a very familiar framework when I left neuroscience and decided to try out data science. As I started to understand the connections between the techniques I'd been using and either their new names or their brother and sister models in the world of machine learning, that learning curve, which seemed pretty steep, got a lot shallower, and it was not so bad to make that transition.

Very cool. So tell me a little bit more about what your career has been like so far working in data science.

So I started out at a data science consulting firm in Chicago called DataScope, which is just a fantastically fun company, a whole bunch of nerds, most of whom left academia, who just work on really fantastic projects with cool companies. They taught me the ropes. They taught me a lot of programming. There was a lot of on-the-job training, and I'm really grateful for that experience. Through the course of that, we started working with Metis, which does data science boot camps. So we actually created their data science boot camp. Irmak Sirer and I did the first cohort out in New York, and after that I went back to work with DataScope a bit more, and just recently I've gone on with Metis full-time to be a senior data scientist with them. Originally their program was just in New York. We're finishing up the fifth cohort of new data scientists out of New York, and now we're expanding to San Francisco. So I'll be in San Francisco in January to teach our first West Coast cohort, and that is really, really exciting.

Oh, excellent. Tell me a bit more about that. I know I have a lot of San Francisco listeners. If any are interested, how could they get involved?

I would invite them to apply. We have a website, thisismetis.com, and Metis is spelled M-E-T-I-S. Metis is, from Greek mythology, the mother of Athena. So that's pretty exciting.

So the goddess of wisdom is Athena?

Yes, wisdom and war.

Oh, and war. Perfect.

Yeah, and then Metis is kind of, sort of, her mother, because Athena sprang out of Zeus's head, but she got in there somewhere, and then, whatever. So that's where the name comes from. What our program does is, it's a 12-week intensive boot camp. We take people who have some sort of foundation or a head start both in programming and in, I'd say, stats, math, machine learning, some sort of quantitative thing, applied or not. It's really hard to define that. You know, if you've worked with numerical or quantitative stuff before, you might be ready to go. We look for people with good communication skills and people who are really hungry to work hard and get a start in this field. There's an online application form. We follow that up with a Skype interview after they do a take-home challenge, and that's basically the story.
We have tuition scholarships, partial scholarships, for women, for minorities, and for veterans and service members. For San Francisco, our early application deadline is November 23rd, and then we'll have a final application deadline after that. And for people in New York, there's also a winter boot camp coming up there. That application deadline was yesterday, but I think the class has a couple of spots left still. The final application deadline will be in December.

And I know we're releasing this just as those dates are kind of looming on us. If someone is listening to this in the future, where's the best place to go to find out the next session that might be in their city?

Thisismetis.com will be the best source of information, and we just keep going; as soon as one boot camp ends, another one starts. So we do a boot camp each quarter in each city. So if they're listening to this later, there's probably one going on, and maybe we'll have even expanded to more cities by then.

Cool. And before we sign off, just tell me a bit more about what goes on in those 12 intensive weeks.

Yeah, so our boot camp is a great program. I'm so excited about it. We use a project-based curriculum. We do five projects and sort of build a portfolio to show to employers or whoever wants to look. And we focus on the full process of data science. So it's not just a machine learning course. It's not just a programming course. It's about starting from a question, figuring out how to attack that question, and doing it in the appropriate way. And we have plenty of machine learning and programming rigor in there, but everything is situated within these projects. And we think that that really helps people when they get on the job to be ready with the right kind of skills. Because your employer isn't going to hand you some data and say, please do some supervised learning on this and select the algorithm that's most appropriate. They're going to say, you know, we need to think about ways we can make more money next quarter, or we need to evaluate whether we're segmenting our users in a wise way. So thinking from the sort of application into the data science tools and procedure is a part of our curriculum as well.

Very cool.

Yeah, we do that for 12 weeks. And then we have just outstanding career placement support. We just don't quit. We are a licensed and accredited program, which means that if we don't get people jobs, they're going to shut us down. So we really work so hard to get people into jobs that are good fits for them.

Excellent. So as we said, that's thisismetis.com if you're interested in that. Where can people find out more about you?

About me? I'm on Twitter at Laurie Skelly. That's L-A-U-R-I-E S-K-E-L-L-Y. Or it's probably on the website where they found this podcast.

Yes, it will be indeed.

That's the best way to reach me. Yeah.

Well, this was a lot of fun. Thank you so much for coming on the show, Laurie.

Thank you so much for having me.

All right. Take care.

You too.

For more on this episode, visit dataskeptic.com. If you enjoyed the show, please give us a review on iTunes or Stitcher.