WEBVTT
1
00:00:03.000 --> 00:00:04.020
Everybody here?
2
00:00:05.159 --> 00:00:13.559
Fausto Cattaneo: All right, welcome back everybody. Our next speaker is Daniel Tartakovsky.
3
00:00:14.519 --> 00:00:25.140
Fausto Cattaneo: So he got his MS from Kazan University in Russia, and then he got his PhD from the University of Arizona, not in Russia, in the United States.
4
00:00:25.980 --> 00:00:41.340
Fausto Cattaneo: He has been at Los Alamos, has worked at the University of California San Diego, and is now a professor at Stanford in the Department of Energy Resources Engineering. So he's going to tell us about Bayesian learning on dynamical manifolds.
5
00:00:42.780 --> 00:00:43.110
Daniel Tartakovsky: Yes.
6
00:00:44.160 --> 00:00:47.190
Daniel Tartakovsky: Thank you very much for the invitation.
7
00:00:48.240 --> 00:00:49.290
Daniel Tartakovsky: I really wish we.
8
00:00:49.320 --> 00:00:51.090
Fausto Cattaneo: could have had it in person.
9
00:00:51.750 --> 00:01:04.200
Daniel Tartakovsky: But that being what it is. So the title of my presentation is indeed Bayesian learning on dynamic manifolds, but I think I will spend most of the time
10
00:01:06.090 --> 00:01:13.020
Daniel Tartakovsky: discussing what I mean by dynamic manifolds and how to construct them, and then Bayesian learning
11
00:01:15.150 --> 00:01:24.180
Daniel Tartakovsky: is nicely laid out in this morning's talks, but I will also touch on that.
12
00:01:25.410 --> 00:01:27.420
Daniel Tartakovsky: Before I start, I want to
13
00:01:28.470 --> 00:01:34.740
Daniel Tartakovsky: give all the credit to Francesca Boso and [inaudible], both in my group,
14
00:01:36.030 --> 00:01:41.850
Daniel Tartakovsky: who contributed substantively to this presentation,
15
00:01:42.870 --> 00:01:45.810
Daniel Tartakovsky: and to the material in this presentation.
16
00:01:46.890 --> 00:01:52.080
Daniel Tartakovsky: So, by way of motivation, what is it that we want to do here?
17
00:01:53.310 --> 00:01:59.820
Daniel Tartakovsky: Well, again, similar to the previous talk, what we have is a model.
18
00:02:01.020 --> 00:02:12.120
Daniel Tartakovsky: And the model describes the dynamics of the state variable u, which could be a scalar or a vector, and it involves a number of parameters.
19
00:02:13.470 --> 00:02:21.510
Daniel Tartakovsky: And it has initial conditions and possibly boundary conditions. So the model parameters, and
20
00:02:21.900 --> 00:02:30.330
Daniel Tartakovsky: in fact there could be model error too, and the initial and boundary functions, are uncertain, that is, treated as random, and so then the solution to this
21
00:02:31.230 --> 00:02:55.230
Daniel Tartakovsky: problem is the probability density function f, and it's either a joint PDF at multiple points and multiple times, or it could be a single-point PDF, where what we want to know is the probability density function of our state variable at any point x and any point in time
22
00:02:57.540 --> 00:03:06.240
Daniel Tartakovsky: of interest. And then, in addition to the model and model predictions, we also have observations at
23
00:03:08.340 --> 00:03:25.140
Daniel Tartakovsky: either discrete times or continuously, and at various locations, and these observations are noisy, so there is also a probabilistic treatment of the observations. And so that's the
24
00:03:27.510 --> 00:03:52.590
Daniel Tartakovsky: problem of data assimilation: how to combine model predictions and observations, how to improve model predictions by using observations. And there are, sort of, two ways of doing that. The most accurate, or the method that places
25
00:03:54.330 --> 00:04:10.860
Daniel Tartakovsky: fewer restrictions on the problem, on the model, is the Bayesian update, which uses the model prediction, in terms of the probability density function, as a prior, and then
26
00:04:12.060 --> 00:04:20.190
Daniel Tartakovsky: uses Bayes' rule to update this model prediction with the data likelihood function.
27
00:04:21.210 --> 00:04:30.630
Daniel Tartakovsky: And for most problems of practical interest, certainly for weather forecasting, that's too expensive to be
28
00:04:32.070 --> 00:04:47.580
Daniel Tartakovsky: of practical use, and so there are various approximations to this method, and one of them is the Kalman filter, which assumes that the PDF of u is
29
00:04:48.900 --> 00:04:50.220
Daniel Tartakovsky: Gaussian, and then
30
00:04:51.330 --> 00:05:04.560
Daniel Tartakovsky: it is characterized completely by the mean and the variance. And then there are various flavors of Kalman filters, ensemble, extended, and so forth.
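(A minimal sketch of the Gaussian special case just described: a scalar Bayesian update with a Gaussian prior from the model and a Gaussian data likelihood, which is exactly what the Kalman update computes. The numbers are made up for the illustration.)

```python
# Scalar Bayesian update with Gaussian prior (model forecast) and Gaussian
# likelihood (noisy observation): the conjugate case the Kalman filter exploits.
mu_prior, var_prior = 2.0, 1.0   # model-predicted mean and variance (assumed)
d, var_obs = 3.0, 0.5            # observation and its noise variance (assumed)

K = var_prior / (var_prior + var_obs)     # Kalman gain
mu_post = mu_prior + K * (d - mu_prior)   # posterior (analysis) mean
var_post = (1.0 - K) * var_prior          # posterior (analysis) variance

print(mu_post, var_post)  # 2.666..., 0.333...
```

Note that this remains exact only while the prior stays Gaussian, which is precisely the assumption the talk goes on to relax.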
31
00:05:05.610 --> 00:05:11.610
Daniel Tartakovsky: But they are known to fail under certain conditions, especially
32
00:05:13.050 --> 00:05:15.570
Daniel Tartakovsky: for non-Gaussian PDFs,
33
00:05:16.680 --> 00:05:20.760
Daniel Tartakovsky: so if the PDF has long tails or, especially, if it has
34
00:05:22.500 --> 00:05:26.250
Daniel Tartakovsky: multimodal distributions. And so then,
35
00:05:28.410 --> 00:05:36.000
Daniel Tartakovsky: my talk will consist of three parts. One is: how do we efficiently compute
36
00:05:37.890 --> 00:05:44.850
Daniel Tartakovsky: this PDF here of u for a given model. And then, of course,
37
00:05:46.380 --> 00:05:50.280
Daniel Tartakovsky: you know, in years past there have been
38
00:05:51.330 --> 00:06:04.590
Daniel Tartakovsky: whole workshops like this one devoted to forward propagation of uncertainty, so I will try to compress that into 15 or 20 minutes.
39
00:06:05.820 --> 00:06:13.650
Daniel Tartakovsky: So then you will see that the derivation of equations for the probability density function
40
00:06:14.790 --> 00:06:17.340
Daniel Tartakovsky: requires closure approximations.
41
00:06:18.390 --> 00:06:39.510
Daniel Tartakovsky: It's a classical coarse-graining problem, and so there is a large number of alternative closures that people came up with, but they are very labor intensive, so for complex problems such as climate and weather forecasting they are
42
00:06:41.580 --> 00:06:52.590
Daniel Tartakovsky: not really feasible to derive. And so then, in part two, we will look at the question of whether one could use
43
00:06:53.100 --> 00:07:12.720
Daniel Tartakovsky: machine learning tools to act autonomously, meaning without human input, whether they could derive closures for these PDF equations. And then, why are we doing this, what's the relationship to data assimilation? Well, as Aaron was saying, there is this
44
00:07:14.340 --> 00:07:17.460
Daniel Tartakovsky: very nice framework that allows you to.
45
00:07:19.740 --> 00:07:32.190
Daniel Tartakovsky: variational formulation of data assimilation that allows us to do that efficiently, but what people typically do is they would say: let's suppose that
46
00:07:33.660 --> 00:07:39.990
Daniel Tartakovsky: the PDF of u is Gaussian. And what we are saying is that we don't need to do that, because
47
00:07:40.440 --> 00:07:51.330
Daniel Tartakovsky: we have this manifold of non-Gaussian PDFs that is described by these PDF equations, and so that's why we need part one and part two, to do that.
48
00:07:51.750 --> 00:08:00.420
Daniel Tartakovsky: And so I will really focus on part one and part two; part three will be in the conclusion, a couple of slides, and again that
49
00:08:01.560 --> 00:08:02.910
Daniel Tartakovsky: is with the understanding that
50
00:08:05.610 --> 00:08:12.210
Daniel Tartakovsky: everything [inaudible] said this morning in his presentation will hold, yeah.
51
00:08:14.250 --> 00:08:30.690
Daniel Tartakovsky: Okay, so part one: how do we compute this, how do we construct these dynamic manifolds? To be concrete, we will look at a scalar hyperbolic conservation law for the state variable u.
52
00:08:32.220 --> 00:08:38.610
Daniel Tartakovsky: And it's a nonlinear conservation law, so the flux is nonlinear, the
53
00:08:39.660 --> 00:08:58.470
Daniel Tartakovsky: advection term is nonlinear; those are the main features here, I guess. But what we focus on is the fact that the flux function can contain a number of
54
00:09:00.510 --> 00:09:01.440
Daniel Tartakovsky: parameters.
55
00:09:02.850 --> 00:09:19.950
Daniel Tartakovsky: And the number of these parameters can vary widely: if these parameters are, say, space-varying random fields, you need to represent them with, for example, a Karhunen-Loève expansion, and if correlation lengths are small, this N can be arbitrarily large.
56
00:09:21.000 --> 00:09:30.000
Daniel Tartakovsky: And in fact, in the limiting case, if you treat the parameters as white noise,
57
00:09:30.750 --> 00:09:42.660
Daniel Tartakovsky: uncorrelated in space and/or time, then N will be infinitely large, because the correlation length is either
58
00:09:43.410 --> 00:09:57.930
Daniel Tartakovsky: zero or small. So anyway, this general framework that I'm presenting here is valid for PDEs with random coefficients, and so also valid for bona fide stochastic PDEs.
59
00:09:59.580 --> 00:10:00.240
Daniel Tartakovsky: Okay, so.
60
00:10:01.860 --> 00:10:08.100
Daniel Tartakovsky: The basic idea here, what we want to do: so we start with this equation,
61
00:10:09.990 --> 00:10:12.360
Daniel Tartakovsky: the physical problem that we are
62
00:10:13.740 --> 00:10:26.790
Daniel Tartakovsky: interested in, and what we want to do, again, is derive a deterministic equation for the probability density function of u. Of course there are other ways of doing that, right: you could do Monte Carlo,
63
00:10:28.020 --> 00:10:39.420
Daniel Tartakovsky: you could do polynomial chaos and build polynomial surrogates; one could do all of that. I don't want to spend time on the pros
64
00:10:41.280 --> 00:10:45.030
Daniel Tartakovsky: and cons of doing that. What
65
00:10:46.200 --> 00:10:53.040
Daniel Tartakovsky: I want to stress is that we want to derive a deterministic equation for the probability density function of u.
66
00:10:55.230 --> 00:10:59.940
Daniel Tartakovsky: And so to do that, we start by defining this function Π,
67
00:11:01.020 --> 00:11:09.660
Daniel Tartakovsky: which is just the delta function of the difference between our uncertain, random model prediction u, which varies in space and time,
68
00:11:10.230 --> 00:11:20.280
Daniel Tartakovsky: and the probability-space coordinate, capital U; it's just a deterministic value, an outcome of an experiment, a value that
69
00:11:21.000 --> 00:11:32.700
Daniel Tartakovsky: our state variable, small u, can take. And so why are we interested in this function Π? Well, because if you take the ensemble average of Π in the space of
70
00:11:33.900 --> 00:11:45.900
Daniel Tartakovsky: u, which is denoted by E or by these brackets around Π, then by the very definition it is an integral of Π times the probability density function.
71
00:11:47.250 --> 00:11:59.190
Daniel Tartakovsky: Right, but if you recall that Π is the delta function I wrote here, then you would see that the average of capital Π is the PDF of u.
72
00:12:00.270 --> 00:12:08.310
Daniel Tartakovsky: Right, and so then the derivation of the deterministic equation for the probability density function for this
73
00:12:08.940 --> 00:12:20.220
Daniel Tartakovsky: stochastic equation here would consist of two steps. First you derive a stochastic equation for capital Π; that's step one.
74
00:12:20.790 --> 00:12:32.430
Daniel Tartakovsky: And in step two, you average that stochastic equation over the space of random coefficients to obtain a deterministic equation for the probability density function.
75
00:12:33.570 --> 00:12:39.870
Daniel Tartakovsky: Okay. Now, if you don't like dealing with probability density functions, because, you know, they are
76
00:12:41.430 --> 00:12:47.190
Daniel Tartakovsky: sharp, and there are all kinds of issues with delta functions if you have,
77
00:12:48.390 --> 00:12:53.100
Daniel Tartakovsky: God forbid, anything known with certainty and thus treated as deterministic,
78
00:12:53.730 --> 00:12:59.700
Daniel Tartakovsky: then you have to deal with delta functions. So alternatively you can say: I want to look at the CDF equation, I want to see
79
00:13:00.210 --> 00:13:16.080
Daniel Tartakovsky: how the cumulative distribution function of my state variable evolves in space and time. Well, we can do that too. Now we would introduce a function Π that, instead of a delta function, is a Heaviside function, right; so again, that's the Heaviside function
80
00:13:17.250 --> 00:13:25.170
Daniel Tartakovsky: whose argument is capital U minus small u, where capital U is the value that the random function, our uncertain state small u, can take.
81
00:13:25.710 --> 00:13:40.440
Daniel Tartakovsky: Why do we do that? Well, because the ensemble average of this so-defined function Π is the cumulative distribution function. And again, this is an exact result; it just follows from the
82
00:13:43.080 --> 00:13:46.110
Daniel Tartakovsky: basic properties of this
83
00:13:47.190 --> 00:13:48.570
Daniel Tartakovsky: function and some integration.
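(As a quick numerical sanity check of this exact result, one can sketch the ensemble average of the Heaviside-based Π converging to the CDF; the Gaussian choice for u here is just an assumption for the illustration.)

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
u = rng.normal(loc=1.0, scale=2.0, size=200_000)  # Monte Carlo samples of the random state u

U = np.linspace(-5.0, 7.0, 9)                     # event-space coordinates (capital U)

# Ensemble average of Pi = H(U - u): the fraction of samples with u <= U
Pi_avg = (u[None, :] <= U[:, None]).mean(axis=1)

# Exact CDF of N(1, 2^2) for comparison
F_exact = np.array([0.5 * (1.0 + erf((x - 1.0) / (2.0 * sqrt(2.0)))) for x in U])

print(np.max(np.abs(Pi_avg - F_exact)))  # sampling error, on the order of 1e-3
```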
84
00:13:49.770 --> 00:13:54.270
Daniel Tartakovsky: Okay. So then, again, if you want to derive a CDF equation for your
85
00:13:56.250 --> 00:14:08.640
Daniel Tartakovsky: stochastic PDE, you again follow these two steps: you first derive an equation for this random Π, and then you average it over the space of
86
00:14:09.990 --> 00:14:15.600
Daniel Tartakovsky: uncertain or random coefficients. Okay, so as an example,
87
00:14:16.650 --> 00:14:19.920
Daniel Tartakovsky: just to fix ideas, consider
88
00:14:21.390 --> 00:14:26.820
Daniel Tartakovsky: an advection-reaction equation. So this is a reactive scalar that is
89
00:14:27.930 --> 00:14:38.160
Daniel Tartakovsky: being advected in the random velocity field v while simultaneously undergoing chemical reactions, in this case degradation,
90
00:14:39.810 --> 00:14:46.800
Daniel Tartakovsky: according to some nonlinear law; here we are looking at this power law.
91
00:14:48.690 --> 00:14:54.780
Daniel Tartakovsky: Right, and so suppose that what's uncertain here is this random velocity field,
92
00:14:55.920 --> 00:15:01.590
Daniel Tartakovsky: and also potentially the initial state, or the boundary and initial conditions for u.
93
00:15:02.940 --> 00:15:05.340
Daniel Tartakovsky: So then, rather than
94
00:15:06.510 --> 00:15:13.770
Daniel Tartakovsky: doing the simulations for this, we want to derive a deterministic equation for the cumulative distribution function
95
00:15:14.790 --> 00:15:37.530
Daniel Tartakovsky: of this small u. And so, to do that, the first step is to derive an equation for capital Π. All we have to do, and this is slightly formal, but it can be done rigorously; what we want to do is keep track of how this
96
00:15:38.820 --> 00:15:43.170
Daniel Tartakovsky: Π evolves in space and time, so the derivatives of Π in
97
00:15:44.190 --> 00:15:52.890
Daniel Tartakovsky: x and time. So if you just say, I want to compute the time derivative of Π, you use the chain rule.
98
00:15:55.080 --> 00:16:03.480
Daniel Tartakovsky: There is the functional derivative of Π with respect to small u, and that step is a little challenging, computing a
99
00:16:04.050 --> 00:16:14.790
Daniel Tartakovsky: functional derivative. But if you note here that Π depends on the function u and on the variable coordinate, the capital U,
100
00:16:15.300 --> 00:16:22.980
Daniel Tartakovsky: only as the difference between the two, then this derivative here equals minus the derivative with respect to
101
00:16:23.400 --> 00:16:39.990
Daniel Tartakovsky: capital U. And that's great, because capital U, again, is an outcome of an experiment; it's not the random function, it's just a deterministic coordinate. And so what you would see is that the time derivative of Π is essentially
102
00:16:41.190 --> 00:16:52.320
Daniel Tartakovsky: minus the time derivative of u times this partial derivative of Π with respect to capital U. And the same goes for the divergence
103
00:16:53.340 --> 00:16:54.630
Daniel Tartakovsky: of Π.
104
00:16:55.830 --> 00:17:09.540
Daniel Tartakovsky: You would see that it's the divergence, v dot grad u, multiplied by the derivative of Π with respect to U. And the same thing: using properties of the delta function, you will see that
105
00:17:10.980 --> 00:17:17.280
Daniel Tartakovsky: our small u, which is uncertain, can be replaced; you could replace it with
106
00:17:18.330 --> 00:17:29.070
Daniel Tartakovsky: its value at capital U, the coordinate, in just that way. Again, everything here is multiplied by the derivative of Π with respect to U, so that tells us that
107
00:17:29.460 --> 00:17:40.620
Daniel Tartakovsky: if we take this stochastic equation here and multiply it by the partial derivative of Π with respect to U, then the first term here is replaced with
108
00:17:41.160 --> 00:17:57.810
Daniel Tartakovsky: the partial derivative of Π with respect to time, the second term is replaced through this term, and so small u, the uncertain state variable, is eliminated. And so what we now have is
109
00:17:58.860 --> 00:18:07.650
Daniel Tartakovsky: this equation for capital Π. So what's interesting about this equation: we started with a nonlinear
110
00:18:10.200 --> 00:18:11.640
Daniel Tartakovsky: PDE, the physical equation,
111
00:18:12.840 --> 00:18:27.330
Daniel Tartakovsky: in the physical dimensions, say in three dimensions, and what we've done here is map it onto a higher-dimensional space, a four-dimensional space here. But this equation here remains
112
00:18:29.160 --> 00:18:37.320
Daniel Tartakovsky: random, its coefficients are random, but it's now linear, right. And this is a classical problem of
113
00:18:39.060 --> 00:18:54.570
Daniel Tartakovsky: advection-reaction of a passive scalar in a random velocity field, which has been studied to death in turbulence, for example. And so this completes the first step: we got an equation for capital Π.
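(Reconstructing the slide's equations from the narration, the two steps for this advection-reaction example can be sketched as follows; the degradation law and the symbols are my assumption.)

```latex
% Step 1: for \Pi = \mathcal{H}\bigl(U - u(\mathbf{x},t)\bigr), the chain rule gives
\frac{\partial \Pi}{\partial t} = -\frac{\partial u}{\partial t}\,\frac{\partial \Pi}{\partial U},
\qquad
\mathbf{v}\cdot\nabla_{\mathbf{x}}\Pi = -\,(\mathbf{v}\cdot\nabla_{\mathbf{x}} u)\,\frac{\partial \Pi}{\partial U}.
% Substituting the physical law \partial_t u + \mathbf{v}\cdot\nabla_{\mathbf{x}} u = -k\,u^{a}
% and replacing u by the coordinate U (allowed against the delta function) yields
% the linear stochastic equation for \Pi:
\frac{\partial \Pi}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}}\Pi
  - k\,U^{a}\,\frac{\partial \Pi}{\partial U} = 0.
% Step 2: ensemble averaging over \mathbf{v} gives an equation for the CDF
% F(U;\mathbf{x},t) = \langle\Pi\rangle, plus unclosed mixed terms
% \langle \mathbf{v}'\cdot\nabla_{\mathbf{x}}\Pi' \rangle that require closure.
```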
114
00:18:56.070 --> 00:18:57.480
Daniel Tartakovsky: And so the second step.
115
00:18:58.500 --> 00:19:07.560
Daniel Tartakovsky: is to average this equation over the space of velocities. And yeah, there are a lot of human
116
00:19:08.910 --> 00:19:19.830
Daniel Tartakovsky: hours, human years, that went into the analysis of this problem, how to come up with an averaged equation, how to average this stochastic equation.
117
00:19:20.430 --> 00:19:33.060
Daniel Tartakovsky: And what's different here, what's interesting here, is that if you average this equation, as I said, you would get not just an equation for the mean behavior of the scalar;
118
00:19:33.600 --> 00:19:43.410
Daniel Tartakovsky: the mean of Π is the cumulative distribution function, right, so what we would get by averaging the equation is an equation for the full
119
00:19:44.760 --> 00:20:03.000
Daniel Tartakovsky: probability density function, or in this case the cumulative distribution function, of u. But if the velocity field is random, it is known that you cannot average exactly; you would get some terms here that
120
00:20:04.020 --> 00:20:07.440
Daniel Tartakovsky: are exactly what you want, they
121
00:20:08.910 --> 00:20:14.700
Daniel Tartakovsky: involve the cumulative distribution function, but then you would have these mixed terms
122
00:20:16.560 --> 00:20:17.400
Daniel Tartakovsky: Of the.
123
00:20:19.170 --> 00:20:33.810
Daniel Tartakovsky: fluctuations about the mean. And that's sort of the drawback of this method: very often, you cannot get an exact equation for the CDF or PDF
124
00:20:35.580 --> 00:20:39.480
Daniel Tartakovsky: of your state variable. And of course this is yet another manifestation of
125
00:20:40.680 --> 00:20:44.280
Daniel Tartakovsky: the no-free-lunch theorem, right; you have to pay
126
00:20:45.300 --> 00:20:46.050
Daniel Tartakovsky: somewhere.
127
00:20:47.400 --> 00:20:51.570
Daniel Tartakovsky: Now, because this problem here, of
128
00:20:53.370 --> 00:21:04.050
Daniel Tartakovsky: passive scalar dispersion in a random velocity field, has been studied extensively, any number of closures have been
129
00:21:04.770 --> 00:21:19.770
Daniel Tartakovsky: developed for it. One of the most popular ones is so popular that in different fields it has different names, but it's essentially all the same: it's called eddy diffusivity in some fields;
130
00:21:21.300 --> 00:21:25.770
Daniel Tartakovsky: in other fields it's called the macrodispersion approximation, or the direct
131
00:21:27.750 --> 00:21:41.790
Daniel Tartakovsky: interaction approximation of Kraichnan. So the idea, then, is that this mixed term of fluctuations gives rise to a diffusive term
132
00:21:43.110 --> 00:21:46.200
Daniel Tartakovsky: here, and this, um,
133
00:21:47.430 --> 00:21:58.110
Daniel Tartakovsky: advective term and diffusive term in this CDF equation are essentially given in terms of the statistics of the velocity field. So if you think
134
00:21:59.310 --> 00:22:08.580
Daniel Tartakovsky: in terms of turbulence, right. But these are closed-form expressions; you can compute them. In fact,
135
00:22:11.190 --> 00:22:13.200
Daniel Tartakovsky: One of the advantages of this.
136
00:22:14.580 --> 00:22:31.110
Daniel Tartakovsky: of this method of distributions is that if your random velocity field happens to be white noise, uncorrelated, so it is given by an infinite number of degrees of freedom, the stochastic dimension is infinite, then this
137
00:22:32.430 --> 00:22:36.960
Daniel Tartakovsky: eddy-diffusivity tensor becomes essentially a known function, and
138
00:22:38.010 --> 00:22:41.370
Daniel Tartakovsky: one can show that
139
00:22:43.020 --> 00:23:01.500
Daniel Tartakovsky: this expression here, this equation, becomes exact, and its accuracy increases as the correlation length of your input parameter decreases. So that makes this method complementary to, say, polynomial chaos expansions.
140
00:23:03.660 --> 00:23:04.260
Daniel Tartakovsky: Right, so
141
00:23:05.820 --> 00:23:13.410
Daniel Tartakovsky: here's an example of how the method works; it remains accurate, you know, as time increases.
142
00:23:14.430 --> 00:23:19.980
Daniel Tartakovsky: And these are the PDFs; they have long tails, and you can see the method doesn't
143
00:23:21.060 --> 00:23:27.810
Daniel Tartakovsky: suffer from the long-time integration problem, so the error there is controlled.
144
00:23:28.830 --> 00:23:29.310
Daniel Tartakovsky: and
145
00:23:30.360 --> 00:23:36.390
Daniel Tartakovsky: So that's, sort of, how you would construct
146
00:23:38.130 --> 00:23:47.790
Daniel Tartakovsky: a PDF equation for that particular problem. And then over the years we looked at a number of different problems,
147
00:23:48.420 --> 00:24:07.620
Daniel Tartakovsky: and not only our group, of course. The upshot is that you could apply this to various kinds of problems. They are best suited for hyperbolic equations, but then of course hyperbolic equations will have shocks; to deal with them,
148
00:24:08.640 --> 00:24:19.020
Daniel Tartakovsky: there is a kinetic formulation of hyperbolic equations with shocks, so that gave us the idea of using kinetic defects. Then you could use
149
00:24:20.520 --> 00:24:30.360
Daniel Tartakovsky: it for Langevin systems with colored noise; you could look at nonlocal closures for parabolic and elliptic PDEs, that is another problem,
150
00:24:31.530 --> 00:24:38.400
Daniel Tartakovsky: rather quite technical, with yet another closure. So the bottom line here is that
151
00:24:39.720 --> 00:24:44.430
Daniel Tartakovsky: Each physical problem would require a different closure.
152
00:24:45.570 --> 00:24:51.540
Daniel Tartakovsky: and constructing these closures, as I mentioned earlier, is labor intensive and
153
00:24:53.310 --> 00:25:01.830
Daniel Tartakovsky: requires some human ingenuity and expertise. So then the question is: can we, in the spirit,
154
00:25:03.840 --> 00:25:13.140
Daniel Tartakovsky: the unfortunate spirit, of machine learning, dispense with human knowledge and replace it with machine learning?
155
00:25:15.030 --> 00:25:30.270
Daniel Tartakovsky: So the short answer in this part two of my talk is that yes, we can, under certain conditions, and our approach to doing that is as follows.
156
00:25:31.500 --> 00:25:38.580
Daniel Tartakovsky: I don't believe you would ever have enough observational data to do that, and in fact we don't really.
157
00:25:39.600 --> 00:25:42.540
Daniel Tartakovsky: need to. I think, as was
158
00:25:45.510 --> 00:26:00.480
Daniel Tartakovsky: mentioned this morning, for the main problems, you know, when you talk about physics-based machine learning, we are not in a data-rich environment; we are actually in a data-poor environment.
159
00:26:01.080 --> 00:26:11.460
Daniel Tartakovsky: And so what can you do when you have not big data but small data? Well, you need to rely on models to generate data.
160
00:26:11.970 --> 00:26:30.120
Daniel Tartakovsky: And in this context, what we have is a physical model with uncertain inputs, so we would use Monte Carlo: it will generate different realizations of our model, N realizations here,
161
00:26:31.530 --> 00:26:34.770
Daniel Tartakovsky: for various realizations of the input parameters.
162
00:26:36.390 --> 00:26:46.740
Daniel Tartakovsky: And then the question is: how do we use these N realizations of the model to discover, or to learn, this PDF equation? So,
163
00:26:47.430 --> 00:26:57.600
Daniel Tartakovsky: in addition to, or instead of, doing what I just described in part one, can we construct closures not through physics or
164
00:26:58.470 --> 00:27:17.760
Daniel Tartakovsky: physical intuition or mathematical approximations, but purely through data analytics, through machine learning? So, um, there is a lot of work that has recently been done on equation discovery,
165
00:27:18.870 --> 00:27:19.320
Daniel Tartakovsky: and
166
00:27:20.850 --> 00:27:37.320
Daniel Tartakovsky: my claim is that the setting where you want to learn PDF equations is the natural setting for this field, for equation discovery. And why is that? Well, because if I want to discover this
167
00:27:38.910 --> 00:27:47.730
Daniel Tartakovsky: differential operator L here, I know a lot about PDF equations. So I know that my
168
00:27:48.780 --> 00:27:57.960
Daniel Tartakovsky: PDF equation here has to conserve probability, right, so the PDF has to integrate to one. I also know that, in general,
169
00:27:59.640 --> 00:28:10.710
Daniel Tartakovsky: this differential operator is nonlocal; think of the Boltzmann equation, right, where you have a collision operator.
170
00:28:11.880 --> 00:28:20.850
Daniel Tartakovsky: I also know that if I don't want to deal with integro-differential equations, I can restrict myself to the class of local operators.
171
00:28:21.900 --> 00:28:38.550
Daniel Tartakovsky: And if I also think of Fokker-Planck equations, then I would know that a PDF or CDF equation of this type has to obey the Pawula theorem. What that says is that
172
00:28:40.620 --> 00:28:42.420
Daniel Tartakovsky: A PDF equation.
173
00:28:43.650 --> 00:28:44.760
Daniel Tartakovsky: cannot have.
174
00:28:46.020 --> 00:29:04.290
Daniel Tartakovsky: derivatives of order higher than two. If it has derivatives of order up to two, you get the Fokker-Planck equation, and if you want to have derivatives of order higher than two, then you need to have infinitely many.
175
00:29:10.500 --> 00:29:15.240
Daniel Tartakovsky: And that is to preserve the properties of the PDF.
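(For reference, a hedged sketch of the constraint being invoked, the Kramers-Moyal expansion and the Pawula theorem; the generic coefficients D^(n) are my notation, not the slide's.)

```latex
% Kramers-Moyal expansion of a local PDF equation:
\frac{\partial f}{\partial t}
  = \sum_{n \ge 1} \left(-\frac{\partial}{\partial U}\right)^{\!n}
    \left[ D^{(n)}(U,t)\, f \right].
% Pawula's theorem: for f to remain a nonnegative PDF, the expansion must
% either stop at n \le 2 (the Fokker-Planck form below) or contain
% infinitely many terms:
\frac{\partial f}{\partial t}
  = -\frac{\partial}{\partial U}\left[ D^{(1)} f \right]
    + \frac{\partial^{2}}{\partial U^{2}}\left[ D^{(2)} f \right].
```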
176
00:29:16.500 --> 00:29:20.610
Daniel Tartakovsky: So also, what the previous discussion of,
177
00:29:22.170 --> 00:29:28.560
Daniel Tartakovsky: or derivation of, these closures points to is that it's
178
00:29:30.360 --> 00:29:52.830
Daniel Tartakovsky: tailor-made for our postmodern era, where there are versions of truth rather than a single truth. So I'm not claiming that the PDF equation I will discover is unique; in fact, we know that it's not unique, because you can get various closures that look
179
00:29:54.030 --> 00:30:06.930
Daniel Tartakovsky: better in certain regimes. So it relieves me of the burden of proving that the equation I'm going to discover is the correct one; there is no correct one.
180
00:30:07.830 --> 00:30:18.450
Daniel Tartakovsky: All I claim is that it's consistent with the Monte Carlo realizations of the system, of the problem, that we're solving.
181
00:30:19.800 --> 00:30:23.580
Daniel Tartakovsky: Okay. So how do we do that? Well, we use a
182
00:30:24.690 --> 00:30:35.820
Daniel Tartakovsky: sparse regression technique for equation discovery, and there are two ways of doing that that we explored. One is
183
00:30:37.860 --> 00:30:53.340
Daniel Tartakovsky: direct equation learning, and that one says that, you know, driven by our theory, we will say that our PDF equation, if we restrict ourselves to the class of local operators,
184
00:30:54.450 --> 00:30:54.900
Daniel Tartakovsky: Then.
185
00:30:56.940 --> 00:31:11.100
Daniel Tartakovsky: that's a mathematical representation of the Pawula theorem; it just tells us that what you can have in your PDF equation are derivatives of order up to
186
00:31:12.150 --> 00:31:21.660
Daniel Tartakovsky: two. Which is great: that means, and it's also linear, that this dictionary of all possible terms in a PDF equation,
187
00:31:22.140 --> 00:31:36.030
Daniel Tartakovsky: shown here, is finite and relatively small. And the beta here are the coefficients, the unknowns, in the equation that we want to discover.
188
00:31:37.140 --> 00:31:44.370
Daniel Tartakovsky: And so that's it. So the question is, then, how do we learn these coefficients beta? So that's
189
00:31:45.540 --> 00:31:47.580
Daniel Tartakovsky: a fancy name for,
190
00:31:48.630 --> 00:32:09.270
Daniel Tartakovsky: or, I think, a simple way to understand, equation discovery. So, back to the classical problem of data assimilation or inverse modeling, essentially: how do we find the coefficients in our equation? So what we have here, from
191
00:32:12.240 --> 00:32:16.860
Daniel Tartakovsky: Monte Carlo, is N realizations; we use a
192
00:32:18.060 --> 00:32:19.740
Daniel Tartakovsky: a KDE-based,
193
00:32:21.450 --> 00:32:26.910
Daniel Tartakovsky: a kernel density estimator, from these realisations, to construct an
194
00:32:28.410 --> 00:32:42.480
Daniel Tartakovsky: observational, empirical PDF at different points in space and time. Here again, you have physical space,
195
00:32:43.920 --> 00:33:04.230
Daniel Tartakovsky: you have event space, denoted by the capital U, and then different times. And then what you want to do is minimize the residual, the difference between what your PDF equation here would predict and the observations.
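The pipeline just described, Monte Carlo realisations smoothed by a kernel density estimator into an observational PDF, can be sketched as follows; the logistic-type dynamics, sample size, and grid here are illustrative assumptions, not the speaker's actual setup.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical stand-in for the Monte Carlo step: N realisations of the state
# u at one space-time point, generated by pushing an uncertain initial state
# through simple logistic dynamics du/dt = u(1 - u).
u0 = rng.normal(1.0, 0.2, size=2000)      # uncertain initial state
t = 0.5
u = u0 / (u0 + (1.0 - u0) * np.exp(-t))   # exact solution at time t

# Kernel density estimator: the observational/empirical PDF over event space U
U = np.linspace(u.min(), u.max(), 200)
f_hat = gaussian_kde(u)(U)

# The estimate should integrate to roughly one over the sampled range
mass = np.sum(0.5 * (f_hat[1:] + f_hat[:-1]) * np.diff(U))
```

This empirical PDF then serves as the data against which the candidate PDF equation's predictions are fitted.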
196
00:33:05.310 --> 00:33:16.050
Daniel Tartakovsky: And then, if you're not careful, if you just minimize the residual, then you are in danger of having the full operator, with all the terms here.
197
00:33:16.320 --> 00:33:27.600
Daniel Tartakovsky: That's probably okay to do, but you run the danger of overfitting your model, and it's also not very elegant, because it's a complex model. So you want to
198
00:33:28.050 --> 00:33:44.160
Daniel Tartakovsky: enforce some sparsity here: you want to have as few terms as possible, so as many of the betas here to be zero as you can. You do that by regularizing your problem, adding this
199
00:33:45.480 --> 00:33:51.660
Daniel Tartakovsky: l1 norm here, so the hyperparameter gamma here determines the
200
00:33:54.330 --> 00:34:07.590
Daniel Tartakovsky: degree of sparsification. And then equation discovery reduces to finding the parameters beta that minimize this residual.
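The sparsity-promoting regression described above can be sketched with a sequential-thresholded least-squares proxy for the l1-penalised problem (a SINDy-style variant, used here for compactness; the advecting-pulse data, the dictionary, and the threshold playing the role of gamma are all assumptions for the example):

```python
import numpy as np

# Synthetic surrogate data: a Gaussian pulse advecting with unit speed, so the
# underlying equation is p_t = -p_x (a hypothetical toy, not the talk's example).
x = np.linspace(-5.0, 5.0, 401)
t = np.linspace(0.0, 1.0, 101)
dx, dt = x[1] - x[0], t[1] - t[0]
X, T = np.meshgrid(x, t, indexing="ij")
p = np.exp(-(X - T) ** 2)

# Finite-difference derivatives (interior points only)
p_t = (p[:, 2:] - p[:, :-2]) / (2 * dt)
p_x = (p[2:, :] - p[:-2, :]) / (2 * dx)
p_xx = (p[2:, :] - 2 * p[1:-1, :] + p[:-2, :]) / dx**2

# Dictionary of candidate local terms: [p, p_x, p_xx]
rhs = p_t[1:-1, :].ravel()
theta = np.column_stack([p[1:-1, 1:-1].ravel(),
                         p_x[:, 1:-1].ravel(),
                         p_xx[:, 1:-1].ravel()])

# Sequential thresholded least squares: refit, zero small coefficients, repeat
beta = np.linalg.lstsq(theta, rhs, rcond=None)[0]
for _ in range(10):
    small = np.abs(beta) < 0.1        # threshold stands in for gamma
    beta[small] = 0.0
    big = ~small
    if big.any():
        beta[big] = np.linalg.lstsq(theta[:, big], rhs, rcond=None)[0]

# Expect beta close to [0, -1, 0]: only the advection term survives
```

The surviving nonzero entries of `beta` identify which dictionary terms belong in the discovered equation.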
201
00:34:11.400 --> 00:34:14.220
Daniel Tartakovsky: So this is
202
00:34:15.360 --> 00:34:18.960
Daniel Tartakovsky: the direct equation learning that we explored.
203
00:34:20.070 --> 00:34:23.910
Daniel Tartakovsky: And constrained equation learning is
204
00:34:25.200 --> 00:34:32.340
Daniel Tartakovsky: basically the same idea, only it says: wait a minute, why do we need to learn this whole
205
00:34:33.390 --> 00:34:40.980
Daniel Tartakovsky: operator here, this whole library, if we know from the previous step that
206
00:34:42.540 --> 00:34:49.200
Daniel Tartakovsky: we know some part of our PDF or CDF equation? We only really need to learn the
207
00:34:51.300 --> 00:35:09.600
Daniel Tartakovsky: closure terms, so we only learn what we absolutely need to learn. And that's the constraint: some part of our PDF equation we know, and only the closure terms need to be learned.
208
00:35:11.040 --> 00:35:28.740
Daniel Tartakovsky: And so, in either case, direct learning or constrained learning, we need to solve this minimization problem, where in general we are minimizing over
209
00:35:29.940 --> 00:35:32.610
Daniel Tartakovsky: coefficients that vary in space and time.
210
00:35:34.350 --> 00:35:37.440
Daniel Tartakovsky: There are different ways that we looked at
211
00:35:38.940 --> 00:35:49.230
Daniel Tartakovsky: of doing that. One is to use simulated annealing; it's great, but it's very
212
00:35:50.790 --> 00:35:52.320
Daniel Tartakovsky: computationally intensive.
213
00:35:53.730 --> 00:36:01.740
Daniel Tartakovsky: Another way is to say: okay, let's approximate these coefficients beta, for example with
214
00:36:03.180 --> 00:36:07.440
Daniel Tartakovsky: polynomials, so that we don't minimize over
215
00:36:08.730 --> 00:36:11.460
Daniel Tartakovsky: functions, but over the coefficients. And this
216
00:36:12.840 --> 00:36:20.640
Daniel Tartakovsky: clearly loses accuracy, but increases efficiency. Another way that we looked at,
217
00:36:21.960 --> 00:36:35.100
Daniel Tartakovsky: but didn't pursue, is to use neural networks; I won't talk about those. So there are different ways to solve this problem; that's, I guess, the bottom line. And here is
218
00:36:36.450 --> 00:36:54.330
Daniel Tartakovsky: how it works, again, on a relatively simple advection-reaction problem. We start with this problem, and suppose that the reaction term is u squared, for simplicity.
219
00:36:55.530 --> 00:37:05.160
Daniel Tartakovsky: And so, to begin with, the only thing that we don't know is the initial state. If you do that, we want to discover this equation
220
00:37:05.700 --> 00:37:18.390
Daniel Tartakovsky: for the PDF for this problem. As I showed you in Part One, you could get the exact PDF equation, so you don't really need to learn anything, but we use it to see how well the method works.
221
00:37:19.290 --> 00:37:28.830
Daniel Tartakovsky: And you could see that it actually does work: in the exact equation here the coefficient is one, and this
222
00:37:30.780 --> 00:37:33.330
Daniel Tartakovsky: equation discovery gives
223
00:37:34.530 --> 00:37:35.940
Daniel Tartakovsky: coefficients close to one.
224
00:37:38.010 --> 00:37:42.150
Daniel Tartakovsky: Okay, so a slightly more challenging problem
225
00:37:43.320 --> 00:38:08.880
Daniel Tartakovsky: is a similar advection-reaction problem, only now the advection velocity is a random variable. As we again discussed in Part One, in general you would not be able to derive an exact equation, so there has to be a closure approximation involved. One closure,
226
00:38:10.650 --> 00:38:23.970
Daniel Tartakovsky: some sort of large-eddy-diffusivity kind of closure, would give you a diffusive term where the diffusion coefficient is essentially the variance of your inputs. And
227
00:38:25.800 --> 00:38:50.580
Daniel Tartakovsky: so, can we recover this result, this physics-based closure that works very well? You could see, in terms of the mean squared error on this data, data not used for training, for solving this minimization problem, that the method works
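The closure claim here, that a random advection velocity shows up in the ensemble statistics as an effective diffusion controlled by the velocity variance, can be checked numerically on a toy case; the pulse width, velocity statistics, and sample size below are assumptions for the illustration, not the talk's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Advect a Gaussian pulse of width sigma0 with a random velocity v ~ N(vbar, sigma_v^2).
# Each realisation solves u_t + v u_x = 0 exactly; the ensemble mean spreads like a
# diffusion, with effective variance sigma0^2 + (sigma_v * t)^2 for this toy case.
x = np.linspace(-10.0, 10.0, 1001)
sigma0, vbar, sigma_v, t = 0.5, 1.0, 0.3, 3.0
v = rng.normal(vbar, sigma_v, size=2000)

profiles = np.exp(-0.5 * ((x[None, :] - v[:, None] * t) / sigma0) ** 2)
u_mean = profiles.mean(axis=0)            # ensemble-average profile

# Measure the spread of the mean profile via its second moment
w = u_mean / u_mean.sum()
m = (w * x).sum()
var_emp = (w * (x - m) ** 2).sum()

var_theory = sigma0**2 + (sigma_v * t) ** 2   # about 1.06 for these numbers
```

The agreement between `var_emp` and `var_theory` is the kind of physics-based structure the constrained learning step keeps fixed, leaving only the closure coefficient to be fitted.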
228
00:38:51.870 --> 00:38:59.160
Daniel Tartakovsky: reasonably well, and that constrained equation learning, which only learns the closure,
229
00:39:00.600 --> 00:39:06.360
Daniel Tartakovsky: outperforms the direct equation learning that learns the whole
230
00:39:07.380 --> 00:39:08.040
Daniel Tartakovsky: differential operator.
231
00:39:10.260 --> 00:39:31.620
Daniel Tartakovsky: Right, so moving on in terms of complexity, what else can we learn? Now let's look at the Burgers equation, where the nonlinearity is now in the flux. Clearly there are shock problems; we looked at them, but in a simple case where
232
00:39:33.390 --> 00:39:42.900
Daniel Tartakovsky: the flux is deterministic, so the uncertainty is constrained to, is limited to, the initial state,
233
00:39:44.430 --> 00:39:46.770
Daniel Tartakovsky: you could again get the exact
234
00:39:47.910 --> 00:39:49.050
Daniel Tartakovsky: PDF equation.
235
00:39:50.910 --> 00:39:55.470
Daniel Tartakovsky: It's in this form. But
236
00:39:56.550 --> 00:39:59.640
Daniel Tartakovsky: it's an interesting equation, because here you could see that
237
00:40:02.010 --> 00:40:14.220
Daniel Tartakovsky: this PDF equation contains both the PDF and the CDF; for Burgers, the CDF of course is an integral of the PDF, so that's a nonlocal
238
00:40:15.510 --> 00:40:16.740
Daniel Tartakovsky: PDF equation.
239
00:40:18.390 --> 00:40:23.880
Daniel Tartakovsky: So can we learn it? And the answer is yes, we could.
240
00:40:25.410 --> 00:40:36.900
Daniel Tartakovsky: The error increases here, but if you are willing to live with this engineering precision, you could learn this equation for the PDF.
241
00:40:40.050 --> 00:40:40.770
Daniel Tartakovsky: But if.
242
00:40:42.540 --> 00:40:44.160
Daniel Tartakovsky: You are not happy with.
243
00:40:45.570 --> 00:40:53.070
Daniel Tartakovsky: that level of precision, then you say: maybe instead of learning the PDF equation, learn the CDF equation.
244
00:40:54.090 --> 00:40:56.100
Daniel Tartakovsky: And when you do that, you.
245
00:40:58.440 --> 00:41:03.420
Daniel Tartakovsky: could do that, and the error is much smaller. And why is that? Because
246
00:41:04.530 --> 00:41:11.910
Daniel Tartakovsky: analytically you could show that the CDF equation is in closed form, and it's local; it's nice.
247
00:41:13.050 --> 00:41:16.200
Daniel Tartakovsky: So this part tells you that,
248
00:41:17.550 --> 00:41:25.500
Daniel Tartakovsky: you know, part two of my talk, learning of PDF equations, or closures for PDF equations, or more broadly constrained equations,
249
00:41:26.400 --> 00:41:40.380
Daniel Tartakovsky: is really affected by what you want to learn. What is your observable: do you want to learn a PDF equation or a CDF equation? And that is not so different from
250
00:41:42.480 --> 00:41:47.520
Daniel Tartakovsky: other learning machinery, where the performance of these methods is
251
00:41:48.930 --> 00:42:02.610
Daniel Tartakovsky: highly affected by the choice of the observable. So that's where human ingenuity sneaks back into machine learning.
252
00:42:04.290 --> 00:42:19.350
Daniel Tartakovsky: So what I hope to have convinced you of is that one could learn, or derive analytically, deterministic equations for the probability density function of our state.
253
00:42:21.000 --> 00:42:25.590
Daniel Tartakovsky: And that PDF equation here defines a dynamic manifold.
254
00:42:26.670 --> 00:42:38.010
Daniel Tartakovsky: on which the PDFs of the state variable live. So we don't need to assume that the PDF is Gaussian;
255
00:42:40.530 --> 00:42:41.280
Daniel Tartakovsky: it could
256
00:42:43.320 --> 00:43:03.720
Daniel Tartakovsky: be multimodal, in fact. So then let's get back to the problem of data assimilation, because after all that is the title of my talk. And the problem is again like this: suppose that we have a dynamical system that we
257
00:43:04.830 --> 00:43:06.000
Daniel Tartakovsky: Have a model for.
258
00:43:07.050 --> 00:43:11.460
Daniel Tartakovsky: In this example it's the advection-reaction equation,
259
00:43:12.960 --> 00:43:19.770
Daniel Tartakovsky: with an unknown and uncertain initial condition and boundary condition.
260
00:43:21.060 --> 00:43:24.720
Daniel Tartakovsky: And then we also have noisy measurements of u.
261
00:43:25.860 --> 00:43:26.460
Daniel Tartakovsky: So.
262
00:43:29.070 --> 00:43:42.990
Daniel Tartakovsky: For this problem we define a dynamic manifold of distributions, and the dynamical manifold is defined by the CDF equation,
263
00:43:44.010 --> 00:43:50.040
Daniel Tartakovsky: which in this case is given analytically; it's exact.
264
00:43:51.540 --> 00:43:58.830
Daniel Tartakovsky: We have the uncertain initial distribution of u; it's given by the
265
00:44:00.360 --> 00:44:07.560
Daniel Tartakovsky: prior distribution on the initial state. We have the CDF of
266
00:44:08.910 --> 00:44:25.530
Daniel Tartakovsky: u at the boundary; it's given by the CDF of the boundary condition. If you don't know much about your initial state and boundary state, you could say these priors are somewhat uninformative, so you use a
267
00:44:29.910 --> 00:44:32.550
Daniel Tartakovsky: uniform distribution; you could also use others.
268
00:44:33.570 --> 00:44:36.720
Daniel Tartakovsky: But this manifold here is,
269
00:44:38.070 --> 00:44:41.220
Daniel Tartakovsky: it's a four-dimensional manifold, so
270
00:44:42.270 --> 00:44:44.070
Daniel Tartakovsky: You could specify.
271
00:44:45.300 --> 00:44:45.930
Daniel Tartakovsky: The.
272
00:44:47.070 --> 00:45:11.100
Daniel Tartakovsky: coordinates on that manifold: x, t, and the parameters in the distributions of the initial state and boundary state. So in this particular case it's a four-dimensional manifold: you specify p-zero and p-b, and these could be,
273
00:45:12.450 --> 00:45:16.740
Daniel Tartakovsky: well, subsets, or not subsets; they don't have to be
274
00:45:18.450 --> 00:45:20.760
Daniel Tartakovsky: one-dimensional. So
275
00:45:22.320 --> 00:45:33.450
Daniel Tartakovsky: if F-zero is a Gaussian random variable, then this p-zero would be the mean and the variance of the Gaussian, and the same thing for p-b. So that's
276
00:45:34.590 --> 00:45:37.980
Daniel Tartakovsky: the caveat. Okay, and so then
277
00:45:40.230 --> 00:46:05.520
Daniel Tartakovsky: we would replace this expensive Bayesian update, which arguably is too expensive to be used in practical problems, with this variational formulation, which gives rise to a sequential and local Bayesian update. So now, instead of using Bayes' rule, what we do is the following.
278
00:46:09.090 --> 00:46:12.570
Daniel Tartakovsky: By solving this minimization problem, we find
279
00:46:13.800 --> 00:46:25.800
Daniel Tartakovsky: the parameters, the points on this manifold that form the coordinates of this manifold. So we are minimizing the
280
00:46:26.760 --> 00:46:45.630
Daniel Tartakovsky: difference between the CDF, in this case, that is predicted by our CDF equation, so just forward propagation of uncertainty, and the observational CDF, the one that incorporates the
281
00:46:47.370 --> 00:47:02.520
Daniel Tartakovsky: observations. So it's local and sequential, and that is the big advantage of this formulation. Our CDFs here are univariate, or PDFs, because
282
00:47:03.720 --> 00:47:17.280
Daniel Tartakovsky: we only talk about a single point in space and a single point in time, where you have measurements, and that significantly reduces the computation, of course. But this is a constrained
283
00:47:18.060 --> 00:47:31.410
Daniel Tartakovsky: minimization, as was pointed out. Classically, what people would do is constrain the posterior and observational CDFs, and
284
00:47:32.340 --> 00:47:51.150
Daniel Tartakovsky: the forward-propagated CDF, to Gaussian spaces. We are saying that it doesn't have to be Gaussian: these CDFs here live on the manifold that is described by this CDF equation. And
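The variational update described above, matching a model CDF on the manifold to the observational CDF at a measurement point, can be sketched in miniature. Here the manifold is deliberately simplified to the two-parameter family of Gaussian CDFs, and all numbers are illustrative; in the talk the manifold comes from the CDF equation itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical noisy measurements of u at one space-time point
true_mu, true_sigma = 0.7, 0.15
obs = true_mu + rng.normal(0.0, true_sigma, size=500)

# Observational (empirical) CDF on an event-space grid U
U = np.linspace(0.0, 1.5, 300)
F_obs = (obs[None, :] <= U[:, None]).mean(axis=1)

def discrepancy(params):
    """L2 discrepancy between a CDF on the (toy) manifold and the data CDF."""
    mu, log_sigma = params
    F_model = norm.cdf(U, loc=mu, scale=np.exp(log_sigma))
    return np.sum((F_model - F_obs) ** 2)

# Minimize over the manifold coordinates (mu, log sigma)
res = minimize(discrepancy, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Because only the CDFs at the measurement point enter the objective, the update is local and can be applied sequentially, one measurement at a time, as described in the talk.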
285
00:47:52.500 --> 00:47:57.450
Daniel Tartakovsky: The result of this minimization is here. So if
286
00:47:58.770 --> 00:48:08.520
Daniel Tartakovsky: you're uncertain about and want to learn the initial state, you want to learn the mean and the variance, or standard deviation, of your
287
00:48:09.690 --> 00:48:14.760
Daniel Tartakovsky: initial state, which was a Gaussian. So you start with the prior.
288
00:48:16.320 --> 00:48:19.020
Daniel Tartakovsky: If you use just the classical.
289
00:48:21.810 --> 00:48:34.500
Daniel Tartakovsky: Bayesian update, and in this simple case you could afford to do that, then after assimilating a number of observations you would end up
290
00:48:35.580 --> 00:48:38.880
Daniel Tartakovsky: With this point here in this space of.
291
00:48:40.140 --> 00:48:50.280
Daniel Tartakovsky: the initial variance and the initial mean. The true value is here, and our
292
00:48:50.940 --> 00:49:09.000
Daniel Tartakovsky: method of data assimilation on distributions, that's how we label our method, gives an estimate that is indistinguishable from the true value, which we know in this case. And so that's how the
293
00:49:10.200 --> 00:49:21.630
Daniel Tartakovsky: full PDF of the prior would look: the true, unknowable initial state is this delta function here, the prior is this wide PDF,
294
00:49:22.650 --> 00:49:32.430
Daniel Tartakovsky: and the posterior PDF is here. Okay, so I will conclude. Again, this
295
00:49:33.450 --> 00:49:34.620
Daniel Tartakovsky: Talk really.
296
00:49:35.910 --> 00:49:47.430
Daniel Tartakovsky: has several parts, and so for the conclusions, the take-home message here is: what
297
00:49:49.110 --> 00:49:58.020
Daniel Tartakovsky: is learning on dynamic manifolds? What it is, is a Bayesian strategy
298
00:49:59.520 --> 00:50:03.330
Daniel Tartakovsky: for data assimilation. And
299
00:50:04.530 --> 00:50:19.980
Daniel Tartakovsky: it consists of three parts. One, we need to define the manifold of PDFs or CDFs, and that's where equation discovery plays a prominent role. And then there is this information-geometric implementation of
300
00:50:21.870 --> 00:50:31.620
Daniel Tartakovsky: learning on this dynamic manifold, which I did not talk much about specifically, primarily because I removed some slides, since I know I'm running
301
00:50:33.210 --> 00:50:33.690
Daniel Tartakovsky: late.
302
00:50:35.550 --> 00:50:46.260
Daniel Tartakovsky: So here, you could see that we used an L2 norm in this example, in this paper. But
303
00:50:47.550 --> 00:50:59.490
Daniel Tartakovsky: to improve the convergence of this minimization problem, note that what we are really doing here is minimizing the difference, or the discrepancy, between two distributions.
304
00:51:00.000 --> 00:51:18.840
Daniel Tartakovsky: And, of course, the L2 norm is not the best norm to do that. So, and we've done that recently, you could replace the L2 norm with either the KL divergence or the Wasserstein distance; doing that significantly improves the
305
00:51:22.170 --> 00:51:23.910
Daniel Tartakovsky: Convergence rate of.
306
00:51:25.230 --> 00:51:41.910
Daniel Tartakovsky: gradient-based methods for minimization of the problem. And, interestingly, the KL divergence enables you to use what in the field is called natural gradient descent. And
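The point about discrepancy measures can be made concrete with two discretised Gaussian PDFs (an illustrative example, not taken from the paper): the KL divergence computed on the grid should match the closed-form value for Gaussians.

```python
import numpy as np

# Two discretised PDFs on the same event-space grid
U = np.linspace(-5.0, 5.0, 1001)
dU = U[1] - U[0]

def gauss(mu, sigma):
    p = np.exp(-0.5 * ((U - mu) / sigma) ** 2)
    return p / (p.sum() * dU)            # normalise on the grid

p = gauss(0.0, 1.0)                      # "model" PDF
q = gauss(0.5, 1.0)                      # "observational" PDF

l2 = np.sum((p - q) ** 2) * dU           # plain L2 discrepancy
kl = np.sum(p * np.log(p / q)) * dU      # KL divergence D(p || q)

# For unit-variance Gaussians, D(p||q) = (mu_p - mu_q)^2 / 2 = 0.125
```

Unlike the L2 norm, the KL divergence treats the arguments as probability distributions, which is what makes the natural-gradient interpretation available.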
307
00:51:42.990 --> 00:51:46.470
Daniel Tartakovsky: that's sort of how you would think of.
308
00:51:47.550 --> 00:51:52.620
Daniel Tartakovsky: it, and it also provides this information-geometric interpretation
309
00:51:54.330 --> 00:51:59.250
Daniel Tartakovsky: of the manifold, of how you would think of the dynamic manifold described by the
310
00:52:00.480 --> 00:52:04.290
Daniel Tartakovsky: CDF or PDF equations. So if
311
00:52:06.150 --> 00:52:08.190
Daniel Tartakovsky: You want to get more details.
312
00:52:09.360 --> 00:52:17.580
Daniel Tartakovsky: it's not possible, I suppose, to follow exactly every step of what I described, so each step here
313
00:52:19.380 --> 00:52:21.780
Daniel Tartakovsky: has references associated with it, so
314
00:52:24.030 --> 00:52:25.140
Daniel Tartakovsky: feel free to.
315
00:52:26.640 --> 00:52:29.160
Daniel Tartakovsky: look at each of them. So
316
00:52:30.480 --> 00:52:32.040
Daniel Tartakovsky: that's it; I will stop
317
00:52:34.590 --> 00:52:35.100
Daniel Tartakovsky: here.
318
00:52:36.180 --> 00:52:38.310
Daniel Tartakovsky: Any questions?
319
00:52:39.720 --> 00:52:40.080
hey.
320
00:52:41.340 --> 00:52:43.230
fausto cattaneo: Thank you very much and.
321
00:52:44.310 --> 00:52:47.070
fausto cattaneo: let's see if there are any questions.
322
00:52:50.250 --> 00:52:52.410
fausto cattaneo: Mihai always has a question; where is Mihai?
323
00:52:55.440 --> 00:52:57.030
Mihai Anitescu: Not this time, I don't.
324
00:53:00.240 --> 00:53:01.890
Mihai Anitescu: I know, I'm just, I'm spellbound.
325
00:53:02.100 --> 00:53:02.430
So.
326
00:53:06.000 --> 00:53:07.050
Daniel Tartakovsky: I know that.
327
00:53:08.640 --> 00:53:19.080
Daniel Tartakovsky: my brother, and the people who work with him or for him, have used a variant of that for this problem that you are looking at,
328
00:53:21.150 --> 00:53:23.280
Daniel Tartakovsky: for electrical grids. And
329
00:53:23.370 --> 00:53:24.270
Mihai Anitescu: That's right, that's right.
330
00:53:25.110 --> 00:53:31.350
Daniel Tartakovsky: Right, and in fact one of my students is going to come your way this summer.
331
00:53:31.590 --> 00:53:37.410
Mihai Anitescu: I know this, right. I mean, you remember, he's worked with the closures for the wind problem.
332
00:53:37.710 --> 00:53:38.640
Daniel Tartakovsky: Like I said.
333
00:53:40.200 --> 00:53:41.580
Mihai Anitescu: i've seen quite a bit of this before.
334
00:53:41.580 --> 00:53:41.790
But.
335
00:53:43.080 --> 00:53:45.960
Mihai Anitescu: I can't say I understood it very well, but I've seen
336
00:53:47.760 --> 00:53:55.650
Daniel Tartakovsky: But just so that you know what's coming: Tyler here is your summer student; I forgot who he is working with.
337
00:53:56.160 --> 00:53:56.550
Mihai Anitescu: Now, no.
338
00:53:57.780 --> 00:54:03.480
Mihai Anitescu: No, you're exactly right about the data assimilation. The part which is a bit different is, of course, that, sort of, I cannot say I
339
00:54:03.990 --> 00:54:17.100
Mihai Anitescu: absorbed very well the statistical manifold part, right. The data assimilation I think I understand, but that part I don't absorb well; that's why, I'm still absorbing it at the moment, I don't know that I have a very smart comment to share.
340
00:54:18.060 --> 00:54:34.860
Daniel Tartakovsky: Right. So I think in your case, if you need the joint PDF for each node of an electrical grid, you end up with thousand-dimensional PDF equations, which makes it useless.
341
00:54:35.880 --> 00:54:36.480
Daniel Tartakovsky: So so.
342
00:54:37.650 --> 00:54:40.920
Mihai Anitescu: I mean, that was always the argument I had with your brother.
343
00:54:40.950 --> 00:54:46.050
Daniel Tartakovsky: Right, and I'm aware of that; that's why I wouldn't do it that way.
344
00:54:47.610 --> 00:54:49.980
Mihai Anitescu: What am I going to do with a three-bus problem, right?
345
00:54:50.010 --> 00:54:52.080
Daniel Tartakovsky: I agree. So
346
00:54:52.110 --> 00:55:00.540
Daniel Tartakovsky: what you would do, I think, I mean, we haven't done that, but I was talking to
347
00:55:02.010 --> 00:55:21.210
Daniel Tartakovsky: my brother about this: you would start with the thousand-dimensional PDF equation, and then you would say, there is correlation structure, right? Not every node is correlated with every other node on a grid, so maybe I could derive,
348
00:55:22.440 --> 00:55:26.550
Daniel Tartakovsky: well, I could compute marginals, right. So I would integrate,
349
00:55:27.810 --> 00:55:45.960
Daniel Tartakovsky: you know, most of those thousand nodes out, and I will get the joint PDF for, say, three of the nodes, to make that manageable. So that's the kind of project I mean.
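The marginalisation idea sketched in this exchange, integrating most nodes out of a high-dimensional joint PDF, looks like this on a toy three-node joint (independent Gaussians assumed purely for illustration; a real grid joint would be far larger and correlated):

```python
import numpy as np

# One-dimensional Gaussian PDF on a grid
g = np.linspace(-4.0, 4.0, 81)
dg = g[1] - g[0]
pdf1 = np.exp(-0.5 * g**2) / np.sqrt(2 * np.pi)

# Joint PDF p(u1, u2, u3) as an outer product (independence, for simplicity)
joint = pdf1[:, None, None] * pdf1[None, :, None] * pdf1[None, None, :]

# Integrate node 3 out to get the lower-dimensional marginal p(u1, u2)
marginal12 = joint.sum(axis=2) * dg

# The marginal is still (approximately) a normalised PDF
mass = marginal12.sum() * dg * dg
```

The same summation pattern, one axis at a time, is how one would reduce a high-dimensional joint to a tractable few-node marginal.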
350
00:55:46.230 --> 00:55:58.080
Mihai Anitescu: No, and actually there was a talk on the second day, since we're talking on the third day, where he was doing something related in France, which is not the same idea, but,
351
00:56:00.090 --> 00:56:10.410
Mihai Anitescu: the same problem. Even if you have a way to reduce the system to a few parameters, well, I don't know, it gets complicated, right. And actually for the grid, for the grid,
352
00:56:10.980 --> 00:56:18.960
Mihai Anitescu: so there's the thing: there are probably circumstances where you can do that, but we don't have enough good examples, and that's sort of,
353
00:56:20.910 --> 00:56:22.230
Mihai Anitescu: So the.
354
00:56:25.290 --> 00:56:33.990
Mihai Anitescu: the grid is weird because of the homogeneity, right; that's really the fundamental problem. It's a very homogeneous medium, and we don't have enough data.
355
00:56:35.730 --> 00:56:40.980
Mihai Anitescu: So I'm not sure the grid would be the place I would look to apply it, from what I understand.
356
00:56:42.750 --> 00:56:43.050
Mihai Anitescu: yeah.
357
00:56:43.470 --> 00:56:46.800
Daniel Tartakovsky: Right, so in this example here we looked at
358
00:56:48.240 --> 00:56:51.000
Daniel Tartakovsky: some neuroscience models, where,
359
00:56:52.140 --> 00:56:55.860
Daniel Tartakovsky: well, we obviously didn't look at billions of neurons together.
360
00:56:55.890 --> 00:57:02.730
Mihai Anitescu: Well, certainly there is another example where I would play with this, to be honest.
361
00:57:03.060 --> 00:57:10.710
Mihai Anitescu: And I talked to Alex about it; of course it's a very complicated question. An example of a place where you'd want to be doing what you guys are doing
362
00:57:11.160 --> 00:57:18.690
Mihai Anitescu: is to decide, for example in climate or weather, whether a hurricane turns left or right. So that's sort of,
363
00:57:19.200 --> 00:57:22.890
Mihai Anitescu: That actually makes perfect sense, but you have to reduce the problem to do exactly what you're.
364
00:57:23.670 --> 00:57:31.200
Mihai Anitescu: suggesting here. There are actually many examples there, like three or four disasters in data assimilation where people actually predicted the storm going the wrong way, which
365
00:57:32.070 --> 00:57:42.900
Mihai Anitescu: is sort of, you know... For stuff like that I think you could probably use it, but it would be a significant amount of work to even get the problem in that format. But that's where, personally, I would look.
366
00:57:43.860 --> 00:57:48.900
Daniel Tartakovsky: Yeah, so I'll contact you about the disasters, where,
367
00:57:50.580 --> 00:57:54.150
Daniel Tartakovsky: you know, the current and future state gives you...
368
00:57:54.210 --> 00:58:00.510
Mihai Anitescu: Exactly right, so you get bimodal distributions, that does occur, and then, sure, a Gaussian doesn't make any sense.
369
00:58:01.050 --> 00:58:08.850
Mihai Anitescu: So that does occur; the grid has one such example, so we're starting to see examples, but they're only simulated,
370
00:58:09.600 --> 00:58:17.430
Mihai Anitescu: where a fluctuation in solar actually results in the separation of grids, say Canada from the rest of the East.
371
00:58:17.790 --> 00:58:27.540
Mihai Anitescu: So there are such examples, but I mean, the amount of work to get anywhere close to predicting that is gigantic, right. So that's really why, you know...
372
00:58:28.410 --> 00:58:30.870
Daniel Tartakovsky: Right. So I want to be very clear,
373
00:58:30.930 --> 00:58:36.420
Daniel Tartakovsky: I don't want to oversell it; it could be a nice niche method.
374
00:58:37.500 --> 00:58:40.680
Daniel Tartakovsky: I'm not claiming that it's a universal
375
00:58:41.790 --> 00:58:43.320
Daniel Tartakovsky: approach that replaces what people are
376
00:58:43.350 --> 00:58:56.370
Mihai Anitescu: doing. And on the other hand, I agree with you that at some point we need to do something more structural; so here I agree with you, but I'm not sure.
377
00:58:57.690 --> 00:58:57.930
Mihai Anitescu: yeah.
378
00:59:02.460 --> 00:59:08.640
fausto cattaneo: Okay, is there anyone else with questions, or comments in general?
379
00:59:10.320 --> 00:59:21.210
fausto cattaneo: I don't see any hands. Okay, then on that note, thank you very much to everybody: the speakers, the participants, the organizers, and...