From 96f9bf31a801294cc3da0648ff164b0720ad8421 Mon Sep 17 00:00:00 2001 From: ctcpip Date: Thu, 19 Oct 2023 15:29:34 -0500 Subject: [PATCH] =?UTF-8?q?=E2=9C=8F=EF=B8=8F=20fix=202023=20notes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .markdownlint-cli2.jsonc | 1 - meetings/2023-01/feb-01.md | 116 ++++++++++++++++----------------- meetings/2023-01/feb-02.md | 82 ++++++++++++------------ meetings/2023-01/jan-30.md | 103 +++++++++++++++--------------- meetings/2023-01/jan-31.md | 127 ++++++++++++++++++------------------- meetings/2023-03/mar-21.md | 20 ++---- meetings/2023-03/mar-22.md | 29 +++------ meetings/2023-03/mar-23.md | 62 ++++++------------ 8 files changed, 243 insertions(+), 297 deletions(-) diff --git a/.markdownlint-cli2.jsonc b/.markdownlint-cli2.jsonc index 9df5643b..a906579f 100644 --- a/.markdownlint-cli2.jsonc +++ b/.markdownlint-cli2.jsonc @@ -6,7 +6,6 @@ "node_modules/**", "meetings/201*/*.md", "meetings/202[0-2]*/*.md", - "meetings/2023-0[1-3]/*.md", "scripts/test-samples/*" ] } diff --git a/meetings/2023-01/feb-01.md b/meetings/2023-01/feb-01.md index 0dd8acb3..8c7eca41 100644 --- a/meetings/2023-01/feb-01.md +++ b/meetings/2023-01/feb-01.md @@ -4,7 +4,7 @@ **Remote attendees:** -``` +```text | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Andreu Botella | ABO | Igalia | @@ -94,7 +94,7 @@ JRL: Yes and no. It’s actually the expectation that I had. But Node’s implem KG: I see. So I guess I have two follow ups. The first is – so for node, it does at least capture automatically if you are using await presumably. -JRL: Yes. `await`, `setInterval`, `setTimeout` `queueMicrotask`. The things that are obviously continuations of the same task but not event listeners. +JRL: Yes. `await`, `setInterval`, `setTimeout` `queueMicrotask`. The things that are obviously continuations of the same task but not event listeners. 
KG: I see. So my second question was towards just getting the mental model – for your mental model, if you want event listeners to automatically inherit the current context, or the context in which they were registered, would that still be true with synchronous dispatch? A dispatchEvent call that doesn’t cause the event to be run on a subsequent microtask tick, but executes right there, like a function call – which context would that be, the one in which the listener was registered or the one in which the dispatch –

@@ -114,7 +114,7 @@ SYG: Thanks for all the leg work you did for finding the other kind of async pro

JRL: This is actually my bonus slides.

-SYG: Okay, cool. But that said, we did – the V8 team discussed some of this proposal and the implementation and performance ramifications. We did still have some concerns. The most relevant one is probably – are you allowed to have unbounded number of async contexts to be propagated? 

+SYG: Okay, cool. But that said, we did – the V8 team discussed some of this proposal and the implementation and performance ramifications. We did still have some concerns. The most relevant one is probably – are you allowed to have an unbounded number of async contexts to be propagated?

JRL: Yes. So you could create a million instances of async context and run them all at the same time in a nested call back or something. One calls the other that calls the other that calls the other. You could have a lot, yes.

@@ -182,7 +182,7 @@ RPR: URL from the queue: https://github.com/wintercg/proposal-common-minimum-api

JRL: That’s the WinterCG’s minimal subset of the local storage. And as DE said, everything that is implemented in this minimal subset can be represented with AsyncContext, or vice versa. They’re similar APIs essentially with slightly different names for methods.

-SYG: Just to add on to what DE was saying about please don’t prematurely ship, yes, please don’t do that. 
And yes it is true that V8 currently has a field in the promise reaction called ContinuationPreserveEmbedderData that is exactly made for storing this kind of data data. But it is kind of un-multiplexed right now. Just preserves whatever you put in there. This is okay. Because all the current cases of data that’s preserved and propagated this way are not user programmable, things like priority that we want to do and things like the task attribution ID. When it becomes user programmable with AsyncContext this might need to change. So please prototype away. The API may shift under your feet as we get to actually implementing this.

+SYG: Just to add on to what DE was saying about please don’t prematurely ship, yes, please don’t do that. And yes it is true that V8 currently has a field in the promise reaction called ContinuationPreserveEmbedderData that is exactly made for storing this kind of data. But it is kind of un-multiplexed right now. It just preserves whatever you put in there. This is okay, because all the current cases of data that’s preserved and propagated this way are not user programmable, things like priority that we want to do and things like the task attribution ID. When it becomes user programmable with AsyncContext this might need to change. So please prototype away. The API may shift under your feet as we get to actually implementing this.

JRL: I would love to have you as part of our discussions with the node folks. We’re actually looking to reimplement async local storage using the continuation preserved embedder data to solve the performance concerns that async local storage currently has.

@@ -216,8 +216,7 @@ DE: Are there any concerns that anybody has beyond what’s been expressed? Any

MM: One concern I have is not with regard to the semantics of the proposal but with regard to how the semantics is expressed in the spec. 
There’s an editorial mistake, if you will, a non-normative mistake we made in the way we wrote down the semantics of registered symbols, which is we wrote the semantics down as if they’re shared global mutable state. And it is very subtle that this global mutable state cannot be used as a communication channel; it would have been easier for us to write down the semantics in a way that made it obvious there’s no global communication channel. I think we should take some of the exploration that we have done about different rewrites for modeling the semantics here, and also do some of that exploration with regard to how to write down the semantics in the internal spec language, so as to avoid the appearance of more shared mutable state than is actually implied by the semantics.

-JRL: Okay. Happy to work on that. We haven’t written any spec text yet. But that would come as part of the Stage 1, Stage 2
-process.
+JRL: Okay. Happy to work on that. We haven’t written any spec text yet. But that would come as part of the Stage 1, Stage 2 process.

SYG: MM, what are you talking about? The ECMAScript spec, where you consider there’s an editorial mistake?

@@ -237,11 +236,11 @@ JRL: So we have a repo, please open up any issues you may have on the repo, we w

### Conclusion

-* AsyncContext is promoted to Stage 1, with explicit support from several delegates.
-* Future work will be needed to develop in terms of investigating the optimizability of this proposal as it scales into more variables, in particular to avoid memory bloat, possibly through limitations on the creation of AsyncContext variables. Develop a definition of semantics as this proposal interacts with various environments, e.g., how AsyncContext is propagated across events, e.g., on the web. Consider editorial improvements to the ECMAScript spec to ensure coherence in the presence of AsyncContext being inherently cross-realm, and per-agent. 
-* **PSA**: Don’t ship AsyncContext in your environment yet, as the API shape may change. If you need something for this capability right now in your environment, consider https://github.com/wintercg/proposal-common-minimum-api/blob/main/asynclocalstorage.md
-* **PSA**: V8 advises against shipping usages of ContinuationPreserveEmbedderData as it does *not* handle multiplexing multiple usages, and is likely to change in the future as we consider AsyncContext.
-* You are welcome to join our public Matrix chat for this topic.
+- AsyncContext is promoted to Stage 1, with explicit support from several delegates.
+- Future work will be needed to investigate the optimizability of this proposal as it scales into more variables, in particular to avoid memory bloat, possibly through limitations on the creation of AsyncContext variables; to develop a definition of the semantics as this proposal interacts with various environments (e.g., how AsyncContext is propagated across events on the web); and to consider editorial improvements to the ECMAScript spec to ensure coherence in the presence of AsyncContext being inherently cross-realm and per-agent.
+- **PSA**: Don’t ship AsyncContext in your environment yet, as the API shape may change. If you need something for this capability right now in your environment, consider https://github.com/wintercg/proposal-common-minimum-api/blob/main/asynclocalstorage.md
+- **PSA**: V8 advises against shipping usages of ContinuationPreserveEmbedderData as it does *not* handle multiplexing multiple usages, and is likely to change in the future as we consider AsyncContext.
+- You are welcome to join our public Matrix chat for this topic.
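
For readers skimming these notes, the `run`/`get` pattern discussed above can be illustrated in userland. This is a hypothetical, synchronous-only sketch – the class name is invented, and it deliberately ignores the hard part of the actual proposal, namely propagating the value across `await`, timers, and event callbacks:

```javascript
// Hypothetical, synchronous-only sketch of the run/get pattern.
// The real AsyncContext proposal also flows the value across await,
// setTimeout, queueMicrotask, etc.; this toy class does not.
class AsyncContextSketch {
  #current = undefined;

  // Make `value` visible via get() for the duration of fn, then restore.
  run(value, fn, ...args) {
    const previous = this.#current;
    this.#current = value;
    try {
      return fn(...args);
    } finally {
      this.#current = previous;
    }
  }

  // Read the value of the nearest enclosing run().
  get() {
    return this.#current;
  }
}

const ctx = new AsyncContextSketch();
const inner = ctx.run(42, () => ctx.get());
console.log(inner);     // 42
console.log(ctx.get()); // undefined – restored once run() returns
```

The save/restore in `finally` is what makes nested `run()` calls compose: the innermost value wins while its callback runs, and the outer value reappears afterwards.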
## ArrayBuffer transfer for Stage 3 @@ -254,19 +253,19 @@ SYG: This is ArrayBuffer transfer and some friends, some related getters and fun SYG: The new stuff to add to ArrayBuffer are `transfer`, which makes a copy of this buffer, meaning literally `this`, the receiver, and then detaches and returns the copy and `transferToFixedLength`. If you were someone who read one of the earlier drafts of the proposal I had called this `fix`, and everyone else thought that was a terrible name and changed to the `transferToFixedLength`, which is what it does. It behaves like `transfer`, but returns a non-resizable buffer and I will go into the cases of when resizability is preserved and what that means in future slides. Getter is `get detached`. This is the authoritative way to find out if an ArrayBuffer is in fact detached. We do not have this in the language currently. All right. -SYG: So for motivation, why would you want transfer? This example, consider you have this `validateAndWrite` function, where the validation is expensive. You await for the validation to finish and then write the ArrayBuffer data to some disk file or you persist it to some storage. And the way you would use it is, the way you use it is like below. The program is there’s a bug in this code. The bug is the validation is asynchronous. Depending on how things are timed, that `setTimeout` could overwrite the data to write to disk in the ArrayBuffer after the validation. So between the two awaits, in the `validateAndWrite` function, the timeout could mess you up basically. After two awaits, that timeout would overwrite the data, it’s in fact not nsafe. What you can do today to get safety? You can copy. You can copy with slice, but this is slow because you have to do a copy. With transfer, what you can do is, you can take ownership. This is a very limited notion. I am using a lay definition of ownership. 
Now Rust has been on the scene, there’s a more sophisticated notion of ownership, but we have a simple notion for ArrayBuffers, which is detached. And what – with transfer you can make this faster by transferring and detaching the original, validating the transferred thing and then writing that. Because of lexical scoping, you can have transfer of the ArrayBuffer because there’s no closures, you are assured after asynchronous validation and firing off the asynchronous write the data is exactly as you expect it.

+SYG: So for motivation, why would you want transfer? In this example, consider you have this `validateAndWrite` function, where the validation is expensive. You await for the validation to finish and then write the ArrayBuffer data to some disk file or you persist it to some storage. And the way you use it is like below. The problem is there’s a bug in this code. The bug is the validation is asynchronous. Depending on how things are timed, that `setTimeout` could overwrite the data to write to disk in the ArrayBuffer after the validation. So between the two awaits, in the `validateAndWrite` function, the timeout could mess you up basically. After two awaits, that timeout would overwrite the data, it’s in fact not safe. What can you do today to get safety? You can copy. You can copy with slice, but this is slow because you have to do a copy. With transfer, what you can do is, you can take ownership. This is a very limited notion. I am using a lay definition of ownership. Now Rust has been on the scene, there’s a more sophisticated notion of ownership, but we have a simple notion for ArrayBuffers, which is detached. And what – with transfer you can make this faster by transferring and detaching the original, validating the transferred thing and then writing that. 
Because of lexical scoping, you can account for every use of the ArrayBuffer – there are no closures – so you are assured, after the asynchronous validation and after firing off the asynchronous write, that the data is exactly as you expect it.

-SYG: So why is this faster? If transfer copies? It’s specified as a copy. But if you call transfer without changing the length of the ArrayBuffer, it could be implemented much more efficiently under the hood as a zero copy move. And even some calls with changing the length could be implemented more efficiently than a full user length copy. If things are aligned in a certain way, you can grow that without having to grow new physical pages and so on. But because the spec can be implemented more efficiently. So going into what the actual semantics are, it takes an optional new length argument. If you don’t pass the new length, it’s set to the current length. It transfers to a new ArrayBuffer of the exact same length. If you pass a negative length it will throw. If the receiver buffer is not resizable, then we preserve resizability, but if the receiver buffer is resizable, the return buffer is is also resizable and has the same max byte length. Resizable buffers are resizable up to some maximum and this maximum is preserved by transfer. If the new length you pass in is greater than the max byte length you get a range error. Any new memory in the new array are zero.

+SYG: So why is this faster, if transfer copies? It’s specified as a copy. But if you call transfer without changing the length of the ArrayBuffer, it could be implemented much more efficiently under the hood as a zero copy move. And even some calls with changing the length could be implemented more efficiently than a full user length copy. If things are aligned in a certain way, you can grow that without having to grow new physical pages and so on. The point is that the spec can be implemented more efficiently. So going into what the actual semantics are, it takes an optional new length argument. 
If you don’t pass the new length, it’s set to the current length. It transfers to a new ArrayBuffer of the exact same length. If you pass a negative length it will throw. Resizability is preserved: if the receiver buffer is not resizable, the returned buffer is not resizable, and if the receiver buffer is resizable, the returned buffer is also resizable and has the same max byte length. Resizable buffers are resizable up to some maximum and this maximum is preserved by transfer. If the new length you pass in is greater than the max byte length you get a range error. Any new memory in the new buffer is zeroed.

SYG: So `transferToFixedLength` behaves exactly like transfer except it returns non-resizable buffers. Here I have a cheat sheet for the range conditions on the new length for these 4 cases. If you transfer a resizable buffer, then the new length must be greater than or equal to zero but lower than or equal to the maximum byte length. In every other case – transferring a fixed length buffer, or `transferToFixedLength` of either kind of buffer – all you have to remember is the new length must be greater than or equal to 0; beyond whatever limits the implementation places on buffer sizes, there’s no max for those other three cases.

SYG: The other friend is the `detached` getter. It’s just a getter without a setter. You can tell if the buffer is in fact detached. Because of the history of how ArrayBuffers were specified, a way for buffers to tell if they have been detached was omitted from the original spec drafts. It was confusing, because how do you observe detached? Some methods throw. Others return sentinel values. When TC39 took over the spec, the intent was that methods should throw, but at that point implementations didn’t update. A few years ago, RKG from PlayStation PR’d and got consensus for normative changes to reflect reality, where on detached buffers we codified the sentinel values for getting index elements. 
That’s all to say that the current state for detecting whether something is detached is complicated, but it’s useful to know if something is detached. So we are adding a detached getter. It might be good to mention that engines all have this in their engine API’s anyway and Node maybe exposes something to user space, but I am not sure. Overall, it’s good to have a small thing. -SYG: One question I want to address here is that a discussion item that came up last time for MAH, was why not copy-on-write buffers?. This is in the service of performance, if we ignored the detached bit, there is no native API to detach something. This is performance that could be transparently had if you implemented copy-on-write ArrayBuffers. Instead of transferring, if you just kept with a copy, but underlyingly that copy is is copy on write, you don’t incur the copy until you mutate the array. Wouldn’t that solve the problem? On paper, yes it would. But the problem is, why would V8 not have implemented copy-on-write buffers. We consider it important security mitigation to have the data pointers. This is like the pointer in the C++ of the ArrayBuffer data. We want that data pointer to be fixed. After you allocate object ArrayBuffer, you don’t want the pointer to move so if there’s bugs in the JIT, like it’s an important optimization that we bake into JIT code for performance. If there are mistakes in the JITs and we move that and bake that pointer in, and we move that pointer due to copy on write that opens up a whole class of letting-you-access-arbitrary-memory bugs. We consider this an important security mitigation to have the data points of ArrayBuffers to be fixed this. The same reason why the resizable buffer have a max length, to keep this, we want the data pointer to be fixed after data allocation. 
If you want to implement copy on write ArrayBuffers the portable way to do that is move the data pointer because you originally have it point to the original backing store, and upon first mutation you do a copy and then repoint everything to the new copy data store. That kind of move would destroy the security mitigation. And for this reason we have never implemented copy on write buffers and we don’t plan to. If there were ways to do this without moving of the data pointer, in theory we would be open to it after assessing complexity. But the only way to do it portably requires deep integration into each OS. And yes. It doesn’t seem like a complexibility we want to take on. We do have copy on write arrays because the same security mitigation concerns don’t really apply to arrays.

+SYG: One question I want to address here is a discussion item that came up last time from MAH: why not copy-on-write buffers? This is in the service of performance: if we ignored the detached bit – there is no native API to detach something – this is performance that could be transparently had if you implemented copy-on-write ArrayBuffers. Instead of transferring, if you just kept with a copy, but underlyingly that copy is copy-on-write, you don’t incur the copy until you mutate the array. Wouldn’t that solve the problem? On paper, yes it would. But the problem is, why would V8 not have implemented copy-on-write buffers? We consider it an important security mitigation to have the data pointers be fixed. This is like the pointer in the C++ of the ArrayBuffer data. We want that data pointer to be fixed. After you allocate an ArrayBuffer, you don’t want the pointer to move if there’s bugs in the JIT – like, it’s an important optimization that we bake into JIT code for performance. If there are mistakes in the JITs, and we bake that pointer in and then move that pointer due to copy on write, that opens up a whole class of letting-you-access-arbitrary-memory bugs. 
We consider it an important security mitigation for the data pointers of ArrayBuffers to be fixed. That is the same reason why resizable buffers have a max byte length: we want the data pointer to be fixed after allocation. If you want to implement copy on write ArrayBuffers, the portable way to do that is to move the data pointer, because you originally have it point to the original backing store, and upon first mutation you do a copy and then repoint everything to the new copied data store. That kind of move would destroy the security mitigation. And for this reason we have never implemented copy on write buffers and we don’t plan to. If there were ways to do this without moving the data pointer, in theory we would be open to it after assessing complexity. But the only way to do it without moving the pointer requires deep integration into each OS. And yes. It doesn’t seem like a complexity we want to take on. We do have copy on write arrays because the same security mitigation concerns don’t really apply to arrays.

SYG: So before I move on to the open question for API design alternatives, I will turn to the queue, any questions about what is presented so far?

-MAH: So I have nothing against adding the transfer API. I want to get that out of the way. However, I really love if we could actually check if it’s possible to do copy-on-write because there are lot of optimization. Like it’s a value optimization for any code. And I am not the only one to think that. I have seen conversations on twitter of people asking for this and why is it not there? And one of the thing like none of us understand is, if there is a detached check today for ArrayBuffers, how is there not a “this array was copied” flag? Is that not the same equivalent check than the detached check? Obviously, if you – if the ArrayBuffer becomes detached you can blindly follow the pointer to the data. So I don’t understand the security argument here. And yeah. 
+MAH: So I have nothing against adding the transfer API. I want to get that out of the way. However, I would really love if we could actually check whether it’s possible to do copy-on-write, because there are a lot of optimizations. Like, it’s a valuable optimization for any code. And I am not the only one to think that. I have seen conversations on twitter of people asking for this and why is it not there? And one of the things none of us understands is, if there is a detached check today for ArrayBuffers, how is there not a “this array was copied” flag? Is that not the equivalent check to the detached check? Obviously, if you – if the ArrayBuffer becomes detached you can blindly follow the pointer to the data. So I don’t understand the security argument here. And yeah.

SYG: I don’t think detached buffers get freed immediately, do they?

@@ -280,7 +279,7 @@ SYG: In a bug free implementation. What if the JavaScript implementation is bugg

MAH: Yes. I mean, you already have an invalid execution here.

-SYG: So what? We still don’t want a render – escape – 

+SYG: So what? We still don’t want a render – escape –

MAH: Are you saying that an invalid JavaScript execution is okay, but if you were – like, the pointer got moved somehow, that would be worse than invalid JavaScript?

@@ -290,7 +289,7 @@ MAH: I don’t understand either because for copy on write, by definition, you a

SYG: If you transfer to a different length – if the API were limited so that you can only transfer to the same exact byte length –

-MAH: I am not talking about `transfer` but ArrayBuffer `slice`. You make a copy of your ArrayBuffer. But you don’t want to incur the costs of allocating new memory. 
So both ArrayBuffers actually point to the same memory behind it, but they have a guard on any write operation to make a copy at that point.

+MAH: I am not talking about `transfer` but ArrayBuffer `slice`. You make a copy of your ArrayBuffer. But you don’t want to incur the costs of allocating new memory. So both ArrayBuffers actually point to the same memory behind it, but they have a guard on any write operation to make a copy at that point.

SYG: That’s correct. Yes.

@@ -320,7 +319,7 @@ DE: Yeah. I can’t think of anything else that makes sense to deal with that wa

SYG: Yeah. Yeah. Do check out that gist, folks. It’s nice and simple. Thanks.

-SYG: Before moving on to asking for Stage 3, there is an issue open, number 6 of API alternative, which is I am proposing here these two methods with basically identity functionality except one returns fixed length and one preserved resizeability of the receiver buffer. What if you had one single method? If you had one single method, how would you communicate to the API that you want to preserve resizeability or you want a fixed length, you want an options bag probably. The pro of having one method. And the con is there is an increase in flexibility. If the fixed length behaviour, you have to pass undefined for it to get the current byte length of the receiver. So it’s maybe slightly less ergonomic for the common use case. I am on the side of keeping with the current design of two separate methods. Transfer being the majority use case, having just one single optional argument with no options bag, and the longer `transferToFixedLength` with the same signature that identity behaviour. I feel that’s a little bit better. I don’t feel super strongly. I don’t think this kind of method needs the options bag. But this is – those are just API design opinions. Be happy to hear if folks have thoughts here.

+SYG: Before moving on to asking for Stage 3, there is an issue open, number 6, on an API alternative: I am proposing here these two methods with basically identical functionality, except one returns fixed length and one preserves the resizability of the receiver buffer. What if you had one single method? 
If you had one single method, how would you communicate to the API that you want to preserve resizability or you want a fixed length? You would want an options bag, probably. The pro is having one method, and an increase in flexibility. The con is that for the fixed length behaviour, you have to pass undefined for it to get the current byte length of the receiver. So it’s maybe slightly less ergonomic for the common use case. I am on the side of keeping with the current design of two separate methods: transfer, being the majority use case, having just one single optional argument with no options bag, and the longer `transferToFixedLength` with the same signature and essentially the same behaviour. I feel that’s a little bit better. I don’t feel super strongly. I don’t think this kind of method needs the options bag. But this is – those are just API design opinions. Be happy to hear if folks have thoughts here.

JHD: Yeah. SYG and I talked about this with the name change, the previous name was less clear. I don’t think it matters which choice we pick. I thought it was worth getting a temperature check of the room as to whether one more complex method or two simpler methods is preferred - and either one is fine.

ACE: +1s as well

DE: I don’t know. Yeah. Good to get explicit support, but also fix ups like this, it’s maybe not quite the same bar as having extremely broad support. Nevermind.

RPR: Okay. Any observations? Or any – weak nonblocking – weak concerns? Okay. All right. Congratulations, you have Stage 3.

DE: Sorry. On the last topic, it seems like MF in the chat has some concerns about the name. Can we jump to that because . . . you are saying we should have spent more time

DE: So, I mean, I am a little confused by this. Let’s follow up off-line in the chat.

MF: Sure. We can talk in the chat. 
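
As an illustration of the take-ownership pattern SYG described earlier, here is a hedged sketch. The `takeOwnership` helper is an invented name, not part of the proposal; since `ArrayBuffer.prototype.transfer` only ships in newer engines, this sketch falls back to `structuredClone` with a transfer list, which also detaches the source buffer:

```javascript
// Illustrative sketch: take exclusive ownership of a buffer's data before
// doing async work, so nothing else can mutate it underneath you.
function takeOwnership(buffer) {
  // Prefer the transfer() method from this proposal where available...
  if (typeof buffer.transfer === 'function') {
    return buffer.transfer(); // may be a zero-copy move under the hood
  }
  // ...otherwise detach via structuredClone's transfer list (a copy).
  return structuredClone(buffer, { transfer: [buffer] });
}

const original = new ArrayBuffer(8);
new Uint8Array(original)[0] = 42;

const owned = takeOwnership(original);
console.log(owned.byteLength);         // 8 – data moved intact
console.log(new Uint8Array(owned)[0]); // 42
console.log(original.byteLength);      // 0 – original is now detached
```

Note that per the sentinel-value behavior discussed above, a detached buffer reports `byteLength` of 0 rather than throwing, which is why the last line works even without the proposed `detached` getter.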
-* Note: Discussion in Matrix showed nothing further to follow up on re: process. If MF had bigger concerns about the name, he says he would have blocked. +- Note: Discussion in Matrix showed nothing further to follow up on re: process. If MF had bigger concerns about the name, he says he would have blocked. ### Conclusion/Resolution -* Stage 3 with the names `transfer` and `transferToFixedLength`. +- Stage 3 with the names `transfer` and `transferToFixedLength`. ## Intl era and monthCode for Stage 2 @@ -365,15 +364,15 @@ FYT: Hi, everybody. I am Frank Yung-Fong Tang. I work with Google. In the intern FYT: So the scope. We tried to narrow the scope. The proposal is already in stage 3, so we tried not to touch it because at least in the developmental phase, we tried not to bring in too much complication to the development of Temporal. But the part that to amend to, that part we could separately, independently in parallel develop this proposal so it could be implemented both time for any engine and they could say "we only implement the ECMA-262 version, ship it" or "we want to implement the not ISO 8601 calendar, but we need a clear definition what the code is have to combine with the proposal to address that". So the focus is really on calendars other than ISO8601. So we really feel there’s a need for the additional requirement, the need for implementation purpose and that we need to define some detail. Not too much detail. But enough detail that could be merged into 402 spec for the usage with Temporal for these other calendars. -FYT: So let’s look into this. So let’s talk about a little bit history. So the history is this: in the May 2022, in the TG2 meeting, we had a discussion we need to have such need in forming the proposal. 
And so after discussion of that in June 16, I open the repro for Stage 0, and I put it into the Stage 0 page; and then in October, original I was naming Intl Temporal, but it referred to as the scope of this proposal is actually pretty limited and could be a little bit misleading. It only touches a very smart part of the proposal. So they suggested we change the name. So we change it quarterly. In November, we come here and agreed to advance to Stage 1. During that discussion, there are couple questions raised and we take it very seriously, and I will tell you more later on. The thing is that also, during that discussion, one of the questions in highlight is if this is the correct standard body to define this? Should this be, instead, discussed in CLDR and our spec refer to it. And that’s a very interesting and very insightful feedback. We take it seriously. In high level, I will tell you the detail later. But in high level what we did is that in December 7, I filed some bugs and went to CLDR TC and discuss with them and share in CLDDR TC, and basically they agree to accept the particular file. And the CLDR chair basically say, well, we – CLDR release that once every six months. So the next CLDR will be released 3 months from now and they are close to close up the changes in review process now. In that time, the agreement in CLDR is that we should form a working group and to define the eraCode and target the CLDR 43 and starting to define, then, this particular proposal will start to refer to that. We still need to point out that we follow this standard of the other nonexistent standard, we have to at least define thing we referring to. And they nicely agreed to chair the working group. And in early this year, just like two weeks ago, the working group [...] and have a draft. +FYT: So let’s look into this. So let’s talk about a little bit history. 
So the history is this: in the May 2022, in the TG2 meeting, we had a discussion we need to have such need in forming the proposal. And so after discussion of that in June 16, I open the repo for Stage 0, and I put it into the Stage 0 page; and then in October, original I was naming Intl Temporal, but it referred to as the scope of this proposal is actually pretty limited and could be a little bit misleading. It only touches a very small part of the proposal. So they suggested we change the name. So we change it accordingly. In November, we come here and agreed to advance to Stage 1. During that discussion, there are couple questions raised and we take it very seriously, and I will tell you more later on. The thing is that also, during that discussion, one of the questions in highlight is if this is the correct standard body to define this? Should this be, instead, discussed in CLDR and our spec refer to it. And that’s a very interesting and very insightful feedback. We take it seriously. In high level, I will tell you the detail later. But in high level what we did is that in December 7, I filed some bugs and went to CLDR TC and discuss with them and share in CLDR TC, and basically they agree to accept the particular file. And the CLDR chair basically say, well, we – CLDR release that once every six months. So the next CLDR will be released 3 months from now and they are close to close up the changes in review process now. In that time, the agreement in CLDR is that we should form a working group and to define the eraCode and target the CLDR 43 and starting to define, then, this particular proposal will start to refer to that. We still need to point out that we follow this standard of the other nonexistent standard, we have to at least define thing we referring to. And they nicely agreed to chair the working group. And in early this year, just like two weeks ago, the working group [...] and have a draft.
-FYT: So in this particular proposal, is that we tried to limit our definition of proposal to find a set of calendars which are already defined in CLDR. We are talking about which eraCode to define for the calendar. We tried to only limit it to what are already defined CLDR, because there are other calendars we know of in the world for which is not well-documented is not yet have an identifier. For example, [...] we all know they exist, but have not yet been listed in CLDR data, a calendar ID. For those thing we try not to include yet. Later on we may amend that. The second thing is, for each of this calendar which are already defined in CLDR with ID, we tried to define a set of valid era and monthCode for that particular calendar. For example, for Gregorian calendar, if they pass M13, we should throw exception. It’s an invalid month code for Gregorian. However, for Coptic calendar, every Coptic year has 13 months. The 12 big month. Like 30-day roughly. About 29, 30 day. But every year, there’s a small month, the 13th month, with 5 days or six or 7, depend on whether the leap year or not. One leap year will have one small day, and a 13 month. In that case we should take M13 as the 13th month. But not M14. Right? And also for Chinese calendar, there’s leap month was let’s say you have March, the third month and after the third month of the year, have a leap month, which many Chinese referring to leap third month. N03L, so basically, Chinese calendar should be able to have M01 to 12 and M01L to M12L. But not Gregorian. You should throw exception. So the set of monthCode should also be defined what is a settable set for that calendar. Similar to era. What era could be acceptable. That is something to try to define. And we probably also should design the semantic and high level of what era and eraYear are, and monthCode as I mentioned before. Right? Maybe if there’s something very simple, we can define a conversion. 
But that is something – that part we can still discuss in Stage 2. Probably not for all the calendars, because a lot of calendar are very difficult to tackle. For example,Buddhist calendar, the difference between that and Gregorian calendar is – how do you say that? Is the starting point of the zero. Right? So today, I think I forget which year, there are 2005 or something. It shifted a couple of years. It’s not exactly the same. Similarly, the ROC calendar in Taiwan, it’s shifting the Gregorian era and starting from 1912. So those are very simple shifting of era. That might not need to be. We think the semantic, not the algorithm should be defined because that is the API surface. What kind of thing to get in and what kind of thing could be returned should be defined, in terms of algorithms for this kind of logic is complicated and we may not be able to do it and not necessarily need to do so. Of course, there is a desire to do that, but that is yet another topic. +FYT: So in this particular proposal, is that we tried to limit our definition of proposal to find a set of calendars which are already defined in CLDR. We are talking about which eraCode to define for the calendar. We tried to only limit it to what are already defined CLDR, because there are other calendars we know of in the world for which is not well-documented is not yet have an identifier. For example, [...] we all know they exist, but have not yet been listed in CLDR data, a calendar ID. For those thing we try not to include yet. Later on we may amend that. The second thing is, for each of this calendar which are already defined in CLDR with ID, we tried to define a set of valid era and monthCode for that particular calendar. For example, for Gregorian calendar, if they pass M13, we should throw exception. It’s an invalid month code for Gregorian. However, for Coptic calendar, every Coptic year has 13 months. The 12 big month. Like 30-day roughly. About 29, 30 day. 
But every year, there’s a small month, the 13th month, with 5 days or six or 7, depend on whether the leap year or not. One leap year will have one small day, and a 13 month. In that case we should take M13 as the 13th month. But not M14. Right? And also for Chinese calendar, there’s leap month was let’s say you have March, the third month and after the third month of the year, have a leap month, which many Chinese referring to leap third month. M03L, so basically, Chinese calendar should be able to have M01 to 12 and M01L to M12L. But not Gregorian. You should throw exception. So the set of monthCode should also be defined what is an acceptable set for that calendar. Similar to era. What era could be acceptable. That is something to try to define. And we probably also should design the semantic and high level of what era and eraYear are, and monthCode as I mentioned before. Right? Maybe if there’s something very simple, we can define a conversion. But that is something – that part we can still discuss in Stage 2. Probably not for all the calendars, because a lot of calendar are very difficult to tackle. For example, Buddhist calendar, the difference between that and Gregorian calendar is – how do you say that? Is the starting point of the zero. Right? So today, I think I forget which year, there are 2005 or something. It shifted a couple of years. It’s not exactly the same. Similarly, the ROC calendar in Taiwan, it’s shifting the Gregorian era and starting from 1912. So those are very simple shifting of era. That might not need to be. We think the semantic, not the algorithm should be defined because that is the API surface. What kind of thing to get in and what kind of thing could be returned should be defined, in terms of algorithms for this kind of logic is complicated and we may not be able to do it and not necessarily need to do so. Of course, there is a desire to do that, but that is yet another topic.
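The per-calendar validity rules described above (Gregorian: M01–M12; Coptic: M01–M13; Chinese: M01–M12 plus leap months M01L–M12L) can be sketched in userland. This is only an illustration of the code sets the proposal would standardize, not proposed API; the helper names are hypothetical:

```javascript
// Hypothetical helper: the set of monthCodes a given CLDR calendar accepts,
// per the rules described in the discussion above.
function validMonthCodes(calendar) {
  const codes = new Set();
  const months = calendar === 'coptic' ? 13 : 12; // Coptic has a small 13th month
  for (let m = 1; m <= months; m++) {
    const code = `M${String(m).padStart(2, '0')}`;
    codes.add(code);
    if (calendar === 'chinese') codes.add(`${code}L`); // leap-month variants
  }
  return codes;
}

// An implementation would reject codes outside the calendar's set,
// e.g. M13 for Gregorian:
function checkMonthCode(calendar, monthCode) {
  if (!validMonthCodes(calendar).has(monthCode)) {
    throw new RangeError(`${monthCode} is not valid in the ${calendar} calendar`);
  }
}
```

Pinning down these sets is exactly what the proposal wants specified, so that engines don't each accept different codes.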
-FYT: So since I am going to bring up to Stage 2, in TG2 we have additional requirement. This is not needed for 262 proposals, but in TG2, two years ago we have this agreement, anything we want to bring up for Stage 2 we should pass three tests. One is prior art: is anything like that been before? We try not to invent new thing. We try to have something proven to work. The second thing is the proposal, difficult to implement in userland. The third thing is whether there is a broad appeal. Is there really wide usage for this specific thing. So here’s the reason to justify this is needed: is because we doubt a clear set of era codes and semantic of monthCode of each calendar system. The JavaScript language cannot easily implement this without the ambiguity. Until we define that clearly, my fear is that different browser engine, when support Temporal with different calendar may accept different set of eras. Or whenever the set month code that were treated differently. So that’s the only minimum we are trying to avoid. And here is one example. The top part, I copy from a preexisting Temporal test in stage 2, this is not actually a good task because the blue parts you can see are actually undefined. There’s no place in 262 or 402 or in the Temporal proposal currently defined the token `ce`. You think that’s common sense. Right? But there is no way `ce` is acceptable. What does that mean? So, you know, in a way, this current test is kind of not a valid test for Temporal at this point. Even though it probably should be. And therefore we need to define acceptable codes and bring forth whatever we mean and therefore to make this legal. Right? The other two other example. One is showing you, for example, by using BCE era. And year -0001. - 0002. - 01. Because there’s no year zero for Gregorian calendar. And also for Japanese, they have different era codes for ?? things like in 2019, so 2023 . . . 
February ‘23 will be year 5, and the monthCode will be 2 This is the first day this year. Japanese empire. +FYT: So since I am going to bring up to Stage 2, in TG2 we have additional requirement. This is not needed for 262 proposals, but in TG2, two years ago we have this agreement, anything we want to bring up for Stage 2 we should pass three tests. One is prior art: is anything like that been before? We try not to invent new thing. We try to have something proven to work. The second thing is the proposal, difficult to implement in userland. The third thing is whether there is a broad appeal. Is there really wide usage for this specific thing. So here’s the reason to justify this is needed: is because we doubt a clear set of era codes and semantic of monthCode of each calendar system. The JavaScript language cannot easily implement this without the ambiguity. Until we define that clearly, my fear is that different browser engine, when support Temporal with different calendar may accept different set of eras. Or whenever the set month code that were treated differently. So that’s the only minimum we are trying to avoid. And here is one example. The top part, I copy from a preexisting Temporal test in stage 2, this is not actually a good task because the blue parts you can see are actually undefined. There’s no place in 262 or 402 or in the Temporal proposal currently defined the token `ce`. You think that’s common sense. Right? But there is no way `ce` is acceptable. What does that mean? So, you know, in a way, this current test is kind of not a valid test for Temporal at this point. Even though it probably should be. And therefore we need to define acceptable codes and bring forth whatever we mean and therefore to make this legal. Right? The other two other example. One is showing you, for example, by using BCE era. And year -0001. - 0002. - 01. Because there’s no year zero for Gregorian calendar. And also for Japanese, they have different era codes for ?? 
things like in 2019, so 2023 . . . February ‘23 will be year 5, and the monthCode will be 2 This is the first day this year. Japanese empire. FYT: Prior arts in era, most are using numeric for int. Microsoft .NET have calendar years. Java, int. Java have a newer API code. Java error. They do have defined that thing. They also have defined a particular enum for that. Javaera. The java era also the new API actually have a value of method returner string. Only for java era. They have ICU4X, they have several classes that have string era codes there. Those are the prior arts. Most of the prior arts actually in the int. But we currently don’t really think int is a good way to pass this. Maybe it is acceptable but we really think that should be a string value. Not just a string of a number. But a string of text. -FYT: So we bring the thing for stage 1 advancement in December and we got two very important feedback. One is from API and he says "was CLDR. Like the people working on this, have they been posed? Are they against creating a identifier for those to we choose the solution for it? Are they okay with this?” This is the code from the text with some elimination. But I think I capture his main points. Is it okay for us, TC39, to define this? The second thing is around, they mention that – I think I mentioned in the past we have some termination of some other fill. But he mentions that most of the things for are not specific individual language or calendar system like that. So I think both show the preference that we don’t do this work. We let CLDR do that. It’s good feedback. We go to the following path to address whatever that true feedback. One is, I filed an issue to track the issue. And as I mentioned I talked to CLDR TC. Working group, they during that period a PR 6225 to the 35 and the CLDR for that. And the proposal have been shown to ICU TC a lot of overlapping between this and CLDR. And they have some minor comment, but basically agree. 
So that proposal is currently under review by the CLDR TC. What we try to do is I changed the spec text to return to assuming that thing will be put in. It’s not yet, but probably will be published in April 2023. So I am currently thinking to whether update the proposal so far. And I change the spec referring to that. And I also only define a subset for that defined era. The reason for the subset definition is that we tried not to bring in the pre-Meiji Japanese era into the JavaScript standard, and the reason they told us, what happens is, CLDR have definition of 264 or something Japanese era. But only 5 or 6 are reliable and have useful meaningful. The Japanese pre-Meiji, about 1860 or something, I forgot the era, is really meaningless for Japanese calendar. Although the era exist, historian have argument when it started. And Japanese not using Gregorian calendar today, but the Japanese lunar calendar. But they are too difficult to get rid of them in the CRTR. So our proposal is that not to include those thing in the definition. So here is the draft of spec text. One of the Stage 2 requirements is to have initial draft. As you see, I put a table here to show from the changed proposal in CLDR to list what calendar have what is there is what kind of era could be acceptable for the calendar and whether to have aliases and how to map those aliases to that. For example, we will have Gregorian, Gregory as a calendar and Gregory as an era and CE and AD list for the Gregory, which are both accetable. But whenever we return, we only return Gregory as the era code in some case, time have that. What is the range of era acceptable or not? So for a lot of calendar, they are from negative infinity to infinity. For example, for Gregory, 0 shouldn’t be accepted. The minimum era year is 1. It shouldn’t have a zero BCE. Detail we can discuss. But we list here the thing. And so on and so forth. 
And there’s a process to connect the code to the era in calendar, which era year for the calendar is valid, I haven’t figured to plug that in, it should probably reject if the era year didn’t make sense. And so – which is a valid month code for the calendar. For example, most of the calendar will have M01 to M12. But then Chinese may have another 12 possibility. So on so forth. +FYT: So we bring the thing for stage 1 advancement in December and we got two very important feedback. One is from API and he says "was CLDR. Like the people working on this, have they been posed? Are they against creating a identifier for those to we choose the solution for it? Are they okay with this?” This is the code from the text with some elimination. But I think I capture his main points. Is it okay for us, TC39, to define this? The second thing is around, they mention that – I think I mentioned in the past we have some termination of some other fill. But he mentions that most of the things for are not specific individual language or calendar system like that. So I think both show the preference that we don’t do this work. We let CLDR do that. It’s good feedback. We go to the following path to address whatever that true feedback. One is, I filed an issue to track the issue. And as I mentioned I talked to CLDR TC. Working group, they during that period a PR 6225 to the 35 and the CLDR for that. And the proposal have been shown to ICU TC a lot of overlapping between this and CLDR. And they have some minor comment, but basically agree. So that proposal is currently under review by the CLDR TC. What we try to do is I changed the spec text to return to assuming that thing will be put in. It’s not yet, but probably will be published in April 2023. So I am currently thinking to whether update the proposal so far. And I change the spec referring to that. And I also only define a subset for that defined era. 
The reason for the subset definition is that we tried not to bring in the pre-Meiji Japanese era into the JavaScript standard, and the reason they told us, what happens is, CLDR have definition of 264 or something Japanese era. But only 5 or 6 are reliable and have useful meaningful. The Japanese pre-Meiji, about 1860 or something, I forgot the era, is really meaningless for Japanese calendar. Although the era exist, historian have argument when it started. And Japanese not using Gregorian calendar today, but the Japanese lunar calendar. But they are too difficult to get rid of them in the CRTR. So our proposal is that not to include those thing in the definition. So here is the draft of spec text. One of the Stage 2 requirements is to have initial draft. As you see, I put a table here to show from the changed proposal in CLDR to list what calendar have what is there is what kind of era could be acceptable for the calendar and whether to have aliases and how to map those aliases to that. For example, we will have Gregorian, Gregory as a calendar and Gregory as an era and CE and AD list for the Gregory, which are both accetable. But whenever we return, we only return Gregory as the era code in some case, time have that. What is the range of era acceptable or not? So for a lot of calendar, they are from negative infinity to infinity. For example, for Gregory, 0 shouldn’t be accepted. The minimum era year is 1. It shouldn’t have a zero BCE. Detail we can discuss. But we list here the thing. And so on and so forth. And there’s a process to connect the code to the era in calendar, which era year for the calendar is valid, I haven’t figured to plug that in, it should probably reject if the era year didn’t make sense. And so – which is a valid month code for the calendar. For example, most of the calendar will have M01 to M12. But then Chinese may have another 12 possibility. So on so forth. RPR: FYT, there’s 5 minutes left. @@ -381,17 +380,17 @@ FYT: Yes. 
So those are some initial drafts. This is the entrance for Stage 1, wh RPR: Any questions on the proposal? -RPR: Okay. No questions. +RPR: Okay. No questions. RPR: Any positive support for Stage 2? -USA: +1 for Stage 2. +USA: +1 for Stage 2. USA: Thank you for being very receptive and I support Stage 2. MF: This might be my misunderstanding of how aliases are intended to be used, but I see like we have a single letter alias for the Japanese eras and it doesn’t include any of the era names themselves. Like the actual Japanese characters assigned to that era name. Is that intentionally omitted, or is that accidentally omitted? -FYT: That’s a good question. What I try to do is reflecting what is proposed in the CLDR. And my understanding is intentionally. And this brings up the interesting place, right? We try to define it here, or we try to just copy whatever got defined in CLDR. Therefore, I try to discuss whether that is a good or bad thing here. If we tried to define it here, then that’s a good place to discuss. If only thing we try to define is try to copy from them, then the discussion should happen in CLDR instead of here. That’s what I tried to discuss here, but that’s not the feedback. +FYT: That’s a good question. What I try to do is reflecting what is proposed in the CLDR. And my understanding is intentionally. And this brings up the interesting place, right? We try to define it here, or we try to just copy whatever got defined in CLDR. Therefore, I try to discuss whether that is a good or bad thing here. If we tried to define it here, then that’s a good place to discuss. If only thing we try to define is try to copy from them, then the discussion should happen in CLDR instead of here. That’s what I tried to discuss here, but that’s not the feedback. MF: I agree with that decision to defer those things to CLDR. @@ -411,13 +410,13 @@ FYT: Wait a second. I don’t think that’s his suggestion. Is that what he’s MF: It is. 
My suggestion was, I guess more of a question . . . it was about additional aliases -FYT: Oh, additional. Okay. +FYT: Oh, additional. Okay. MF: Yeah FYT: I thought you were talking about H and N here. I see. Okay -RPR: We have heard one expression for support from USA. Is there a second? Are there any messages for any other positive messages for advancing this? +RPR: We have heard one expression for support from USA. Is there a second? Are there any messages for any other positive messages for advancing this? RPR: JHX has + 1 for Stage 2 - EOM. Thank you. All right. And no concerns @@ -427,7 +426,7 @@ FYT: I also need to ask for to 3 people for stand up for Stage 3 reviewer at thi RPR: Who wants to be a stage 3 reviewier. -RPR: EAO volunteers in the chat. +RPR: EAO volunteers in the chat. FYT: Thank you. @@ -441,8 +440,8 @@ RPR: We will be back at the top of the hour. Ask just a reminder we added in the ### Conclusion/Resolution -* Got explicit support and consensus to advanced into Stage 2 -* Stage 2. EAO and SFC volunteered as Stage 3 reviewers +- Got explicit support and consensus to advanced into Stage 2 +- Stage 2. EAO and SFC volunteered as Stage 3 reviewers ## Temporal, naming of `.calendarId` and `.timeZoneId` @@ -458,7 +457,7 @@ JHD: Then hopefully that can have consensus, and one less item we have to talk a ### Conclusion/Resolution -* Consensus on `Id` spelling in properties as presented yesterday +- Consensus on `Id` spelling in properties as presented yesterday ## Symbol predicates @@ -484,14 +483,13 @@ SYG: I’m completely aligned with MM there. I will just support Stage 2 and als JHX: +1 for Stage 2. -JHD: We have consensus for Stage 2. I heard preferences for static methods. If there’s anyone who would not be content with Stage 3 -with static methods, I would love to hear your feedback in advance of the next plenary. If you have not commented on GitHub please do so or reach out to me privately. Thank you. +JHD: We have consensus for Stage 2. 
I heard preferences for static methods. If there’s anyone who would not be content with Stage 3 with static methods, I would love to hear your feedback in advance of the next plenary. If you have not commented on GitHub please do so or reach out to me privately. Thank you.

USA: Congratulations JHD.

### Conclusion/Resolution

-* Stage 2 for static methods
+- Stage 2 for static methods

## Decorator/export ordering

@@ -610,7 +608,7 @@ JHD: It makes a local variable called `default` with that value?

RBN: No, it does not. It creates an exported binding named default. It only creates an export binding. The name export is part of list of bound names of the export module. Or the imported names.

-JHD: I’m saying that the -- the export will be named `”default”` if you `import *` it, but there is no relationship between the local changes it makes and the exported changes it makes. I know that if you `export let` something, that that’s not the case, or, you know, `export var` something, because it’s a live binding. If you `export const` something, you can’t observe there is any connection between the local binding stuff and the consumer stuff. 
+JHD: I’m saying that the -- the export will be named `"default"` if you `import *` it, but there is no relationship between the local changes it makes and the exported changes it makes. I know that if you `export let` something, that that’s not the case, or, you know, `export var` something, because it’s a live binding. If you `export const` something, you can’t observe there is any connection between the local binding stuff and the consumer stuff.

RBN: Yeah, default is primarily observable in the -- in that it also sets the name of the class.

JHD: Right. But only when it’s directly there. A decorated anonymous class, na

RBN: That decorated, it should still get the name. The name should still be assigned. The decorator is still in effect.
Assuming no class decorate more between replaces the constructor with something else. The name should still come from the assigned name or the default that’s provided. -DE: Yeah, it’s fine if we want to draw a higher level analogies, like, that export kind of looks like it’s taking an expression. But it’s very straightforward about what’s a declaration versus what’s an expression and what we choose is, like, what kind of higher level mental model we want to attach to that. And we could decide either way. +DE: Yeah, it’s fine if we want to draw a higher level analogies, like, that export kind of looks like it’s taking an expression. But it’s very straightforward about what’s a declaration versus what’s an expression and what we choose is, like, what kind of higher level mental model we want to attach to that. And we could decide either way. USA: All right. Next up we have Shu. -SYG: We do? Oh, yes, yes. The thing about -- yeah, can you speak to why existing migration techniques like code mods are insufficient to help TypeScript. +SYG: We do? Oh, yes, yes. The thing about -- yeah, can you speak to why existing migration techniques like code mods are insufficient to help TypeScript. -DRR: Yeah, basically there’s always a level of how easy it is to migrate, right? Like, the easiest migration is you upgrade and everything works magically then there’s some level of, okay, I have to switch a flag. In this case, that’s part of the migration. Then having to have users also say, like, I’m also going to run this tool for my code base and what not, it’s okay, except there’s lulls a risk of the code mod not being correct and might lose trivia, like comments, white space, things like that. And it’s a pain that doesn’t actually upgrade the knowledge that’s been built up over the years as well around, like, you know, existing documentation, things like that. That, you know, has the opportunity to still be valid, right? 
So, yeah, I mean, you can just say run a code mod, done deal, right? But not everyone knows what code mod exists or how to do it or whatever. So there’s a degree to how easy we want that make this and I would like to make this as easy as possible. +DRR: Yeah, basically there’s always a level of how easy it is to migrate, right? Like, the easiest migration is you upgrade and everything works magically then there’s some level of, okay, I have to switch a flag. In this case, that’s part of the migration. Then having to have users also say, like, I’m also going to run this tool for my code base and what not, it’s okay, except there’s lulls a risk of the code mod not being correct and might lose trivia, like comments, white space, things like that. And it’s a pain that doesn’t actually upgrade the knowledge that’s been built up over the years as well around, like, you know, existing documentation, things like that. That, you know, has the opportunity to still be valid, right? So, yeah, I mean, you can just say run a code mod, done deal, right? But not everyone knows what code mod exists or how to do it or whatever. So there’s a degree to how easy we want that make this and I would like to make this as easy as possible. SYG: So let me respond to that real quick. So Kevin wrote a code mod just now in the past 15 minutes, super coder there. But I guess what I’m trying to figure out is what I heard are some pretty -- fully generic arguments about the pain of any upgrade path, and I totally agree with them. It’s a question, it’s an exercise in line drawing on how easy we want to make it. I was looking for some color on why does this decorator ordering thing make you feel like the right thing is to request a change here instead of pursuing something slightly more painful like code mods. -DRR: I mean, I don’t know. Like, it’s not just one thing. It’s not just the transition cost. It’s also -- I mean, everything. Right? Why are -- why is the spec even different, right? 
We haven’t gotten any feedback saying I want this different from our side, and we had this feature for years. But I mean, I agree, right? Everything is possible. Right? Maybe the code mod is perfect, doesn’t lose anything. I haven’t tried it, right? But, you know, I think there is something to be said about just trying to do the right thing in this case for users. +DRR: I mean, I don’t know. Like, it’s not just one thing. It’s not just the transition cost. It’s also -- I mean, everything. Right? Why are -- why is the spec even different, right? We haven’t gotten any feedback saying I want this different from our side, and we had this feature for years. But I mean, I agree, right? Everything is possible. Right? Maybe the code mod is perfect, doesn’t lose anything. I haven’t tried it, right? But, you know, I think there is something to be said about just trying to do the right thing in this case for users. RBN: Daniel, can I also speak to this. @@ -652,17 +650,17 @@ USA: All right. Thank you for taking this async. We seem to be out of time, but RPR: I know there was a request for a temperature check. If you think that’s an essential path forward, we can try and schedule some time for it tomorrow. -DRR: If we have time, it sounds like we have a fairly open schedule tomorrow afternoon, right? Or tomorrow in general. +DRR: If we have time, it sounds like we have a fairly open schedule tomorrow afternoon, right? Or tomorrow in general. RPR: There is time tomorrow, yes. -DRR: Yeah, I don’t want to drag this out too long, but I don’t want to -- I don’t to eat into more time. Can we schedule maybe ten minutes tomorrow? Or 15 if you think it would be better. +DRR: Yeah, I don’t want to drag this out too long, but I don’t want to -- I don’t to eat into more time. Can we schedule maybe ten minutes tomorrow? Or 15 if you think it would be better. USA: All right. Yeah, let’s do ten minutes overflow tomorrow. 
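The ordering being debated is syntactic only: under either `@timed export class ...` (TypeScript's long-shipped order) or `export @timed class ...` (the order in the current proposal text), the decorator wraps the same class declaration. A hand-applied sketch of that shared semantics (all names are illustrative, and native decorator syntax is elided since it still needs a transpiler today):

```javascript
const log = [];

// A class decorator: replaces the class with a subclass that records construction.
function timed(Class) {
  log.push(`decorated ${Class.name}`);
  return class extends Class {
    constructor(...args) {
      super(...args);
      log.push(`constructed ${Class.name}`);
    }
  };
}

// Both keyword orders evaluate to roughly this: decorate the class,
// then bind (and export) the decorated result under the name `Service`.
const Service = timed(class Service {});

new Service();
// log: ['decorated Service', 'constructed Service']
```

Because the runtime behavior is identical either way, the disagreement is purely about which surface syntax to permit, which is why migration tooling (codemods) is even on the table.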
### Conclusion/Resolution -* To be discussed further +- To be discussed further ## Async explicit resource management @@ -671,18 +669,18 @@ Presenter: Ron Buckton (RBN) - [proposal](https://github.com/tc39/proposal-async-explicit-resource-management/) - [slides](https://onedrive.live.com/redir?resid=934F1675ED4C1638!295595&authkey=!AKWyo9TWP2xQRM4&ithint=file,pptx&e=GmlwFX) -RBN: Back in the November two plenary, we discussed splitting off the async functionality from the ex-plus it resource management proposal to potentially advance separately pending a compromise or consensus that could be reached around the explicit syntax, whether or not we needed an explicit marker for the block scope. So I wanted to bring the proposal back, to discuss where the current -- what the current status is, and see if we’re at a point where we believe we can advance to Stage 3. So I have in here my standard motivations slide that I presented before resource management. And this is applied both to the sync does async versions of the proposal. Essentially, the main motivator for the proposal is to simplify these inconsistent patterns for resource management, provide a cleaner way to handle resource scoping and resource lifetime, avoid a number of common foot guns and lengthy code. And I can dig into many of these examples, but we have all kinds of different cases of return, release lock, close, end, there’s all kinds of different ways of cleaning up these resources that are often inconsistent. Sometimes they are -- they look synchronous but actually are not. So actually, working with these things and the consistent manager becomes much more complicated. Resource lifetime is -- can be tricky. Because you have to manage the handles construction outside of the try-finally block, therefore, it has a lifetime outside of that -- that scope. 
And ensure that you actually use the handle, but it’s the declaration essentially that sticks around to be further used in your code outside of the try finally. In cases like release lock, not having a consistent way to kind of marry the -- a declaration to its lifetime makes it easy to forget things like releasing locks, later on in your code, so being able to do this and not have to add try finally staff holding is helpful. Also, again, the consistency of -- or the foot gun of incorrect resource ordering, if A were to somehow -- sorry, if B were to somehow depended on A to properly close itself, closing them out of order could result in an exception, and in the case of trying to avoid a scaffolding, trying to to things the right way where resources are handled in the correct order, often requires a lot of complicated nesting that makes code harder to read, it pushes the thing you’re trying to do further to the right as it gets further and further nested. These applied both to the sync and async versions of the proposal. But here is to really kind of get into the meat of what the async proposal provides, I’ll show some motivating examples. One example here is a three face commit distribute transaction system or even non-distributed transitioning, where the commit of that resource or potential roll back of that change requires a period of time where you cannot be considered to block the main thread or you don’t want to block the main thread. This example shows using some type of transaction manager to start a transaction between two accounts where you want to debit an amount from one account and credit to the other account, and if all of these operations succeed, you can mark the transaction as successful so that it is committed at the end. 
If either of the debit or the credit fails, maybe there wasn’t enough money in the account, maybe the account you’re trying to credit to wasn’t available, then either one of these two options could throw an exception, which would prevent the code from ever reaching the point of marking success. So then the transaction needs to then go through its commit rollback cycle. But to do so requires, again, more operations that may require network requests or file requests, therefore, you want to be able to await those rather than block the main thread. Another example of this might be using something like a writable stream? Node JS allowing you to write data, but then forgetting to call end or the fact that end in node JS looks like it’s synchronous, but there is an event that you can listen to that tells you when it’s actually finished, and maintaining this ordering is -- or maintaining this --

+RBN: Back in the November plenary, we discussed splitting off the async functionality from the explicit resource management proposal to potentially advance separately, pending a compromise or consensus that could be reached around the explicit syntax, whether or not we needed an explicit marker for the block scope. So I wanted to bring the proposal back, to discuss where the current -- what the current status is, and see if we’re at a point where we believe we can advance to Stage 3. So I have in here my standard motivations slide that I presented before for resource management. And this applies both to the sync and async versions of the proposal. Essentially, the main motivator for the proposal is to simplify these inconsistent patterns for resource management, provide a cleaner way to handle resource scoping and resource lifetime, and avoid a number of common foot guns and lengthy code.
And I can dig into many of these examples, but we have all kinds of different cases of return, release lock, close, end; there’s all kinds of different ways of cleaning up these resources that are often inconsistent. Sometimes they are -- they look synchronous but actually are not. So actually, working with these things in a consistent manner becomes much more complicated. Resource lifetime is -- can be tricky, because you have to manage the handle’s construction outside of the try-finally block; therefore, it has a lifetime outside of that -- that scope. And to ensure that you actually use the handle, it’s the declaration essentially that sticks around to be further used in your code outside of the try finally. In cases like release lock, not having a consistent way to kind of marry the -- a declaration to its lifetime makes it easy to forget things like releasing locks later on in your code, so being able to do this and not have to add try/finally scaffolding is helpful. Also, again, the consistency of -- or the foot gun of incorrect resource ordering: if A were to somehow -- sorry, if B were to somehow depend on A to properly close itself, closing them out of order could result in an exception, and in the case of trying to avoid scaffolding, trying to do things the right way where resources are handled in the correct order often requires a lot of complicated nesting that makes code harder to read; it pushes the thing you’re trying to do further to the right as it gets further and further nested. These apply both to the sync and async versions of the proposal. But here, to really kind of get into the meat of what the async proposal provides, I’ll show some motivating examples.
One example here is a three-phase commit distributed transaction system, or even non-distributed transactions, where the commit of that resource or potential rollback of that change requires a period of time where you cannot afford to block the main thread or you don’t want to block the main thread. This example shows using some type of transaction manager to start a transaction between two accounts where you want to debit an amount from one account and credit it to the other account, and if all of these operations succeed, you can mark the transaction as successful so that it is committed at the end. If either the debit or the credit fails -- maybe there wasn’t enough money in the account, maybe the account you’re trying to credit to wasn’t available -- then either one of these two operations could throw an exception, which would prevent the code from ever reaching the point of marking success. So then the transaction needs to go through its commit rollback cycle. But to do so requires, again, more operations that may require network requests or file requests; therefore, you want to be able to await those rather than block the main thread. Another example of this might be using something like a writable stream in Node.js, allowing you to write data, but then forgetting to call end, or the fact that end in Node.js looks like it’s synchronous, but there is an event that you can listen to that tells you when it’s actually finished, and maintaining this ordering is -- or maintaining this -- evaluation and scoping is very important, because you might in the next step want to open the file, and if you created that writable stream exclusively, then trying to open it while it hasn’t finished the actual commit would be a problem. So making sure you actually have a correct and consistent way of managing that lifetime is important. So we’ve gone through many different variations of this, as I described in the kind of history slide when the sync version of this was proposed yesterday.
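The transaction example can be sketched with the try/finally pattern that `using await` is intended to streamline. This is a rough simulation, not the proposal’s actual machinery; `makeTransactionManager`, `transfer`, and the `dispose`/`succeeded` members are all hypothetical names:

```javascript
// Hypothetical sketch of the three-phase-commit example: the transaction's
// async cleanup runs in `finally`, which is the work `using await` would do
// implicitly at the end of the block.
function makeAccount(name, balance) {
  return { name, balance };
}

function makeTransactionManager() {
  return {
    async startTransaction(from, to) {
      const snapshot = [from.balance, to.balance];
      return {
        succeeded: false,
        // With the proposal this would be a `[Symbol.asyncDispose]` method,
        // awaited implicitly when the enclosing block exits.
        async dispose() {
          if (!this.succeeded) {
            // Roll back; a real system would await network/file I/O here.
            [from.balance, to.balance] = snapshot;
          }
        },
      };
    },
  };
}

async function transfer(manager, from, to, amount) {
  const tx = await manager.startTransaction(from, to);
  try {
    if (from.balance < amount) throw new Error("insufficient funds");
    from.balance -= amount; // debit
    to.balance += amount;   // credit
    tx.succeeded = true;    // only reached if both operations succeed
  } finally {
    await tx.dispose();     // the interleaving point `using await` makes implicit
  }
}
```

If either the debit or the credit throws, `tx.succeeded` is never set and the `finally` block awaits the rollback, mirroring the commit/rollback cycle described above.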
So what I’ll show you today is kind of where we settled on the syntax that we’re hoping to use. And that is in the form of using await declarations. So very similar to the using declaration, you can define a using await declaration anywhere you would be in an async context, so this would be inside of async functions or async generators or async arrow functions, or at the top level of a module where top-level await is permitted. Just like the normal using declaration, these are block scoped; at the end of the block, there is an implicit await for any resources that have been registered, essentially for these using await declarations that have been initialized. So in the example of a using await variable, taking that expression, the value of that expression and its `Symbol.asyncDispose` method, or `Symbol.dispose` as a fallback, would then be captured at the using await declaration, and then at the end of the block, those dispose methods would be called in the reverse order they were added. We also support a using await in a for declaration head, just like we do for the normal using declaration. They are also supported in for of and for-await of, and I’ll get to the duplication of await in this statement here in a later slide. So, again, the using await declaration on its own is only allowed in async functions. These declarations, much like normal using declarations, are immutable constant bindings. They also do not support binding patterns, just like using declarations, and they are also again not supported at the top level of a script by the nature of the fact that those scripts are not async and also due to existing restrictions. Much like using declarations, lifetime is scoped to the current block scope container, and the RAII style -- the resource acquisition is initialization style -- that we’re using for these declarations allows you to avoid excess block nesting and, again, makes sure these resources are scoped to what contains them.
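The block-exit behavior described here — resources registered as their declarations are evaluated, then disposed in reverse order with an await at block exit — can be sketched with a hand-rolled helper. `withAsyncScope`, `use`, and `asyncDispose` are hypothetical scaffolding standing in for the proposal’s syntax:

```javascript
// Hypothetical helper: `use(r)` plays the role of `using await r = ...`,
// and the `finally` block plays the role of the implicit await at block exit.
async function withAsyncScope(body) {
  const resources = [];
  const use = (resource) => { resources.push(resource); return resource; };
  try {
    return await body(use);
  } finally {
    // Dispose in reverse registration order, awaiting each result.
    for (const resource of resources.reverse()) {
      await resource.asyncDispose();
    }
  }
}

// Example: two resources registered in order a, b are disposed b, a.
const order = [];
const demo = withAsyncScope(async (use) => {
  use({ asyncDispose: async () => order.push("a") });
  use({ asyncDispose: async () => order.push("b") });
});
```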
Using await declarations introduce an implicit interleaving point at the end of a block, and this has been one of the major sticking points for syntax as we’ve been discussing it for the past several years. The syntax we’ve chosen has a couple things. One, we know -- we do not have an explicit marker at the head of a block. This is something we discussed at length with Mathieu Hofman and Mark Miller. Throughout various iterations of this proposal, there were discussions about using being in an expression context, which could make it very easy for it to become buried somewhere within evaluation. But the using await declaration itself, because it is a statement, has to be essentially at the top of that -- or essentially at the same level of nesting as any other statement of that block, so in well formatted code, it’s easy to recognize where a using await statement occurs. Some choices that we’ve made around evaluation are that if you exit a block before you ever evaluate or initialize a using await declaration, there would be no implicit await. This is my -- my contention or my position here is that code that you don’t execute shouldn’t have side effects, essentially. And for a using await declaration that you actually never initialize, having that cause an implicit await would be an unfortunate consequence if that were required. And we don’t currently require that if you have, for example, an if statement that has an await in one block and not in the other. Having that mandate that -- or having a mandate that you await even if you never encounter code that has an await keyword, we don’t feel would be the right semantics. So, again, we choose that if you never execute a using await initializer, then we don’t actually perform an await. However, if you evaluate a using await declaration and its initializer, we will await even if this value is null or undefined, which is the conditional case we talked about before.
In these cases, you have essentially evaluated this await keyword or you’ve reached a point of execution where the await keyword was -- excuse me, where the declaration itself was initialized, so as you step through this, we’ve essentially indicated there is a registration of an async interleaving point that will happen. So, again, this was a long-standing requirement from Mark, that implicit interleaving points be marked with await or yield, and as it stands within the specification today, every single place where you -- where we await is explicit in some form, so that would be the await expression itself, or a for-await declaration. After some of the discussion that we had since splitting off the proposal, Mark was willing to drop the requirement, with some of the discussions we’ve had and some compromises around things like always -- always awaiting when we hit -- when we evaluate these declarations. And there are some -- if there is the case where you are in a code base that wants a more explicit marking of these sections, it’s perfectly feasible to use comment-based markers and a linter to perform validation to ensure that you’ve commented that you’re doing so. This is -- this has often been the case with things like having an empty block or having implicit fallthrough. In addition, there’s potential for editors to use features like syntax highlighting, editor decorations, inlay hints, et cetera, to highlight the presence of these interleaving points. So actually, there is -- so Justin has a topic on the queue that I think is worth addressing at this point. Justin, can you go ahead?

JRL: Okay, I didn’t mean to interrupt you if you wanted to go.

RBN: This is a good point.

-JRL: So we -- we have `using await` as the keyword marker that this is going to schedule an async disposal will happen after the current block is finished. But we’re not awaiting at the point of the using. We’re waiting at the end of a block.
So is await the correct keyword to use here, or should we be doing `using async` instead to highlight that there’s no asynchronicity yet? It will only be asynchronous at close?

+JRL: So we -- we have `using await` as the keyword marker that this is going to schedule an async disposal that will happen after the current block is finished. But we’re not awaiting at the point of the using. We’re awaiting at the end of a block. So is await the correct keyword to use here, or should we be doing `using async` instead to highlight that there’s no asynchronicity yet? It will only be asynchronous at close?

RBN: So the position that I hold, and I would have to ask Mark what his specific -- to clarify his position on this as well, but while it’s -- while there’s potential for us to use something like async using, using async I don’t think is valid because async isn’t actually a reserved word in an async context, so it’s perfectly feasible to have `async` as an identifier, so that would break -- that would break potential for refactoring in those cases. So using await indicates an await will happen. Async has no connotation for that. You can have an async function that you never await. You can invoke it and never look at its results -- probably bad practice -- and no await occurs; the await is the subsequent operation that will happen. Async is a confirmation of a syntactic transformation that will occur. Async indicates a thing that you can await explicitly yourself. And every instance of await in the language today indicates that an await will happen at some point. For-await, for example, has both an await during -- when the -- it enters the block -- or enters the loop as it starts to read these resources, but it also has an await at the end of the block, so there is both an await whose consequence is immediate and an await whose consequence is potentially deferred to later. So we believe that using await is the correct syntax to use for this case.
And we have a fairly strong preference for that. If we were to choose syntax using the async keyword, it would mostly be async using, because there is less potential for conflict, but there is still also conflict with overlap with arrow functions that introduces complexity around cover grammars. So that’s where my position stands right now. And I think Mark said he’s happy to clarify his position as well.

MM: Yes. So first of all, I agree with everything Ron said, and it does highlight all of my major points. I want to address one additional thing, which is the projection of the awaiting to the end of the block, you know, was the syntactic hangup.
The reason -- one of the reasons why I’m happy with the proposal as is, with the await keyword being at the using point even though the awaiting does not happen at the using point, is that even for the synchronous using, explicit resource management, the users -- you know, people writing code and reading code will rapidly come to understand when they see a using that it’s projecting the interleaving of some cleanup code -- some interleaving, but interleaving of additional cleanup code -- at the end of the block, where -- or the end of the block is not otherwise marked. So you already have to start to understand when you see a using that that implies some additional computation happening at the end of the block. I think the using await extends that notion to simply say, well, carry the meaning of the await to what happens at the end of the block. I think that will rapidly become intuitive. And the reason to use await rather than async is exactly what Ron said, which is async does not mark an interleaving point; it does not mark that an interleaving point necessarily happens.

JRL: Okay.

@@ -692,7 +690,7 @@ WH: The first time I saw it, I expected `using await` to await at the point of e

RBN: Do you have more to that, or is -- this is -- do you have more to add to this?

WH: What I’m saying is this will become a constant point of confusion, and it will become an education problem for incoming users. I don’t have a better solution to it.

MM: No matter what we do, there will be some confusion. There’s no one -- we’ve been around this enough to know that there’s no one answer here that will not violate the principle of least surprise for some programmers. So it’s a question of choosing which rude surprises we’re imposing. That doesn’t -- that observation doesn’t decide the issue.
But it -- but I do think there’s no option that avoids any unpleasant surprises for anyone.

WH: Yeah, I’m just saying that we may have a problem here.

MM: Yeah.

RBN: I would say that -- so if you consider the intuition of for-await of, we don’t await the expression that we iterate over. We check if it has a `Symbol.asyncIterator`. This was an intuition people were easily able to adapt to with for-await. I don’t see it as an intuition people have trouble attaching to using await either. You don’t await the expression. You await the consequence of that expression. And I think Kevin has a reply to this as well.

KG: Yes, just to second what WH said. In discussion of this proposal on the repository, at least one person has already expressed the intuition that they expected using await to perform the await there, so I agree that if we choose using await, we are opting into confusing everyone forever, which maybe we’re okay with, and maybe there’s a story that we can tell that makes it okay, but I do think that we should be aware that we are opting into confusing -- sorry, not literally everyone, but a very large percentage of readers for the rest of the language's life. And I really think `async using` would be less confusing. I know we will never avoid some confusion for all programmers, and we’re just choosing what is less confusing. I really think async using would be less confusing.

@@ -719,14 +717,13 @@ SYG: But that cuts both ways. Why doesn’t that apply to using async?
RBN: Well, again, it’s consistency with other things within the language. There’s another point that I was going to get to, but I’ve lost what that was. Yeah, again, my preference, as stated, is to try to maintain that consistency as much as possible. And so --

SYG: Well, okay, and to reiterate again, what about the consistency of the intuition that await means awaiting, like, the nearest right-hand side expression that gets evaluated right there? That’s the intuition that is causing confusion, and this breaks that intuitive consistency, and what is the response to why it’s okay to break that consistency? Or why it’s more preferred?

RBN: I remember the other point I was going to get to. But I’ll get to that in a moment. The other one is, if the await were in an expression position, then, yes, I would imagine that it is -- that that makes sense, that it’s an immediate thing. But it is in a portion of a declaration or a statement, which, again -- for-await has other interesting semantics than the immediacy of the await, which make it necessary to pay a bit more attention when you break out of a for of -- or a for-await of -- in that there is still actually an await that occurs as you exit that code. So that is one case. The other is that there are actually more than a few cases in other languages that have similar capabilities that use either async or await, depending on the language. In Python, I believe, `async with` is their choice. I don’t believe that -- I can’t recall, actually, what Python’s use of await is.
In languages like C#, await is used for a similar declaration, and they actually use the ordering await using in their case, but that doesn’t work for us because of the fact that using is a valid identifier, so that would break existing expressions or be -- result in a cover grammar that seems potentially unnecessary and confusing. But they use await using because they consistently use await for all the same places we use await within JavaScript, and they consistently use async for all the same places we use async within the language. So I tend to lean towards C#’s design because it is more aligned with what we use within the JavaScript language.

USA: Mark?

-MM: Okay, so, yeah, there is a remaining clarification that’s worth stating. It’s only a clarification. Everything said so far including about my position is accurate. The clarification is that it’s not that people will get used to it just from scratch. It’s that given the mind shift that people already have to invest to understand just using, the synchronous using, is they’re already having to project the understanding that there is implied code execution at a later closed curly, which looks otherwise like an unmarked closed curly. So they already have to understand that there is synchronous code execution happening there. The -- so my argument that people will get used to the meaning of await here as being projected rides on the fact -- on the assumption that people will already have done the investment in projecting some implied code execution to the bottom of the block. Additionally, I want to say that this conversation has made me, I think, able to better characterize what the two possible confusions are, and this does not, by the way, decide the issue, but I think it brings clarity, which is if we say using await, there’s a possible confusion of thinking that there is an implied await where there is not.
If we use async using, the possible confusion is to not know that there is an interleaving where there is one. So it’s, you know -- it’s a, you know, type 1 versus type 2 -error, and then the question is which is more dangerous. And that’s not clear. From my perspective, they’re both quite dangerous. My intuition is that missing an interleaving point is more dangerous than thinking there’s an interleaving point where there isn’t any, but that’s very tentative and I can argue that in both ways. +MM: Okay, so, yeah, there is a remaining clarification that’s worth stating. It’s only a clarification. Everything said so far including about my position is accurate. The clarification is that it’s not that people will get used to it just from scratch. It’s that given the mind shift that people already have to invest to understand just using, the synchronous using, is they’re already having to project the understanding that there is implied code execution at a later closed curly, which looks otherwise like an unmarked closed curly. So they already have to understand that there is synchronous code execution happening there. The -- so my argument that people will get used to the meaning of await here as being projected rides on the fact -- on the assumption that people will already have done the investment in projecting some implied code execution to the bottom of the block. Additionally, I want to say that this conversation has made me, I think, able to better characterize what the two possible confusions are, and this does not, by the way, decide the issue, but I think it brings clarity, which is if we say using await, there’s a possible confusion of thinking that there is an implied await where there is not. If we use async using, the possible confusion is to not know that there is an interleaving where there is one. So it’s, you know -- it’s a, you know, type 1 versus type 2 error, and then the question is which is more dangerous. And that’s not clear. 
From my perspective, they’re both quite dangerous. My intuition is that missing an interleaving point is more dangerous than thinking there’s an interleaving point where there isn’t any, but that’s very tentative and I can argue that in both ways. RPR: I’d like this get to Waldemar’s topic and then maybe get to Daniel’s later on in the presentation. Unless it’s specific to this Waldemar. @@ -747,11 +744,11 @@ WH: I still think there’s a problem there. MM: Can we all agree that there’s a problem there, and that both sides are primarily -- both sides of this debate are primarily motivated by trying to alleviate user confusion, the problem is there’s two different user confusions and each side of the debate only alleviates one of them. -RBN: Yeah, and one thing I will say is that the -- the points of -- the current syntax and the choices that we’ve made are an intent to find a compromise between the -- these two sides as well as the -- some of the additional complexity and cost that would have been associated with having to indicate the specific block. It just -- we’re trying to, I guess, find a compromise or a middle ground that we can have here. And I’m more than willing to come back and address this as we get towards to end, but I want to make sure I’m able to cover did rest of the slides so we can come back to this topic later on, if I can. And can I defer your comment to late or is it something you want to bring up now? Daniel? +RBN: Yeah, and one thing I will say is that the -- the points of -- the current syntax and the choices that we’ve made are an intent to find a compromise between the -- these two sides as well as the -- some of the additional complexity and cost that would have been associated with having to indicate the specific block. It just -- we’re trying to, I guess, find a compromise or a middle ground that we can have here. 
And I’m more than willing to come back and address this as we get towards the end, but I want to make sure I’m able to cover the rest of the slides so we can come back to this topic later on, if I can. And can I defer your comment to later, or is it something you want to bring up now? Daniel?

DE: Yes, please, defer it.

-RBN: So I can come back to that at the -- as we go on in a little bit further in the presentation. The next thing that I know might be potentially contentious is the use of using await in for loops. So much like a using declaration at the head of a for declaration, you would be able to use a using await here. This introduces a constant binding. This is not a per iteration bindings since per iteration bindings only apply to mutable bindings. So it’s -- and the way the spec is currently written and the current semantics are these constant bindings are only evaluated once and are scoped the life of the entire loop. So it -- follows those same semantics. In the case of for of and for-await of, these binding are per iteration. Which is consistent with how these variables are defined on each loop. And we’ve made a distinction on how for-await of and for of work such that they are consistent with how -- sorry, using await is consistent with how for-await works as well. In that -- actually, I think I may have a more detailed slide on this shortly. No. So the -- there’s -- we’ve discussed this a little bit on the issue tracker as well, but there is this potential for it seeming somewhat repetitious to have a for-await and the using await in the same statement. And this is -- we’ve chosen the direction we have here because of the semantics for await work and forrate way work. They’re similar there how that evaluation is performed. When you perform a for of on a async iterable that is not also -- there’s not also defined as simple iterator, it will throw. So this is essentially a run time check of the input. I’m sorry. Let me reshare that.
So there is --

+RBN: So I can come back to that at the -- as we go on in a little bit further in the presentation. The next thing that I know might be potentially contentious is the use of using await in for loops. So much like a using declaration at the head of a for declaration, you would be able to use a using await here. This introduces a constant binding. This is not a per iteration binding, since per iteration bindings only apply to mutable bindings. So it’s -- and the way the spec is currently written and the current semantics are, these constant bindings are only evaluated once and are scoped to the life of the entire loop. So it -- follows those same semantics. In the case of for of and for-await of, these bindings are per iteration, which is consistent with how these variables are defined on each loop. And we’ve made a distinction on how for-await of and for of work such that they are consistent with how -- sorry, using await is consistent with how for-await works as well. In that -- actually, I think I may have a more detailed slide on this shortly. No. So the -- there’s -- we’ve discussed this a little bit on the issue tracker as well, but there is this potential for it seeming somewhat repetitious to have a for-await and the using await in the same statement. And this is -- we’ve chosen the direction we have here because of the semantics of how for of and for-await work. They’re similar in how that evaluation is performed. When you perform a for of on an async iterable that is not also -- that does not also define `Symbol.iterator`, it will throw. So this is essentially a runtime check of the input. I’m sorry. Let me reshare that. So there is -- Yeah, so there is a -- essentially a runtime type check of the fact that you can’t iterate an async iterable in a synchronous for of. And for-await does an explicit check for `Symbol.asyncIterator` before then falling back to the sync iterator.
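The for-of/for-await runtime checks described here can be observed directly in today’s semantics — `for await` checks for `Symbol.asyncIterator` and falls back to `Symbol.iterator`, while synchronous `for...of` over an object that only defines `Symbol.asyncIterator` throws. A small runnable sketch (the `sum` helper and `asyncOnly` object are illustrative names):

```javascript
// `for await` falls back to the sync iterator, so a plain array works.
async function sum(iterable) {
  let total = 0;
  for await (const n of iterable) total += n;
  return total;
}

// An object that is only async-iterable: fine for `for await`,
// but a runtime TypeError for synchronous `for...of`.
const asyncOnly = {
  async *[Symbol.asyncIterator]() {
    yield 1;
    yield 2;
  },
};
```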
But that’s a check against the presence of that method on that object: if it doesn’t have Symbol.asyncIterator or Symbol.iterator, then again it will result in a run time exception. Similarly, the using declaration looks for a dispose method, and if it’s not present, that will result in a run time exception. Using await looks for an async dispose -- a Symbol.asyncDispose method -- and if that doesn’t exist, falls back to dispose, and if that doesn’t exist either, there’s a runtime exception. Each one has a run time type check that is performed against the values that you are working with. So we opted to make it very explicit what you’re intending to do here. If you want to use a synchronous resource -- and thus any potentially asynchronous resource that does not have asynchronous dispose should be an error -- that case would be made evident by not specifying an await in the using declaration. Because, again, we are opting into tracking the async dispose in the using await case but not in the normal using case. So we’ve opted for those specific semantics for this. Yes. And that’s my final bullet point here: the for-await of an asynchronous disposable for block is only asynchronous when an await actually occurs. We won’t magically enlist an asynchronous dispose if you opted not to put in the await, just like we don’t automatically enlist an asynchronous dispose in a normal for loop. We believe that having this specific syntax and the explicit preference to opt into which behavior you want is important. And finally, async disposal semantics are roughly the same as the dispose semantics: if the initialized value is null or undefined, then we don’t do any registration. Well, I should clarify -- we do perform a registration, but we don’t throw an exception.
The thing that we register is that some await -- an async interleaving point -- must occur at the end of the block, so when it’s initialized to null or undefined, there will still be an await later on. If it’s neither null nor undefined, we will attempt to read either async dispose or dispose, and if neither exists, we throw. If the method that we find does exist but isn’t callable, we throw, and we record the value in the lexical environment to ensure we perform cleanup at the end. So, the async disposable interface. This is a spec interface similar to the Iterator and AsyncIterator spec interfaces and the Disposable spec interface. It describes basically an object with a Symbol.asyncDispose method, with the expectation that invoking that method indicates the caller is done with the object, that its lifetime has ended, and cleanup should occur. This would be used by the semantics of the using await declaration and the AsyncDisposableStack class. When an exception is thrown or a rejected promise is returned from async dispose, it most likely indicates the resource could not be freed. A Symbol.asyncDispose method should perform necessary cleanup for an object, and all of these “shoulds” are essentially the same as in the definition for a synchronous disposable. It should avoid throwing an exception if it’s called more than once, but that’s not required; an async dispose should return a promise, and this is again consistent with AsyncIterator -- you can write a non-conforming AsyncIterator that just returns a synchronous result and the spec will still do awaits in the right places, because it will still await the results. There is an open issue on the naming of the symbol. Right now, the symbol is Symbol.asyncDispose, which matches the parallel symbol -- asyncDispose is to dispose as asyncIterator is to iterator.
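As a sketch, an object conforming to the interface just described might look like this (the `FileHandle` name is hypothetical, and a local fallback symbol is used in case the host doesn’t define `Symbol.asyncDispose` yet):

```javascript
// Fallback so the sketch runs on engines that predate the proposal.
const ASYNC_DISPOSE = Symbol.asyncDispose ?? Symbol("Symbol.asyncDispose");

// A hypothetical resource implementing the async disposable interface:
// the method performs cleanup, tolerates being called more than once,
// and (being an async method) returns a promise.
class FileHandle {
  #open = true;
  async [ASYNC_DISPOSE]() {
    if (!this.#open) return; // avoid throwing on a second call
    this.#open = false;
    // ... flush and close the underlying handle here ...
  }
  get isOpen() {
    return this.#open;
  }
}
```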
But this doesn’t match the naming convention for non-symbol methods that exist, such as the `disposeAsync` that’s provided or built-ins like `Atomics.waitAsync`, etcetera, where async comes at the end. So for now, we have chosen to match the Symbol.asyncIterator naming convention, yet this doesn’t quite match the behavior of asyncIterator, because the Symbol.asyncIterator method is expected to be called and return an object, while the async dispose method is itself what we call. I don’t know if anyone on the committee has a preference for changing the asyncDispose symbol name to change its order, or if I should maintain the current direction. I’d like to give just a moment for anyone to chime in if they have any particular concern.

@@ -804,7 +801,7 @@

USA: Last up, thank you, Waldemar. Last up we have --

MM: So I think that’s -- I didn’t hear my name, but I think that’s me last. I want to say I support this going to Stage 3. Plus one on this. I appreciate Ron’s patient engagement with us during the whole process. It resulted in us better understanding the deep meaning of our own objections, so thank you very much, Ron, for all of that. And, yes, I’m in favor of this going to Stage 3. And also I should mention that we have publicly stated and we maintain that the async using versus using await is not a blocking issue for us. I made clear what my very strong preferences are, but we stated that we will not block the other syntax choice, and we are still of that position.

-USA: Thank you, Mark. Assuming that I’m audible now, Ron, I think now might be the time to -- yeah. 
+USA: Thank you, Mark. Assuming that I’m audible now, Ron, I think now might be the time to -- yeah.

RBN: Yes. At this time, I would like to seek consensus for advancement to Stage 3.

@@ -872,7 +869,6 @@

USA: Thank you. And thank you to the note takers.

### Conclusion/Resolution

-* Regarding ‘Symbol.asyncDispose’ (current) vs ‘Symbol.disposeAsync’, consensus was to continue with `Symbol.asyncDispose` for the name. 
-* Conditional Advancement to Stage 3 pending outcome of investigation of ‘async using’ vs. ‘using await’ syntax. Condition to be resolved no later than the March plenary, with the currently proposed ‘using await’ syntax as the default choice if we don’t arrive at another conclusion. (For now, the proposal will stay in the Stage 2 section of the proposals repo, as that repo does not represent conditional advancement.) -* Following Stage 3 advancement, consensus is to merge the “Explicit Resource Management” and “Async Resource Management” proposals to simplify the work involved in reaching Stage 4. - +- Regarding ‘Symbol.asyncDispose’ (current) vs ‘Symbol.disposeAsync’, consensus was to continue with `Symbol.asyncDispose` for the name. +- Conditional Advancement to Stage 3 pending outcome of investigation of ‘async using’ vs. ‘using await’ syntax. Condition to be resolved no later than the March plenary, with the currently proposed ‘using await’ syntax as the default choice if we don’t arrive at another conclusion. (For now, the proposal will stay in the Stage 2 section of the proposals repo, as that repo does not represent conditional advancement.) +- Following Stage 3 advancement, consensus is to merge the “Explicit Resource Management” and “Async Resource Management” proposals to simplify the work involved in reaching Stage 4. diff --git a/meetings/2023-01/feb-02.md b/meetings/2023-01/feb-02.md index 5c9b62fd..1d33653a 100644 --- a/meetings/2023-01/feb-02.md +++ b/meetings/2023-01/feb-02.md @@ -4,7 +4,7 @@ **Remote attendees:** -``` +```text | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -83,7 +83,7 @@ MLS: So you want to get-you’re basically lying to get as close to intrinsics a JHD: Correct. And, Shu, I see your question. -SYG: Yes. One of the motivations that you said was you agree that it affects few develop, but it affects more users downstream. +SYG: Yes. 
One of the motivations that you said was -- you agree that it affects few developers, but it affects more users downstream.

JHD: Uh-huh.

@@ -117,7 +117,7 @@

JHD: That’s right.

YSV: I also recall this proposal initially came from the shadow realms proposal and the desire to be able to get unpolluted globals from, for example, an iframe to then modify them potentially in some way or to be able to use the unpolluted global in some way. And I’m wondering if maybe my memory of the goal of that proposal is different. I do see two potentially separate use cases here. One where overriding the get intrinsics with custom values -- I know the SES case and also your use case. I’m wondering if there is a more widespread use case we can fit this to better, or if in fact the way to go is what Shu is suggesting, a more bounded API. Those are the questions I got.

-JHD: As far as your question on the queue, how widespread is your technique, I can only answer the one package I use for this has 40 million downloads a week. I don’t know if anybody else uses this technique, and that’s the package I use for it. That gives you an idea of the scope and limit of that scope for me. I can’t speak for MM and crew, but I have no correlation with ShadowRealms for this. Without full object transfer ShadowRealm doesn’t do anything for me, however, with either just hidden intrinsics or with what I’ve currently got proposed, it would also work for ShadowRealm use cases because a new ShadowRealm would have the same capability and you can modify or restrain as you like. I agree with you and SYG that it would be a much smaller and more bounded set to just be hidden intrinsics, thing we can’t reach with property access.
+JHD: As far as your question on the queue, how widespread is your technique, I can only answer that the one package I use for this has 40 million downloads a week. I don’t know if anybody else uses this technique, and that’s the package I use for it.
That gives you an idea of the scope and limit of that scope for me. I can’t speak for MM and crew, but this has no connection to ShadowRealms for me. Without full object transfer, ShadowRealm doesn’t do anything for me; however, with either just hidden intrinsics or with what I’ve currently got proposed, it would also work for ShadowRealm use cases, because a new ShadowRealm would have the same capability and you can modify or restrain as you like. I agree with you and SYG that it would be a much smaller and more bounded set to just be hidden intrinsics, things we can’t reach with property access.

YSV: Right, just to clarify my ShadowRealm comment, maybe my memory is shaky here, but I recall when we orally discussed ShadowRealm and -- we decided to not make it transparent in the way you had originally intended, you said you needed an API. And part of my question is, are we still solving the same use case in this case?

@@ -143,21 +143,21 @@

YSV: Okay, thanks.

JHD: And personally, I would be very loath to tacitly endorse mutating globals, because people already do it with all the discouragement in the ecosystem, but I think that’s worth discussing separately, like Kevin said. I think SYG might be next.

-SYG: Yeah, I want to separate this cache some intrinsic at first ride in the Firefox on posted use case as a separate ones and exhaustively do this for every single intrinsic use case that ?? raised. I think the former technique is much wider spread, and I think the current language serves that use case very well, because it’s -- by its nature, you’re getting a few things. Like, I’m implementing -- like, I want to call the original dot map thing. I want to cache that. I think we need a new feature for that. But the exhaustively -- to exhaustively do this for every single intrinsic thing, it’s one of the problems that this is actually solving here, plus the hidden intrinsic thing. That seems like the missing capability for robustness that you want.
+SYG: Yeah, I want to separate this cache-some-intrinsics-at-first-run use case, in the Firefox one posted, as a separate one from the exhaustively-do-this-for-every-single-intrinsic use case that ?? raised. I think the former technique is much wider spread, and I think the current language serves that use case very well, because by its nature, you’re getting a few things. Like, I’m implementing -- like, I want to call the original dot map thing. I want to cache that. I don’t think we need a new feature for that. But to exhaustively do this for every single intrinsic thing -- that’s one of the problems that this is actually solving here, plus the hidden intrinsic thing. That seems like the missing capability for robustness that you want.

JHD: Right. I can actually ask -- Shu, you’ve implied and said some performance and memory concerns -- any memory concerns about the potential set of intrinsics being so large as to include everything. Is that only for the iteration side, or is that also for the retrieval side?

SYG: It is only for the retrieval side. Like the iteration side, because you have made the iteration return strings, that’s no longer an issue for the iteration side.
For the retrieval side, the problem is that every time you create a global, if you want the original intrinsics to be reachable via anything, via property access or via -- sorry, not via property access, via a special getIntrinsic function, we have to keep slots for that -- for every single intrinsic that you might want to get, so we keep the originals, because the normal ones that are gotten via property access could be overridden.

JHD: I assume that every implementation by and large has some sort of dirty bit where it knows if a built-in property has been modified or not. Does V8 have something similar?

SYG: It is not my understanding that any implementation has that. Why would you track that on properties? Like, these are just properties like any other properties.

JHD: I see. Okay, yeah, I guess I was assuming that that was some sort of optimization hint. But I mean, obviously I don’t know how these things are implemented. But my thinking just now had been, like, you’d only need to store those pointers for the things that had been modified, because you could just -- if you knew which had been modified and not, you could just do the lookup, the property lookup, because you know --

SYG: That seems way too complex a scheme to implement for this anyhow.

SYG: But the point is that we -- that the memory concern is this, like, we have to have slots for every single intrinsic, and that is what we don’t want, because that is a per-global cost, and especially on mobile, this is a big issue. Like, this is an issue that we’re going to have to also do something special with, because Temporal just adds so many things. But, you know, there’s really no way around that, and the use case for Temporal is kind of set in stone, everyone is convinced.
I’m trying to think of ways that could satisfy your use case without having to incur that cost.

MLS: Yeah, I just wanted to say that it sounds like we do a similar thing to what V8 does. When we create a global object, we create it from intrinsics and it’s kind of a special case. We do have some lazily created objects, based upon first access, for less-used things. But the process is kind of unique in initialization. So your reifying is also going to be some work for us as well.

@@ -177,7 +177,7 @@

BT: Before we move on to a new topic, I just wanted to quick -- just remind you

KG: This is just to say I like the design where you’re returning a string. In particular, for the patching case that we’re discussing, it’s a lot easier to just patch getIntrinsics and not have to worry about the thing that’s returning a string, because that iterator only actually gives you access to the string, so I like the design with the string iterator.

-DE: This is an interesting proposal idea. If the performance issues that SYG raised, both for lazy loading style implementations and for implementations of V8’s style can be worked out, then great, I’m not opposed to it.
But I think this is part of a more general need, and I think this need comes up in your libraries and you’re handling it, but it’s having code that is high integrity, code that you write that comprehensively closes over all of the original load time global environment. And this is code that’s extremely hard to write. In code that comes up in multiple different environments, like intrinsics of certain JavaScript engines, core kind of extension code in some systems, as well as systems like Node.js core or libraries like the ones you maintain. And sort of platform core code in other cases. In Bloomberg, we do sometimes use a realm that doesn’t have the ShadowRealm boundary for this kind of purpose. So overall, I think we need to think about some higher level mechanisms to solve this problem comprehensively, because we have lots of evidence from real vulnerabilities that such manual mechanisms, even when they do have access to the intrinsics through various means are error prone, and those errors result in kind of breaking the exact extraction that they’re trying to meet. So, yeah, not opposed to this moving forward, but if we’re trying to solve this problem, I would like us to think about some higher level solutions that may be partly tooling, may be partly thing that are outside of what we standardize. But it would be great if we had some kind of broader solution where you write normal looking code and it comprehensively becomes something that meets these kinds of goals. +DE: This is an interesting proposal idea. If the performance issues that SYG raised, both for lazy loading style implementations and for implementations of V8’s style can be worked out, then great, I’m not opposed to it. But I think this is part of a more general need, and I think this need comes up in your libraries and you’re handling it, but it’s having code that is high integrity, code that you write that comprehensively closes over all of the original load time global environment. 
And this is code that’s extremely hard to write. It’s code that comes up in multiple different environments, like intrinsics of certain JavaScript engines, core kind of extension code in some systems, as well as systems like Node.js core or libraries like the ones you maintain, and sort of platform core code in other cases. In Bloomberg, we do sometimes use a realm that doesn’t have the ShadowRealm boundary for this kind of purpose. So overall, I think we need to think about some higher level mechanisms to solve this problem comprehensively, because we have lots of evidence from real vulnerabilities that such manual mechanisms, even when they do have access to the intrinsics through various means, are error prone, and those errors result in kind of breaking the exact abstraction that they’re trying to provide. So, yeah, not opposed to this moving forward, but if we’re trying to solve this problem, I would like us to think about some higher level solutions that may be partly tooling, may be partly things that are outside of what we standardize. But it would be great if we had some kind of broader solution where you write normal-looking code and it comprehensively becomes something that meets these kinds of goals.

JHD: I think that would be great. I think that I’ve not sensed an appetite for solving that problem in the committee in the past, and I think that this proposal, which I think is independently motivated, as well as a number of others which I think are independently motivated, could actually combine quite nicely to address the problem you’re describing. But if there’s committee appetite for solving it holistically and having that be an acceptable motivation for these other proposals, that would be great. I think the tradeoff for the smaller set of intrinsics that SYG suggested would be not getting the desired DX to solve that problem. So I think it sounds like there’s a storage/memory tradeoff or whatever to be able to get that DX.
Because there’s definitely nothing ergonomic about caching globals on first access, and you have to know what they are.

@@ -197,9 +197,9 @@

JRL: This returns the original value, no matter what?

JHD: That’s the intention, yes, unless you replace the getIntrinsic function itself, of course.

JRL: Doesn’t that run up against the lazy loading issue? I’m sorry, I thought -- when I heard this earlier, I thought you said if you denied a value then getIntrinsic could not get it later on.

JHD: Currently if you want to deny something, you delete it off the global or off an object, right? With this proposal, you also will have to wrap the `getIntrinsic` function to deny it. As far as the lazy loading issue, I don’t know how that is implemented, but my assumption is that whatever sort of implicit secret getter is there when you try to access, I don’t know, Map for the first time or something, that that is actually what would be invoked when you try to get the Map intrinsic. So you don’t actually have to load Map until the first time somebody accesses it on the global or tries to retrieve it. Does that answer your question?
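The wrap-to-deny pattern JHD describes can only be sketched with a stand-in, since `getIntrinsic` is still a proposal; the stub, the intrinsic names, and the wrapper below are all hypothetical:

```javascript
// Hypothetical stand-in for the proposed global `getIntrinsic`; a real
// implementation would return original, unpatched intrinsics.
const originals = new Map([
  ["%Array.prototype.map%", Array.prototype.map],
  ["%Object.keys%", Object.keys],
]);
function getIntrinsicStub(name) {
  if (!originals.has(name)) throw new TypeError(`no such intrinsic: ${name}`);
  return originals.get(name);
}

// Denial works by wrapping the accessor, much as one deletes a property
// off the global today.
const denied = new Set(["%eval%"]);
function wrappedGetIntrinsic(name) {
  if (denied.has(name)) throw new TypeError(`intrinsic denied: ${name}`);
  return getIntrinsicStub(name);
}
```

Code holding the wrapper still reaches `%Array.prototype.map%`, but any request for a denied name fails, regardless of whether the underlying engine could lazily materialize it.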
JRL: Yeah, so that -- that clears up my question about, like -- I thought -- I could not see the value of this over just having things on the global. But if we can get access to the original regardless of being patched, then that makes it clear. @@ -209,13 +209,13 @@ JRL: So for -- so this -- I understand now. This makes me have to rethink of wha JHD: My understanding, just before SYG steps in to respond, my understanding is that the issue is not the storage of the object as much as it is the storage of all the pointers to the original objects. Because there’s, like, I don’t know, 1,000 intrinsics or something, so you’d need to store 1,000 intrinsic pointers per realm, whether you lazy loaded the thing it pointed to or not. -JRL: Isn’t that what is required by these semantics, where you can modify 'foo' and still get the intrinsic 'foo'? You got to have the ‘foo’ pointer somewhere. +JRL: Isn’t that what is required by these semantics, where you can modify 'foo' and still get the intrinsic 'foo'? You got to have the ‘foo’ pointer somewhere. JHD: Yes, it is. That’s the tradeoff, that the complete use case requires that. JRL: Okay. -JHD: My understanding of SYG’s pushback is could we sacrifice meeting some of that use case by only providing the hidden intrinsics and then the tradeoff is that instead of storing 1,000 pointers per realm, you only have to store 10 to 20 pointers per realm. +JHD: My understanding of SYG’s pushback is could we sacrifice meeting some of that use case by only providing the hidden intrinsics and then the tradeoff is that instead of storing 1,000 pointers per realm, you only have to store 10 to 20 pointers per realm. JRL: Okay. @@ -225,14 +225,13 @@ SYG: This is all predicated -- yeah, I think your understanding is correct, this JRL: Thank you for clearing this up. -BT: Just to note, you’re down to a little bit less than 15 -minutes. +BT: Just to note, you’re down to a little bit less than 15 minutes. JHD: That’s fine. 
So, RBN, your item actually deals with the next part of the presentation, I’d love to go on.

RBN: That’s fine. I asked in matrix whether or not there were more slides because I’m not seeing them, so I wasn’t sure the presentation ended.

-JHD: There are more slides and I think Hax had a item that relate to the the naming, but I wanted to address all of the other items before we got into that. 
+JHD: There are more slides, and I think Hax had an item that relates to the naming, but I wanted to address all of the other items before we got into that.

RBN: I’m fine with waiting.

JHD: Awesome. Thank you. So that’s all of it, you know, modulo namings and so

JHD: Essentially if we had two functions, then we’ve got either two global functions -- adding two globals instead of one is less ideal -- or we’ve got a global function that has a property on it, like, for example, `getIntrinsic` and `getIntrinsic.keys`. That would also be fine, but it’s kind of weird to have a non-constructor function that has an own property on it. We could do it, it’s just there’s no precedent for it. It’s not weird in JavaScript in general, it’s just kind of weird in 262. So in the PR that adds in enumeration, the current thing I went with is a function that when it gets a string, it’s retrieval, and when it gets no argument, it’s iteration. It is completely natural to have an “eww, gross, don’t overload one function to do two things” reaction to that. The alternative, as I see it, is either two functions or the expando property thing I mentioned.

-JHD: at this stage, yeah, I wanted to get thoughts. The specific names are not super important, `getIntrinsic`, `getIntrinsicNames`, that can be bikeshedded at any time. It’s more the one function or two, and then if it’s two functions, are both global or is one chained off the other, something like that? Or is there an alternative suggestion like a namespace option that hasn’t been considered.
I’d love to hear about it, and that’s where we can go to RBN.
+JHD: At this stage, yeah, I wanted to get thoughts. The specific names are not super important -- `getIntrinsic`, `getIntrinsicNames` -- that can be bikeshedded at any time. It’s more the one function or two, and then if it’s two functions, are both global or is one chained off the other, something like that? Or is there an alternative suggestion, like a namespace option, that hasn’t been considered? I’d love to hear about it, and that’s where we can go to RBN.

RBN: So, to my topic -- it kind of covers two things that are slightly related, but if I need to split them up, that’s fine. When I see getIntrinsic -- if you pass it no arguments, then it gives you an iterator -- that is a bit odd, especially since if you call getOwnPropertyDescriptor with no arguments, that doesn’t give you the names of all the property descriptors. We have a separate name for that. So it would be more consistent with the JavaScript naming scheme for the rest of the API to keep this as a separate method that produces those names. And then my second part of that topic was related to -- and I mentioned this in the matrix as well -- there have been numerous discussions over the years about adding other things to `Reflect`, and it’s always come back that no, `Reflect` should only ever contain the things that are related to proxy operations, which I find unfortunate, because reflect has such a broad meaning that generally means reflection, and is often used for those types of things -- for more than just reflecting of -- or intercepting proxies or providing default behavior for those. So if we were to perhaps relax that restriction that we’ve put on `Reflect` over the years, then this would be the place that you would put that.

@@ -299,11 +298,11 @@

JHD: Thank you SYG and YSV.
That’s my action item is make those three issues a ### Conclusion/Resolution -* Remaining at stage 1 +- Remaining at stage 1 ## Import Assertions -Presenter: Nicolò Ribaudo (NRO) +Presenter: Nicolò Ribaudo (NRO) - [proposal](https://github.com/tc39/proposal-import-assertions/) - [slides](https://docs.google.com/presentation/d/1c5y-t-O3wrMEQWb92P1xL7PRcNmFZOOK2-BmC5FUkE8/edit) @@ -338,7 +337,7 @@ BT: I have a quick question. I think you want Stage 2 with the scoping restricti DE: So I would be okay with either Stage 2 or Stage 3. Honestly in the lead up to this discussion, I was kind of waffling between them. So that’s why ultimately I don’t think the champion group should be kind of burdened with making these kind of process calls. I think things should somehow be clear cut. But they’re kind of not. And as long as we agree on what the scope of what we’re investigating is and the timeline and we try to communicate that externally, I think we could consider this either Stage 2 or Stage 3. -BT: Okay. I guess you weren’t making the point that Stage 2 is better for messaging. +BT: Okay. I guess you weren’t making the point that Stage 2 is better for messaging. DE: No. @@ -392,9 +391,9 @@ DE: Sure. So I guess I would kind of like to dig into what is insufficient about MLS: Let’s continue talking about this proposal. I don’t want to monopolize time here. There are other people that are on the queue and let’s move on. -BT: We have a point of view from YSV. +BT: We have a point of view from YSV. -YSV: That was actually the point of view I was going to make. We were veering to a previous agenda item and not talk talking about this one. +YSV: That was actually the point of view I was going to make. We were veering to a previous agenda item and not talk talking about this one. DE: We can discuss that later. But I really think that that formed part of the solution. Any way, we’re done with that topic. 
@@ -494,11 +493,11 @@ DE: There’s been a long-running argument about whether GB: Specifically I mean just as far as the – I have a delay go ahead. -DE: Just hear you cutting in and out. I thought you were done. So you can finish. +DE: Just hear you cutting in and out. I thought you were done. So you can finish. GB: Apologies. I don’t have a great connection at the moment. I’m on mobile. I do feel it’s worth considering that. I also want to be clear that when the discussion is brought up about being able to unify on the syntax, I do think it’s worth still considering import reflection on exactly how this proposal goes too much and that it can still exist as a proposal side by side with this one. -DE: If you could fix up your comments in the notes so we can all understand you, that would be great. Then I can catch up on it, what you were saying. One particular question is whether unknown attributes or assertions are ignored. There are clear examples both for attributes that drive the module’s interpretation and for assertions why nice to be ignored and for example for lazy module loading or for a checksum that you’re checking, you kind of want there to be a fall back behavior where it’s ignored. But for type you definitely don’t want it to be ignored if the system didn’t know about the typed attribute. I think that’s something to work out but I don’t think it’s quite linked to the relaxation. It ties in and it’s not the first time this question appears. I agree that will be good to discuss. +DE: If you could fix up your comments in the notes so we can all understand you, that would be great. Then I can catch up on it, what you were saying. One particular question is whether unknown attributes or assertions are ignored. 
There are clear examples both for attributes that drive the module’s interpretation and for assertions why it would be nice for them to be ignored; for example, for lazy module loading or for a checksum that you’re checking, you kind of want there to be a fallback behavior where it’s ignored. But for type you definitely don’t want it to be ignored if the system didn’t know about the type attribute. I think that’s something to work out but I don’t think it’s quite linked to the relaxation. It ties in and it’s not the first time this question appears. I agree that it will be good to discuss.

GB: That’s all I wanted to say.

@@ -522,13 +521,13 @@ BT: All right. So I think we have consensus on Stage 2. Congratulations?

### Conclusion/Resolution

-* Building off of [earlier discussion of import assertions this meeting](jan-31.md#problems-with-import-assertions-for-module-types-and-a-possible-general-solution--downgrade-to-stage-2), the committee reached the shared understanding that we should revise this proposal to meet [the requirements of the web platform](https://github.com/whatwg/html/issues/7233) that the module type drive its interpretation.
+- Building off of [earlier discussion of import assertions this meeting](jan-31.md#problems-with-import-assertions-for-module-types-and-a-possible-general-solution--downgrade-to-stage-2), the committee reached the shared understanding that we should revise this proposal to meet [the requirements of the web platform](https://github.com/whatwg/html/issues/7233) that the module type drive its interpretation.

-* To reflect the scope of expected future changes, the committee reached consensus to demote the proposal to Stage 2.
+- To reflect the scope of expected future changes, the committee reached consensus to demote the proposal to Stage 2.
-* The champion group plans to develop this proposal further over the next 2-4 months, with a goal to come back to committee with a proposal for Stage 3, based on iterating on: - * The syntax (e.g., which keyword(s) are used) - * The semantics (e.g., what forms part of the cache key) +- The champion group plans to develop this proposal further over the next 2-4 months, with a goal to come back to committee with a proposal for Stage 3, based on iterating on: + - The syntax (e.g., which keyword(s) are used) + - The semantics (e.g., what forms part of the cache key) ## Decorator `context.access` object API @@ -646,8 +645,8 @@ RBN: I appreciate that, thank you very much. ### Conclusion/Resolution -* Consensus for target moving to be the first param rather than receiver -* Consensus for adding a `has` method +- Consensus for target moving to be the first param rather than receiver +- Consensus for adding a `has` method ## Temporal Stage 3 update continuation @@ -680,8 +679,8 @@ USA: All right. That was quick and nice. Next up we have DRR and RBN with decora ### Conclusion/Resolution -* Consensus on merging https://github.com/tc39/proposal-temporal/pull/2447 -* https://github.com/tc39/proposal-temporal/pull/2479 will be presented again in the following meeting +- Consensus on merging https://github.com/tc39/proposal-temporal/pull/2447 +- https://github.com/tc39/proposal-temporal/pull/2479 will be presented again in the following meeting ## Decorators and export Ordering continuation @@ -714,7 +713,7 @@ SYG: Is that speculative or actual? You have partners that are going to do this? RBN: In cases where it is technically feasible, that is actual and that is a specific constraint – I shouldn’t say constraint. That is a specific capability that we pursued since the beginning. 
I know early on when YK was the champion, we were looking at making sure when the context – when we eventually looked at the context object, it might have a `Symbol.toStringTag` or something to use to differentiate the things to help with the overloading of the legacy to native decorator case. This overloading thing is something we have been pursuing for a while. We suffered losses with this in that when it came to engine specific requirements on how fields work, we needed to – and introduced the access keyword we knew this is a case where we cannot support that migration. For cases like TypeScript legacy decorators, if you decorate a getter or setter we gave you the entangled get/set descriptor that gave you both. That is something that we were intending to support, but the current spec changes made that not feasible; that’s one of the motivations behind the group and auto and there is change on the user side that would allow an existing decorator that supported both to be able to differentiate either by looking at the argument list, because every legacy decorator – every legacy class element decorator takes three arguments versus a native decorator which always takes two arguments. So there is a way to differentiate between the two and that is something that we said since the beginning.

-USA: We have a queue. But before we move on with the queue, this is already over time. But we can extend until :55 because we have time. Feel free to go on.
+USA: We have a queue. But before we move on with the queue, this is already over time. But we can extend until :55 because we have time. Feel free to go on.

RBN: Thank you.
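RBN's arity-based distinction — every legacy class element decorator is called with three arguments (target, key, descriptor), while a native decorator is called with two (value, context) — can be sketched as a dual-mode decorator. This is an illustrative pattern, not code from the proposal; the `logged` decorator and the hand-simulated invocations below are assumptions:

```javascript
// Sketch of a decorator that works under both the legacy (experimental)
// protocol and the native one, detected by argument count as RBN describes.
function logged(...args) {
  if (args.length === 3) {
    // Legacy class-element decorator: (target, key, descriptor)
    const [, key, descriptor] = args;
    const original = descriptor.value;
    descriptor.value = function (...fnArgs) {
      console.log(`calling ${key}`);
      return original.apply(this, fnArgs);
    };
    return descriptor;
  }
  // Native decorator: (value, context)
  const [value, context] = args;
  return function (...fnArgs) {
    console.log(`calling ${context.name}`);
    return value.apply(this, fnArgs);
  };
}

// Simulated native invocation (two arguments: value, context):
const asNative = logged(function double(x) { return x * 2; },
                        { kind: "method", name: "double" });
console.log(asNative(21)); // logs "calling double", then 42

// Simulated legacy invocation (three arguments: target, key, descriptor):
const desc = { value: (x) => x + 1 };
logged({}, "inc", desc);
console.log(desc.value(41)); // logs "calling inc", then 42
```

The same detection would work when the decorator is applied with `@logged` syntax, since the two protocols never share an arity for class element decorators.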
First up you have HAX who says I support option 1 and then there’s WMS who says +1 to JHD’s point and Richard who says that I share JHD’s discomfort with option 1 for essentially similar reasons. At the same time, we’re running out of time. That’s all.

-DRR: Sounds like we have people who have a preference for option 1, however, also people who are sharing their general discomfort for option 1. But who would prefer not to block on it. You know, I am not strongly in favor of providing every way to do something, but considering an exclusive order sort of situation where you must put it before or after, you know, a good compromise is one where everyone has an option but is maybe unhappy about the other result. So perhaps option 2 is a direction we can pursue. So let me ask this: Do we have consensus for option 2 where decorators are placed before or after the export keyword or export default but must be one or the other?
+DRR: Sounds like we have people who have a preference for option 1, however, also people who are sharing their general discomfort for option 1. But who would prefer not to block on it. You know, I am not strongly in favor of providing every way to do something, but considering an exclusive order sort of situation where you must put it before or after, you know, a good compromise is one where everyone has an option but is maybe unhappy about the other result. So perhaps option 2 is a direction we can pursue. So let me ask this: Do we have consensus for option 2 where decorators are placed before or after the export keyword or export default but must be one or the other?

USA: So far we have support from DE on option 2 and then NRO says option 2 with syntax error and then option 1, then option 3, and then option 2 without syntax error, if there is a preference. So I guess that’s in favor of option 2. But with the restriction that you proposed. JHD says begrudgingly consensus on option 2 and RHB says +1 support for option 2.
So far only positive for option 2 with the restriction that you propose. That’s all. WH says they support option 2. I think it’s safe to say you have consensus on option 2.

@@ -846,7 +845,7 @@ RBN: Yes, please.

JHD: My understanding is that nobody wants to try to elide the export keyword from the toString representation. So I feel like we can either pick that decorators are never included, like, decorators on exported things are never included in the toString or we could pick decorators are only included in the toString when they appear after export, although that seems weird if both positions are allowed. And so it feels like either of those two options would satisfy my understanding of MM’s position. Of course, MM can clarify. And I don’t think that that decision should block option 2. I think that’s just something we should figure out in an issue.

-RBN: One thing I was going to bring up was that I had a suggestion I had been discussing with other folks, with Daniel and with KHG offline, which is if we had gone with option 1 and had decided that we didn’t want to include export in the toString, we could have made the distinction that decorators that come before export would not be included. If you decorated a class that doesn’t have the export declaration they would be included. If you are specifically tailoring the code to use the eval of a toString case, that is a niche case as it is, and the step of having an export declaration for the binding as a separate statement is not a far stretch if you are again trying to custom tailor for the environment. It does feel a bit weird to have a distinction if we allow both but in the same vein allowing both would also make it feasible to have a specific case where you are custom tailoring your code to work with the eval case.
That said, I still find evalling a toString to be an unsound and unreliable practice, even though it has – it does exist in the ecosystem, I have seen it used well for performance and other things and functions, but I also believe that forthcoming proposals or in-progress proposals, things like module blocks, might be potentially a better way to do that as well because it doesn’t require strings and worrying about the CSP, for example, being an issue for making that reliable to use regularly. So I think it might be weird but it also does, like you said, make it so that option 2 is still viable.

+RBN: One thing I was going to bring up was that I had a suggestion I had been discussing with other folks, with Daniel and with KHG offline, which is if we had gone with option 1 and had decided that we didn’t want to include export in the toString, we could have made the distinction that decorators that come before export would not be included. If you decorated a class that doesn’t have the export declaration they would be included. If you are specifically tailoring the code to use the eval of a toString case, that is a niche case as it is, and the step of having an export declaration for the binding as a separate statement is not a far stretch if you are again trying to custom tailor for the environment.
That said, I still find evalling a toString to be an unsound and unreliable practice even though it has – it does exist in the ecosystem, I have seen it used well for performance and other things and functions but I also believe that forthcoming proposals or in progress proposals things like module blocks might be potentially a better way to do that as well because it doesn’t require strings and worrying about the CSP, for example, being an issue for making that reliable to use regularly. So I think it might be weird but it also does, like you said, make it so that option 2 is still viable. DRR: Okay. Any responses to that? @@ -885,8 +884,8 @@ LEO: Just want to add in the minutes, just want to make sure that we have consen ### Conclusion/Resolution -* Consensus on allowing decorators before the `export` keyword in addition to after the `export` or `export default` keywords, but with a Syntax Error if you specify decorators in both positions (i.e., exclusively one position, or the other, but not both) on a single declaration. Decorators must not come between the `export` and `default` keywords if both are present on the exported declaration. -* Consensus on the source text cutoff for class declarations remaining only the ClassDeclaration production. Decorators before `export` will not be included in Function.prototype.toString(). Decorators after `export` or `export default`, or on a non-exported class declaration or class expression, will be included in Function.prototype.toString(). +- Consensus on allowing decorators before the `export` keyword in addition to after the `export` or `export default` keywords, but with a Syntax Error if you specify decorators in both positions (i.e., exclusively one position, or the other, but not both) on a single declaration. Decorators must not come between the `export` and `default` keywords if both are present on the exported declaration. 
+- Consensus on the source text cutoff for class declarations remaining only the ClassDeclaration production. Decorators before `export` will not be included in Function.prototype.toString(). Decorators after `export` or `export default`, or on a non-exported class declaration or class expression, will be included in Function.prototype.toString(). ## Feedback on transcription @@ -904,11 +903,10 @@ DE: So we have the Ecma GA meeting coming up in June. And you don’t actually h TC (transcriptionist): I don’t think so. The terminology through the days. As things go on I’m learning the terminology -* Many notes in the chat in support of the transcriptionist +- Many notes in the chat in support of the transcriptionist -* A round of applause for the transcriptionist +- A round of applause for the transcriptionist -#### Conclusion/Resolution +### Conclusion/Resolution Widespread support for using human, rather than machine transcription, given the inaccuracies in current machine transcription. Further feedback will be collected offline/over time to inform the decision of whether to continue transcription in 2024. - diff --git a/meetings/2023-01/jan-30.md b/meetings/2023-01/jan-30.md index e47754e6..f883ddab 100644 --- a/meetings/2023-01/jan-30.md +++ b/meetings/2023-01/jan-30.md @@ -5,7 +5,7 @@ **Remote attendees:** -``` +```text | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -49,7 +49,7 @@ Presenter: Ujjwal Sharma (USA) USA: The committee agreed by consensus to work with a stenographer, with no objections or opt-outs from note-taking, details at https://github.com/tc39/Reflector/issues/460 -USA: the next meeting is 21st to the 23rd of March and it will be a hybrid meeting. Thank you to everyone who give the feedback in the survey. We have you quorum ahead of time. We have to approve the meeting minutes. Waiting for objections. I assume you approve the last meeting minutes. 
Does anyone have any objections against the current agenda? No as well. All right.

+USA: The next meeting is the 21st to the 23rd of March and it will be a hybrid meeting. Thank you to everyone who gave feedback in the survey. We have quorum ahead of time. We have to approve the meeting minutes. Waiting for objections. I assume you approve the last meeting minutes. Does anyone have any objections against the current agenda? No as well. All right.

## Report from the TC39 Secretariat

@@ -67,15 +67,15 @@ IS: Okay, good. Basically I’m not going to repeat it. These are the document t

IS: This is the 001 ECMA statistic for 2022 and the other one is also just a mirror, the agenda of this meeting. We don’t have more documents here for the General Assembly. Why is this interesting for TC39 members? Because we have here this double documentation. One is the official one on the TC39 website. The other one TC39 is using basically for internal purposes, and here basically this is the GitHub and also the other two that we are using. I go now to the next one. Now the next one, this is new. And this came from a discussion that I had with Patrick LUTE and I have already reported that I’m always getting, I would say, repeated complaints about our long meeting minutes. And obviously the reasons for the long meeting minutes are, first of all, you know, the long technical notes. And those ECMA members who are not interested in the detailed work of TC39, they are not interested in the technical notes, and many of them have the feeling that the first part, which would be the summary part, is not really informative enough. So Patrick suggested that we should improve the part that we are outputting, what kind of contributions were presented to the TC39 meeting. We are publishing all the contributions that we have received and we can collect after the meeting, you know, the different slides. This is always published and stored as an ECMA document.
But here the issue is why don’t we ask each Contribution to provide us a very short one part – one Paragraph of short summary what their contribution is About. And then we can also from the secretariat maybe also Copy the resolution, what has been then decided on this Contribution. So here the request would be if everybody agrees with It, pleased to provide Patrick or to myself a paragraph Of your contribution and then we could include it into The next meeting minutes in order to improve a little Bit the quality, general quality of our minutes that we Are preparing I go to the next one. The next one is here. The membership changes. Basically there were a number of companies which were Formally approved by the general assembly meeting in December that could already participate and then some Of them indeed participate in the TC39 meeting. The only thing was that I could not formally vote. We didn’t have a formal vote. Associate member is ordinary member and then we have Received this withdraw letter in the secretarial secretariate in November ‘22 from paypal. I have to tell you and remind you because this is a Repeated mistake all the time that if any company wants To withdraw from ECMA, then please, please do do do it before The 1st of October. If you don’t do it before the 1st of October, then Automatically it prolongs for one year and then we have The discussion, you know, is it too late? Is it not too late? Are you going to pay? Are you not going to pay? Et cetera, et cetera. Fortunately it is not my business anymore. But Patrick has to deal with it. But we have the same situation also now with paypal. So for the next year, if somebody wants to quit from ECMA, which of course, you know, I would not wish and Recommend to you, but if it has to be the case, then Please do it before the 1st of October. So this is regarding this slide and let me go here. The next slide is recent TC39 meeting participation. 
I go immediately to the second page (slide 11) which I don’t know Why – maybe if it is in presentation mode, I don’t Know. It is on the next page. I don’t know why. The point is I can also tell it verbally, you know, so It was nothing dramatic than what we have seen before. We have a steady participation in the meetings. More or less with the same number. Nothing really exciting. I haven’t the slightest idea why I cannot show it to You. I go to the next page which is about the information That I have taken from the ECMA document 002-2023. Regarding the statistics. On this slide, you see the ECMA website, the entire Website. -RPR: We have one minute to go I would say. +RPR: We have one minute to go I would say. IS: How many? RPR: One or two? -IS: I’ll use two. +IS: I’ll use two. -IS: So I would say please read it. Next one or so, this is the ECMA website page access 2022. The other one was 2021. I always take it two years you can compare it. It is basically very similar the two years. The same is also true now I am on the next one for 2021. Regarding the ECMA PDF standard down loads. They are very, very similar to the ‘22 figures which is On the next slide and you can see here in 2021, the Share was 58% of all of the down loads ECMA TC39 was Dominated in 2021 but the same is also true for 2022. Now it is a little bit less 55%. But we are still dominating here the scheme. Here these are the access documents. The first access document, so HTML access, I have only Taken the four last year additions for ECMA 262 and for ECMA 402 and you can see also there are significant Number approximately a factor of four between the down Downloads and between the access number. So you can also read here the statistics by yourself. Next slide is the plenary schedule TC39. You know it also from the invitation to this meeting. So I can switch. Also regarding the rules that has been also published On the GIT hub and it is just taking repetition and Here it is coming five or six paces. 
It is just repetition regarding the ISO renewal of the Two standard. Here I am not going to read it through again. I just – because I have already presented also at the Last meeting in a little bit also I said it here and so I am not going to present it. The two menus are also not terribly new and important Because we have already seen it. So this is for the next year’s general assembly meeting Meeting. One in Japan in Tokyo and the other one in December in The U.S. There is no place for the ExeCom meeting. It is my fault. So I don’t know. So ExeCom meeting and then regarding the – here are the ExeCom meeting and the next one, the last one I have Announced already in the December meeting who is for President and vice-president and treasurer and ExeCom Member for 2023. All approved. No surprises. Congratulations to them. And SAMINA has been approved by the new secretarial secretary General. That’s it. Thank you very much. +IS: So I would say please read it. Next one or so, this is the ECMA website page access 2022. The other one was 2021. I always take it two years you can compare it. It is basically very similar the two years. The same is also true now I am on the next one for 2021. Regarding the ECMA PDF standard down loads. They are very, very similar to the ‘22 figures which is On the next slide and you can see here in 2021, the Share was 58% of all of the down loads ECMA TC39 was Dominated in 2021 but the same is also true for 2022. Now it is a little bit less 55%. But we are still dominating here the scheme. Here these are the access documents. The first access document, so HTML access, I have only Taken the four last year additions for ECMA 262 and for ECMA 402 and you can see also there are significant Number approximately a factor of four between the down Downloads and between the access number. So you can also read here the statistics by yourself. Next slide is the plenary schedule TC39. You know it also from the invitation to this meeting. 
So I can switch. Also regarding the rules that has been also published On the GIT hub and it is just taking repetition and Here it is coming five or six paces. It is just repetition regarding the ISO renewal of the Two standard. Here I am not going to read it through again. I just – because I have already presented also at the Last meeting in a little bit also I said it here and so I am not going to present it. The two menus are also not terribly new and important Because we have already seen it. So this is for the next year’s general assembly meeting Meeting. One in Japan in Tokyo and the other one in December in The U.S. There is no place for the ExeCom meeting. It is my fault. So I don’t know. So ExeCom meeting and then regarding the – here are the ExeCom meeting and the next one, the last one I have Announced already in the December meeting who is for President and vice-president and treasurer and ExeCom Member for 2023. All approved. No surprises. Congratulations to them. And SAMINA has been approved by the new secretarial secretary General. That’s it. Thank you very much. > **Note** > The presentation is fully included in the slides (tc39/2023/002.pdf) and also as audio/video in tc39/2023/004.mp4. @@ -86,17 +86,17 @@ DE: Yeah. I wanted to speak to IS suggestion that we capture the summaries. I wa DE: If the secretary doesn’t have time for that, it would Be welcome if somebody else in committee did this. LEO did this in past meetings. We’re behind in the summaries. I think it would be welcome. I think the pressure taken off from transcribing should give us all a bit more energy to do this important task of making accessible summaries at meetings. -IS: So if you give us the document, you are talking about, of course, then we can also do it. +IS: So if you give us the document, you are talking about, of course, then we can also do it. DE: : You already have the document. This is the minutes that we give you every meeting. 
Every heading will have a summary at the bottom. You can look at all those different minutes documents That you have submitted to the filer. They all have summaries and it is just a matter of collating them. -IS: They’re in the technical notes? +IS: They’re in the technical notes? -DE: Yes. We can cut it short between you and myself. If I know, we can also take it ourself. Because we have to take out also the summary, the decision. I’m talking about the decision but here – +DE: Yes. We can cut it short between you and myself. If I know, we can also take it ourself. Because we have to take out also the summary, the decision. I’m talking about the decision but here – ??: That’s right. The decisions are all listed in a section in the notes For each particular topic. -IS: I have no problem with the decision part that is always at the end. I have problems with the summary of the contribution,. +IS: I have no problem with the decision part that is always at the end. I have problems with the summary of the contribution,. DE: So to summarize the contribution, for each Contribution Contribution there’s link for supporting documents. I think we can provide the links and the authors and List the conclusion. That would be a useful start for summary document. Of course, more useful to have a summary of the Discussion. But that’s more involved. You have a link, a reference to the contribution. @@ -105,7 +105,7 @@ IS: Okay. So then I suggest that we cut it short outside of this meeting. In ord DE: I agree. But I want to emphasize it would definitely be useful to have more detailed summaries of the meetings. If anybody wants to get involved in that, then please, You know, be in touch. -IS: Okay,. So contributors are always welcome for sure. Thank you. +IS: Okay,. So contributors are always welcome for sure. Thank you. RPR: Thank you for this. Let’s move on. 
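The collation DE describes — every topic heading in the notes has a conclusion section at the bottom, so producing the summary document "is just a matter of collating them" — could be sketched as a small script. The `### Conclusion/…` heading names follow the conventions used in these notes; the function itself is hypothetical, not an existing tool:

```javascript
// Hypothetical sketch: pull each "Conclusion/..." section out of a notes
// file so the conclusions can be collated into a summary document.
function collectConclusions(markdown) {
  const sections = [];
  let current = null;
  for (const line of markdown.split("\n")) {
    const heading = line.match(/^(#+)\s+(.*)$/);
    if (heading) {
      // Any new heading ends the conclusion section we were collecting.
      if (current) { sections.push(current); current = null; }
      if (/^Conclusion/.test(heading[2])) {
        current = { heading: heading[2], body: [] };
      }
    } else if (current) {
      current.body.push(line);
    }
  }
  if (current) sections.push(current);
  return sections.map(s => ({ heading: s.heading, body: s.body.join("\n").trim() }));
}
```

Running this over each day's notes and concatenating the results would give the per-topic decision list IS asks for, without requiring presenters to write anything new.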
@@ -117,8 +117,7 @@ RPR: So the next item I’d like to give some reminders about Some things on the ??: One more thing, we also need a TG3 chair as reminded In the chat. -??: Yes. And MF points out we’re looking for a TG3 -Chair and someone who likes security please do so. That’s all from me. Next up, we have Kevin GIBBONS with ECMA status update Or maybe someone else from the group. +??: Yes. And MF points out we’re looking for a TG3 Chair and someone who likes security please do so. That’s all from me. Next up, we have Kevin GIBBONS with ECMA status update Or maybe someone else from the group. ## ECMA262 Status Updates @@ -126,18 +125,18 @@ Presenter: Kevin Gibbons (KG) - [slides](https://docs.google.com/presentation/d/1kcZOA8jUq-VMUv-NN89uXb-Jpl_wDmV_hyaB4ZJG_xU/) -KG: So this is the usual editor’s update and going over editorial And normative changes. Very little in the way of notable changes. We are of course continuing to make our usually metrics at Clean up and consistency. The only change that is worth calling to the attention of Plenary was this one [2681](https://github.com/tc39/ecma262/pull/2681), which is a tweak to how the “code evaluation state” is trapped. This is relevant for the machinery, for generators and AC Functions and not relevant for those not looking at those and It is a nontrivial change to that machinery. If you previously looked at it and been confused hopefully the Machinery is more sensible now and the other improvements in The type line for those as well. And then normative changes we’ve landed: +KG: So this is the usual editor’s update and going over editorial And normative changes. Very little in the way of notable changes. We are of course continuing to make our usually metrics at Clean up and consistency. The only change that is worth calling to the attention of Plenary was this one [2681](https://github.com/tc39/ecma262/pull/2681), which is a tweak to how the “code evaluation state” is trapped. 
This is relevant for the machinery for generators and async functions, and not relevant for those not looking at those, and it is a nontrivial change to that machinery. If you previously looked at it and been confused, hopefully the machinery is more sensible now, and there are other improvements in the pipeline for those as well. And then normative changes we’ve landed:

KG: The first is this [2819](https://github.com/tc39/ecma262/pull/2819), that is a tweak to the mechanics of the generators that we got consensus for at the previous meeting or the one before, I forget which. Possibly the one before that. These all got consensus recently; none was outstanding for long. 2819 just landed and shipped in a couple of – [2905](https://github.com/tc39/ecma262/pull/2905) is not actually a normative change. It was this change to the way that the module importing machinery is wired. This is in order to make it easier to do some of the module related changes that we have coming through proposals. And there’s a change for integration on the HTML side but no actual immediate normative for – and to [2973](https://github.com/tc39/ecma262/pull/2973) is this sort of web reality change that in the atomics machinery allows browsers to optionally make timeouts somewhat larger as part of Spectre mitigations. This is something that some browsers are already doing and many browsers feel they need to do. No notable other sorts of changes to the specification or environment since then. And then in terms of upcoming work, basically the same list, I don’t believe we have added anything to this (slide 5). I’m not going to go through it again. But just a note we are still working on refactoring a bunch of machinery for clarity and consistency. That’s all we had in terms of the editor update.
### Conclusion/Decision - Normative changes: - - #2819: Avoid mostly-redundant await in async yield* - - #2905: Layering: Add HostLoadImportedModule hook - - #2973: Allow implementations to pad timeouts in SuspendAgent + - #2819: Avoid mostly-redundant await in async yield* + - #2905: Layering: Add HostLoadImportedModule hook + - #2973: Allow implementations to pad timeouts in SuspendAgent - Editorial changes: - - #2681: Use Abstract Closure to set the code eval state + - #2681: Use Abstract Closure to set the code eval state ## ECMA402 Status Updates @@ -145,13 +144,13 @@ Presenter: Ujjwal Sharma (USA) - [pull request](https://github.com/tc39/ecma402/pull/729) -USA: So hello everyone. I wouldn’t take a lot of your time. And get right to the point. Last meeting if you remember, we presented a couple of Normative issues for for for approval and this one wasn’t approved Because of the creation of this request #729 and the meeting itself Itself. Took a while for us to get around to it. But this has been reviewed by the TG2 and the implementers Have confirmed that this is a good change. So I would like to ask for committee consensus on this one. +USA: So hello everyone. I wouldn’t take a lot of your time. And get right to the point. Last meeting if you remember, we presented a couple of Normative issues for for for approval and this one wasn’t approved Because of the creation of this request #729 and the meeting itself Itself. Took a while for us to get around to it. But this has been reviewed by the TG2 and the implementers Have confirmed that this is a good change. So I would like to ask for committee consensus on this one. RPR: A point of order from DE. DE: Do we have a conclusion for the previous topic? For KG’s update? -KG: No. I don’t think there’s ever a conclusion for those updates. I mean, it’s their updates. +KG: No. I don’t think there’s ever a conclusion for those updates. I mean, it’s their updates. ??: Just FYI. 
@@ -165,9 +164,9 @@ KG: I can put something in the notes. I don’t think it is – there’s much t

DE: Great, thanks.

-USA: Apart from that, nothing to add. Nothing on the queue. So I take it that folks are not against the change. It changes the error handling sort of snippet here in ECMA to Use the correct starting year that is one not negative zero.
+USA: Apart from that, nothing to add. Nothing on the queue. So I take it that folks are not against the change. It changes the error handling snippet here in ECMA-402 to use the correct starting year, which is one, not negative zero.

-RPR: Thank you. DLM is Plus one on the change. And the queue is empty.
+RPR: Thank you. DLM is plus one on the change. And the queue is empty.

USA: Perfect. Thank you all.

@@ -177,7 +176,7 @@ USA: I could add that to the notes.

??: At the end of each item, because the transcriptionist is giving us, obviously, the play-by-play, please can the presenter write up the conclusion with the main points. This is the usual section that we have in the notes where it says "conclusion".

-??: I’d ask that it include brief rational and discussion point Points that were especially critical.
+??: I’d ask that it include brief rationale and discussion points that were especially critical.

??: Thank you.

@@ -203,14 +202,14 @@ RPR: Thank you.

Presenter: Philip Chimento (PFC)

-PFC: We don’t have slides. We prepared a couple of paragraphs which I will read out and I can paste these in the notes. If there are no questions, they will be the conclusions. They’re very short.
+PFC: We don’t have slides. We prepared a couple of paragraphs which I will read out, and I can paste these in the notes. If there are no questions, they will be the conclusions. They’re very short.

### Conclusion/Decision

For stage 3 proposals, we now have tests for `isWellFormed`. We've had recent PRs making progress on coverage of `Array.fromAsync`, `RegExp` modifiers, `Temporal`, `Intl.NumberFormat` V3, and `Intl.DurationFormat`.
We'd love help on others; some of them already have volunteers, so ask in the "TC39 Test262 Maintainers" channel on Matrix if you're interested, to make sure we don't have overlapping efforts on the same thing!

Our trial run of our new RFC process was successful and we've used it to make some adjustments to our draft process document, which will become official soon. In further news about contributor documentation, we're preparing a document explaining the rationales for some existing choices made in the test262 codebase. We're hoping to add to this as new questions come up, so that it's a place where contributors can get answers to the question "why is this like this, and what do I need to know if I want to change it?"

-RPR: Thank you PFC.
+RPR: Thank you, PFC.

## Updates from CoC Committee

@@ -236,7 +235,7 @@ DE: Occasionally we may want to wait before shipping stage 3 proposals. One reas

DE: We have a number of – just a couple – stage 3 proposals that are not ready to ship. One is Temporal. We keep having normative changes for Temporal, but it’s mostly stable. And in general stage 3 is usually stable and should still be understood to be so. We had ShadowRealm almost shipped in Safari with the HTML integration being unfinished. The goal of the presentation is to find a way to document clearly when essential pieces are known to be missing, to reduce the chance of disagreement between implementations about when things are shipped. What we want to avoid is a compatibility matrix. We want to avoid a state where different engines, or different implementations in general, ship different things or different subsets of things, and then application developers have to worry about not just "is this feature there or not", but which version of the feature it is.

-DE: So my suggestion here is to document in the proposals repository, maybe in a separate column just for stage 3. Proposals saying this proposal even though stage 3 is “not ready to ship”.
The strong default would remain that Stage 3 means something is ready to ship. We would be documenting the exceptional cases. There’s a question of how you would set this value. My suggestion is that the proposal champions themselves set the value. And of course this is considered non binding. It’s fine for engines not to ship this column marks and ship things that do have it marks, but it’s a clear central documentation point to make sure that we’re on the same page.
+DE: So my suggestion here is to document, in the proposals repository, maybe in a separate column just for stage 3 proposals, that a given proposal, even though it is stage 3, is “not ready to ship”. The strong default would remain that Stage 3 means something is ready to ship; we would be documenting the exceptional cases. There’s a question of how you would set this value. My suggestion is that the proposal champions themselves set the value. And of course this is considered non-binding: it’s fine for engines to ship things this column marks, or to not ship things it doesn’t mark, but it’s a clear central documentation point to make sure that we’re on the same page.

DE: Some feedback I have gotten so far about this proposal: some people like this idea, that it’s lightweight. Some people argue there should be a mechanism for settling disputes, so that in case the champion is being negligent in saying their proposal is fine, but other people say there’s a real problem, we have some way of settling that. I think it’s important that we work it out if a situation like this comes up, but at the same time, the strongest version of this that one could imagine would be requiring consensus on becoming shippable (“Stage 3.5”). I think that would add significant friction to our process, having an additional fifth consensus-seeking stage, and I’m not the only one in committee who thinks that. I’m not so keen on that very strong version of this.
But certainly, if there’s any kind of disagreement on the marking of a proposal, it’s fair to bring up anything like that at all in plenary.

@@ -248,11 +247,11 @@ DE: So I have the PR for this in the how-we-work repository. And I’m wondering

DLM: Thank you. We discussed this internally on the SpiderMonkey team and we’re in favour of this. Explicit documentation is a good idea, and we like that this is a lightweight process. I agree that we can worry about any sort of dispute resolution if and when a dispute comes up; given that this is non-binding, it seems like we have a dispute resolution mechanism built in. Anyways, yes, support for this. Thank you.

-MLS: The process document is pretty clear. Implementation types expected at stage 3 are spec compliant, but not shipping. That is the time for implementers to try things out and work out the bugs. At stage 4, the implementation types expected are shipping implementations. Now, saying that, we do implement things at stage 3. We’re not super eager to ship them, they usually go into our technology preview, nightly and things like that. I don’t want to change the process document to say something is shippable or not shippable at stage 3. Certainly, implementors can decide if they want to ship something that is at stage 3, that is their own decision.
+MLS: The process document is pretty clear. Implementation types expected at stage 3 are spec compliant, but not shipping. That is the time for implementers to try things out and work out the bugs. At stage 4, the implementation types expected are shipping implementations. Now, saying that, we do implement things at stage 3. We’re not super eager to ship them; they usually go into our technology preview, nightly builds, and things like that. I don’t want to change the process document to say something is shippable or not shippable at stage 3.
Certainly, implementors can decide if they want to ship something that is at stage 3; that is their own decision.

DE: I’m kind of surprised by this comment, because I thought that JSC had repeatedly shipped things to the main version of Safari at stage 3.

-MLS: It’s rare that we do, unless we have high confidence a proposal is stable and isn’t likely to change. Typically we “ship” a stage 3 proposal in our nightly or technology preview. I made this same statement in the past. This is the way we tend to work in WebKit.
+MLS: It’s rare that we do, unless we have high confidence a proposal is stable and isn’t likely to change. Typically we “ship” a stage 3 proposal in our nightly or technology preview. I have made this same statement in the past. This is the way we tend to work in WebKit.

DE: I mean, the goal here is to build high confidence in stage 3.

@@ -264,11 +263,11 @@ MLS: They’re not. Look at the process document. At Stage 3, implementation typ

DE: We had quite a long back and forth about the semantics of this text. I think it’s just genuinely ambiguous, and we can form different kinds of shared understanding about the way we want to do things.

-MLS: I don’t share the understanding you’re talking about. Again, something is not shippable unless it’s fully stage 4. And even stage 4, sometimes we have to go back. And that happens. Stage 3, yes, we and other implementers will implement various features that are at stage 3 but we certainly are reluctant to ship things that are stage 3 to release versions of Safari.
The JSC engine is used for all kinds of other applications.
+MLS: I don’t share the understanding you’re talking about. Again, something is not shippable unless it’s fully stage 4. And even at stage 4, sometimes we have to go back; that happens. At stage 3, yes, we and other implementers will implement various features, but we certainly are reluctant to ship things that are stage 3 in release versions of Safari. The JSC engine is used for all kinds of other applications.

-JHD: MLS, I would say that my interpretation of the process document is that things have to be shipped in order to get Stage 4 which means that in fact, they are shippable at stage 3 - but they’re not required to be shipped at stage 3 which is why it’s fine that safari or any other implementation would choose not to ship until stage 4. I don’t see how there’s any argument it’s stage 3 isn’t shippable because if nothing is shipped in stage 3, nothing will ever get stage 4 according to the process document. That’s always how we’ve interpreted it.
+JHD: MLS, I would say that my interpretation of the process document is that things have to be shipped in order to get Stage 4, which means that, in fact, they are shippable at stage 3 - but they’re not required to be shipped at stage 3, which is why it’s fine that Safari or any other implementation would choose not to ship until stage 4. I don’t see how there’s any argument that stage 3 isn’t shippable, because if nothing is shipped at stage 3, nothing will ever get to stage 4 according to the process document. That’s always how we’ve interpreted it.

-MLS: JHD, I disagree with that. We have Test262 tests required for stage 3. We use those to test the implementations.
Obviously, implementations also do other testing, like making sure it doesn’t break something else, has good performance, and things like that. Again, stage 3 is a feedback process. We are not done when we’re at stage 3. And if we, TC39, think that at stage 3 we’re going to ship betas to the world, we get ourselves into a place where there’s difficulty. We’re going to talk about an issue like that later in this meeting, where we have gotten ourselves into some difficulty with an implementation shipping something that is stage 3. I think we would like to avoid these types of problems.

JHD: I hear what you’re expressing; I’m not arguing that point at all. I’m saying maybe we’re using different definitions of the word “shippable”. I’m saying that the process document always said that the entrance criterion is shipping implementations. You can’t enter stage 4 until a proposal has shipped. I think the word ‘shippable’, according to that definition, means that it happens in stage 3 for someone.

@@ -286,7 +285,7 @@ MLS: We haven’t changed our point of view. And if I understand correctly, a lo

DE: So I don’t want to assert anything about your past or present point of view, but it’s clear that your point of view is not the universal one. So given that –

-MLS: There is no universal one. That is the point I’m making! There is no universal one.
+MLS: There is no universal one. That is the point I’m making! There is no universal one.

DE: I agree. So I’m wondering, given the context –

@@ -336,7 +335,7 @@ SYG: Yeah, I agree with JHD’s interpretation. I think somebody has to ship dur

MLS, I’m interested to hear if you think nonbinding documentation is the way to coordinate those exceptional cases like Temporal? I am not advocating changing the process document to explicitly say that no implementations ship until Stage 4.

-MLS: The issue is that “non-binding” is just that. So it’s a hint, right? It’s not binding. I’m not sure what signal it sends.
The real issue here, I think in all of our minds, and the elephant in the room, is: when do we want developers to use a new feature? I don’t think we want developers to use it until stage 4 unless they fully understand there is a possibility the feature may change.
+MLS: The issue is that “non-binding” is just that. So it’s a hint, right? It’s not binding. I’m not sure what signal it sends. The real issue here, I think in all of our minds, and the elephant in the room, is: when do we want developers to use a new feature? I don’t think we want developers to use it until stage 4 unless they fully understand there is a possibility the feature may change.

MLS: I agree that we need to get some feedback from developers, and I think that’s when we hide something behind a flag and let people try it out, or make it available in a Canary or something like that.

@@ -358,7 +357,7 @@ DE: For a process similar to Apple’s wouldn’t it be useful, given that these

MLS: I find it very useful for every proposal to list the known implementations of the proposal and their shipping status (nightly, canary, release X, behind flag).

-YSV: Actually I did do something like this in my personal tracking of the TC39 proposal’s repo ages ago. I don’t know if anyone remembers that. I had the status where we were in shipping. It’s difficult to keep up to date but maybe we can pull ‘canIUse’ data to get information if I understood you correctly Michael.
+YSV: Actually, I did do something like this in my personal tracking of the TC39 proposals repo ages ago. I don’t know if anyone remembers that. I had the status of where we were in shipping. It’s difficult to keep up to date, but maybe we can pull ‘caniuse’ data to get that information, if I understood you correctly, Michael.

DE: Yeah. We could enable columns for that in the proposals repo. Would that be something that you’re in favour of? For stage 3 and stage 4 proposals.

@@ -392,7 +391,7 @@ SYG: MF, I would not agree to a new stage. Let me be somewhat blunt here.
SYG: So anyways, I think one of the dimensions that standards ought not to prevent folks from competing on is the speed with which they implement and ship something. There’s risk involved with being first shippers, which I think various implementations have all experienced at this point. There are also some rewards. And I don’t want to take that dimension of competition away. For that reason, I don’t want an extra stage in which there is to be coordinated shipping, like a consensus-seeking stage. Some things require coordinated shipping; we can agree on that case by case. That’s totally fine. But as a matter of course, I don’t want proposals to be such that we have to kind of flip the bit at the same time. That is explicitly not something that I want. DE’s proposal to document the exceptional cases is a good start. I do believe these are exceptional cases.

-DE: Whatever requires coordination that we do will have to not require sign off from all browsers at once, for example. You’re saying that’s not – we couldn’t set a bit that says now we’re in lock step mode. Is that what you’re saying?
+DE: Whatever coordination we do will have to not require sign-off from all browsers at once, for example. You’re saying that’s not – we couldn’t set a bit that says now we’re in lockstep mode. Is that what you’re saying?

SYG: That’s what we’re saying. We can all agree to lockstep mode for exceptional proposals. I don’t want that to be a consensus-seeking stage unless it’s really needed; I don’t see a reason for it. I would argue against it as a consensus-seeking stage, and not argue against it for exceptional cases. Exceptions come up. That’s why I’m in general opposed to the new stage: I think it gets us closer to lockstep mode, and I don’t think that’s the role of the standards committee. That was all.
@@ -413,7 +412,7 @@ Presenter: Daniel Ehrenberg (DE)

- [PR](https://github.com/tc39/how-we-work/pull/122)
- [slides](https://docs.google.com/presentation/d/1OvxOZrRmKovnVk4CW6GbvLGS5cnnjP6bJyp4cdC5A4U/edit#slide=id.p)

-DE: Strengthening TC39’s consensus process. You know, we use consensus here. Let’s just review some reasons why it’s a good thing to do. First, it’s a conservative default, which means that we’re going to leave things how they are now rather than mess things up if we have any significant concern. Making a change is a big deal involving lots of implementations, lots of JavaScript developers and we want to get it right.
+DE: Strengthening TC39’s consensus process. You know, we use consensus here. Let’s just review some reasons why it’s a good thing to do. First, it’s a conservative default, which means that we’re going to leave things how they are now rather than mess things up if we have any significant concern. Making a change is a big deal involving lots of implementations and lots of JavaScript developers, and we want to get it right.

DE: Consensus enables certain specialized delegates to have a strong seat at the table to preserve, for example, web compatibility and invariants, and ensures no critical stakeholders are excluded.

@@ -433,7 +432,7 @@ DE: It could be as simple as like, you know, because sometimes we like to have t

DE: It would be great to have, you know, a brief rationale for why people want to support things. Again, I think this is a really low bar. If people can’t articulate why they think something should happen and only the presenter can, does the committee really have consensus on it?

-DE: I would also want to explicitly solicit non-blocking dissent and give space for this to be discussed. Because it currently feels a little too high pressure to raise concerns. This has been a problem for years; years of people being either not raising their concerns or raising their concerns and seeing them be misinterpreted for a block.
Both of those things simultaneously occurred. Maybe a worse problem in the past than recently.
+DE: I would also want to explicitly solicit non-blocking dissent and give space for this to be discussed, because it currently feels a little too high pressure to raise concerns. This has been a problem for years; years of people either not raising their concerns, or raising their concerns and seeing them be misinterpreted as a block. Both of those things simultaneously occurred. Maybe a worse problem in the past than recently.

DE: So, do we have consensus on consensus? Is this a reasonable slight change in the process for gathering consensus at the end of a TC39 topic?

@@ -443,7 +442,7 @@ DE: Yeah. Honestly about the stage 2 to stage 3 transition, I completely agree w

WH: I would disagree with that restriction as well.

-DE: Well, I feel like we frequently have a thing where the chair says: is the objection a stage N concern?’ Anyway, I’m fine of excising this from the document. I was trying to fully document what we do now. We can leave that, you know, to be discretionary or something.
+DE: Well, I feel like we frequently have a thing where the chair says: ‘is the objection a stage N concern?’ Anyway, I’m fine with excising this from the document. I was trying to fully document what we do now. We can leave that, you know, to be discretionary or something.

WH: I would prefer to excise that. People are going to read that text and then try to object to objections.

@@ -500,11 +499,11 @@ KG: Yes. I don’t think we need to do the whole I nominate, I second, mostly be

DE: They write down the names and reasons in the notes as part of this?

-KG: I don’t think they need to have reasons. It’s just presumably the reason is because you think it’s a good proposal. But I would be in favour of the names at least.
+KG: I don’t think they need to have reasons; presumably the reason is just that you think it’s a good proposal. But I would be in favour of the names, at least.
DE: Part of this is that I explicitly want to solicit the reasons. Do you think that makes sense?

-KG: I’m happy for there to be more discussion during this part of the process. I’m mostly happy about that so people aren’t in full agreement So everything is excellent have the space to say that. I think the people who think it is good in exactly the form it is, there’s not much more to be said about it. We have just had the champions presenting on all of the reasons it’s good. Now if you like it for a different reason than the champion, say so. If you are just like I agree with the champion, I want to support it, you don’t need to say anything more than I support advancing.
+KG: I’m happy for there to be more discussion during this part of the process. I’m mostly happy about that so that people who aren’t in full agreement that everything is excellent have the space to say that. For the people who think it is good in exactly the form it is in, there’s not much more to be said about it. We have just had the champions presenting all of the reasons it’s good. Now, if you like it for a different reason than the champion, say so. If you just agree with the champion and want to support it, you don’t need to say anything more than “I support advancing”.

DE: Great. Can we agree on two people as the minimum bar here? I would prefer that. There was a back and forth in the issue. Any thoughts on this?

@@ -583,7 +582,7 @@ SYG: `onOverflow` is a hook that gets called when the tab text is Too long. What

SYG: What about freezing the prototypes? We have tried this in the past, and it is difficult to apply to existing applications, especially ones that want to run off-the-shelf library code. The ‘override mistake’ is kind of endemic everywhere and difficult to work around.
Even if you could do that, and you had a completely first-party environment, apps that have polyfills need to actually mutate built-in prototypes to polyfill missing features, and that puts the onus on the application to find a freeze point. That is a nontrivial task and a deployment concern. There are also size concerns, in that you have to freeze application-defined prototypes as well: the technique is general enough that most of the time you get the most bang for your buck by polluting built-in prototypes like `Object.prototype`, but if you have a large application you could also pollute the application prototypes themselves to escalate privileges and exploit the application. You need strict mode to get non-silent breakages, which is not great for DX if you have to point out the potential issue to the application developers. And most interestingly – this is a very recent CVE, and I suggest folks with an interest in these things follow the detailed walkthrough – there is an application called NodeBB, some sort of server-side forum software, where freezing the prototypes would not have prevented the technique used to exploit the software: they overrode a something.constructor property via prototype pollution. The point is that it was a data-only attack, where you mutate something not necessarily on the prototype, that led to the security vulnerability, and that led us to a solution that is not freezing prototypes. That CVE is interesting in that it is an attack that would have been prevented by the somewhat radical change we’re proposing here, not something that would have been prevented by freezing prototypes.
The TL;DR on at-scale deployment of prototype freezing, despite it being a capability the language already has:
-SYG: We have found at scale deployment to be impractical and can’t Use it to remove the noodle on reducing vulnerabilities here. So what we’re thinking is the starting point is can we cut off Access paths to prototypes instead? And the key observation is that prototype exploits – Prototype pollution exploits rely on unintentional paths to The prototype that the developers didn’t consider. You have three strong property keys and the combinations that Give access prototype and `__proto__` and constructor. Can we cut off access? It’s important of intentional and unintentional access. I said in the previous slide is preventing unintentional Access paths. Our assumption here is that static property access via dot is A good proxy measure of intention by the application developer. If you’re actually typing (obj).prototype assume you mean the Prototype than doing something like object bracket key. All of the attacks in the wild rely on computer access i.e. Unintended access. I want to take a quick side bar. This is a core design mistake that the reason – like, the fact that we have these string property access key paths to These deep, you know, object protocol things is a design mistake, core my mistake of JS been there since day 1 and pointing me to the term that Gila Bracha coined called "stratification". That says meta-level facilities must be separated from base-level functionality. Property access is base-level thing. Prototype fiddling is met at that-level thing. To combine the two things via the same language facility like Property access is opening a can of worms of trouble as we’re Seeing right now. Ideally we would have a stratified thing and we would have Explicit reflection APIs that lets you do the prototype Fiddling but property access can’t do that. It’s too late for that.
As a side bar in the future “stratification” seems like a good Property to keep for any programming language. So why do we want to solve cutting off the access to these String paths in the language? As we have given the mode the common root cause of Encapsulation breakage of data versus code we can’t solve that At user land. It’s impractical to deploy. Without language changes importantly remains outside of the Threat model of existing mitigations of a bunch of things like JS prototypes are just how JS works. The if we don’t change just how JS works, the mitigations we Can’t really work around a core feature of the language. And it’s infeasible, for example, to taint check all data flow Flow. The sanitation is about code. And this is not about. This is about the data-only attacks. And tying into the stratification design principle I think Stratifying prototype access is high impact even by itself In the language. We have already moved in the direction and have object.prototype and had to have `__proto__` for existing code. Can we do something more radical with the opt in mode that I’m About to present?
+SYG: We have found at-scale deployment to be impractical, and we can’t use it to move the needle on reducing vulnerabilities here. So what we’re thinking is: as a starting point, can we cut off access paths to prototypes instead? The key observation is that prototype pollution exploits rely on unintentional paths to the prototype that the developers didn’t consider. You have three string property keys, and combinations of them, that give access: `prototype`, `__proto__`, and `constructor`. Can we cut off that access? The important distinction is between intentional and unintentional access; as I said on the previous slide, the goal is preventing unintentional access paths. Our assumption here is that static property access via dot is a good proxy measure of intention by the application developer.
If you’re actually typing `obj.prototype`, we assume you mean the prototype, as opposed to doing something like `obj[key]`. All of the attacks in the wild rely on computed access, i.e. unintended access. I want to take a quick sidebar. The fact that we have these string-keyed property access paths to these deep object-protocol things is a core design mistake of JS that has been there since day 1. Someone pointed me to the term that Gilad Bracha coined, “stratification”, which says that meta-level facilities must be separated from base-level functionality. Property access is a base-level thing; prototype fiddling is a meta-level thing. Combining the two via the same language facility, property access, opens a can of worms, as we’re seeing right now. Ideally we would have a stratified design with explicit reflection APIs that let you do the prototype fiddling, while property access can’t; but it’s too late for that. As a sidebar, going forward “stratification” seems like a good property for any programming language to keep. So why do we want to solve cutting off access to these string paths in the language? Given that the common root cause is encapsulation breakage between data and code, we can’t solve that in userland; it’s impractical to deploy without language changes. Importantly, it otherwise remains outside the threat model of existing mitigations, because JS prototypes are just how JS works; if we don’t change how JS works, mitigations can’t really work around a core feature of the language. And it’s infeasible, for example, to taint-check all data flow. Sanitization is about code, and this is not about code; this is about data-only attacks. And tying into the stratification design principle, I think stratifying prototype access is high impact even by itself in the language.
We have already moved in this direction: we have `Object.getPrototypeOf` and had to keep `__proto__` for existing code. Can we do something more radical with the opt-in mode that I’m about to present?

SYG: So our current thinking on solving this is a two-part solution. One part is an opt-in secure mode that removes the problematic string-keyed access paths. This is opt-in, and it is backwards-breaking. At the same time, we add new reflection APIs; what those reflection APIs look like is totally up in the air pending discussion. Maybe they could be new symbols, maybe `Reflect.`-whatever. The idea is that we don’t want to take away the capability of prototype fiddling, but we do want to take away the unintentional, really-easy-to-accidentally-get-wrong capability. So, the “secure mode” – which is not a great name, but we need to call it something to be able to discuss it. The whole point of this secure mode is to cut off string-based property access when opted into. And there are two main options on which paths to cut: we can cut off `__proto__` and `prototype`, or `__proto__` and `constructor`. And how do we opt in?

@@ -597,9 +596,9 @@ RPR: I think we can move on. The first question is from JHD.

JHD: Yeah, I mean, for all the examples in your slides: if you opt into a secure mode, you have to know to do that - and if you know to do that, then you also salt your keys, or use `Object.create(null)` or `{ __proto__: null }`, or use a `Map` or something. Unless you turn on the mode by default, I don’t think it would really achieve any of the goals you want. Node, for example, already has a flag that lets you remove the `__proto__` accessor, and you can run it with that - but lots of arbitrary modules in the ecosystem rely on the functionality. I’m incredibly confident that trying to do this by default would break the web in sufficient quantities that it wouldn’t be viable. I don’t see a lot of value in it if it’s required to be opt-in. That said, obviously the exploration area is great.
Even though the number of prototype pollution attacks that turn into real exploits is nonzero, I think it’s small, but still worth addressing. I feel like the biggest benefit would be removing a bunch of false positive CVEs from the ecosystem that cost a lot of developers’ time. But either way, I mean, I think it’s worth exploring - that’s a stage 1 concern - but I wanted to share my skepticism.

-SYG: Noted. I want to lean on SDZ to provide a more detailed answer here. But I want to respond first to this node flag thing. So our hunch is that we’re not saying we’re going to remove `__proto__` entirely. The idea is that this is a two-parted approach where we realize having .property access to __proto__ to .prototype to constructor to keep it working. The way we propose that is with automatic rewriting so we don’t have to manually migrate the entire code base. The other thing about using none prototype objects I think that speaks to the at scale deployment thing. If you had the luxury of time and whatever to basically re rewrite your whole world, then yes you could just never use prototype inheritance at all. That seems a challenge in itself. But at the very least we want to use third party libraries, you can’t really do that. As an application you could opt in the mode. With the automatic rewriting you get the benefits for free. We share your concern. Without the automatic rewriting step that that pure opt in will be difficult to get deploying and working. SDZ, do you have anything to add here?
+SYG: Noted. I want to lean on SDZ to provide a more detailed answer here. But I want to respond first to this node flag thing. So our hunch is that we’re not saying we’re going to remove `__proto__` entirely. The idea is that this is a two-part approach where we recognize that code relying on property access to `__proto__`, `.prototype`, and `.constructor` needs to keep working. The way we propose that is with automatic rewriting, so we don’t have to manually migrate the entire code base.
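A tiny sketch (illustrative only, not the actual rewriting tool) of what such a mechanical rewrite could look like – the dynamic string path is replaced by an explicit reflection call that preserves the same capability:

```javascript
const obj = {};

// Before: meta-access smuggled through ordinary string-keyed syntax;
// this is the path the proposed secure mode would cut off.
const viaString = obj["constructor"];

// After: the same capability expressed through explicit reflection,
// keeping the meta-level intent visible in the code.
const viaReflection = Object.getPrototypeOf(obj).constructor;

console.log(viaString === viaReflection); // true: both reach Object
```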
The other thing, about using null-prototype objects: I think that speaks to the at-scale deployment thing. If you had the luxury of time to basically rewrite your whole world, then yes, you could just never use prototype inheritance at all. That seems a challenge in itself. But if, at the very least, we want to use third-party libraries, you can’t really do that. As an application you could opt into the mode, and with the automatic rewriting you get the benefits for free. We share your concern that without the automatic rewriting step, pure opt-in will be difficult to get deployed and working. SDZ, do you have anything to add here?

-SDZ: Yeah. I want to speak up about the idea of using create null or the literal prototype null as an integration for this. I think it’s important to understand why we think that doesn’t work. We did a few experiments with this. We found a few problems with it. So the first one is you might create an object (inaudible) that doesn’t have any prototype and you think that is secure until some function at to the object might be array or number or string or maybe another object and now that has a prototype prototype, right? What you’re doing is essentially moving the goalpost one level deeper, right? And you really don’t have a way of creating let’s say a string with no prototype or a number with no prototype or array with no prototype. All of which could be polluted if they went into a common practice function. This is code and only protecting one object apart from sort of the issues that you would have in deploying it that is sort of find everywhere where I have an object literal and replace it with this which is granted sort of something that you can do and with the person speaking and saying if you’re willing to do that you are willing to do (inaudible) but I think those would be the strongest reasons why that solution is not good enough.
+SDZ: Yeah.
I want to speak up about the idea of using `Object.create(null)` or the literal `__proto__: null` as a mitigation for this. I think it’s important to understand why we think that doesn’t work. We did a few experiments with this, and we found a few problems. The first one is that you might create an object (inaudible) that doesn’t have any prototype, and you think that is secure, until some value added to the object might be an array or a number or a string or maybe another object, and now that has a prototype, right? What you’re doing is essentially moving the goalpost one level deeper, right? And you really don’t have a way of creating, let’s say, a string with no prototype or a number with no prototype or an array with no prototype. All of which could be polluted if they went into a common merge-style function. And this is only protecting one object, apart from the issues that you would have in deploying it – finding everywhere I have an object literal and replacing it with this – which is, granted, something that you can do, and as the previous speaker said, if you’re willing to do that you are willing to do (inaudible). But I think those would be the strongest reasons why that solution is not good enough.

RPR: Thanks. Let’s see next on the queue is Waldemar.
@@ -674,7 +673,7 @@ PFC: I support this going to stage 1. I think wherever there’s an unintended e

RPR: Justin.

-JRL: Absolutely love this if we could disable the computed property access where the key is possibly `__proto__`. All of the exploits that I have ever seen have been unintentional accesses to the dudner proto when you don’t know what the key is because of some user value. If we prevent – if we can change the behavior so that computed access is one behavior that prevents access and intentional dot proto access allows you to set manipulations on the prototype, I think this could be web compatible. I would love to see it work.
+JRL: Absolutely love this if we could disable the computed property access where the key is possibly `__proto__`. All of the exploits that I have ever seen have been unintentional accesses to the dunder proto when you don’t know what the key is because of some user value. If we prevent – if we can change the behavior so that computed access is one behavior that prevents access and intentional dot proto access allows you to set manipulations on the prototype, I think this could be web compatible. I would love to see it work.

MAH: I have a quick reply. I have seen pollution with proto and dot constructor name access. So dot constructor, dot prototype, things like that. Just preventing proto will not be enough.
@@ -686,8 +685,7 @@ DMM: I think it looks very encouraging. I think it’s worth checking languages

RPR: And Dan.

-DE: This is a really interesting proposal. When I heard about it, it is a little ad hoc. Now that we see this kind of preponderance of vulnerabilities I think it’s important to do in this space. I think it’s important to prioritise mitigation based on both how exploited they are in practice. We can see this is fairly exploited in practice as well as how simple and contained the mitigation is where this is a pretty simple and contained mitigation. So earlier discussion about whether this is a double standard. I think it makes sense for us to bring this proposal to stage -1 because it scores pretty high on on those metrics.
+DE: This is a really interesting proposal. When I heard about it, it seemed a little ad hoc. Now that we see this kind of preponderance of vulnerabilities, I think it’s important to do something in this space. I think it’s important to prioritise mitigations based both on how exploited they are in practice – we can see this is fairly exploited in practice – as well as on how simple and contained the mitigation is, where this is a pretty simple and contained mitigation. So, the earlier discussion about whether this is a double standard:
I think it makes sense for us to bring this proposal to stage 1 because it scores pretty high on those metrics.

RPR: Mark.
@@ -817,7 +815,7 @@ RPR: This all makes sense. Thank you.

### Conclusion/Decision

-* Editors will make this change, with WH's objection noted.
+- Editors will make this change, with WH's objection noted.

## Symbols as WeakMap keys

Presenter: Ashley Claymore (ACE)

- [proposal](https://github.com/tc39/proposal-symbols-as-weakmap-keys)
- [slides](https://docs.google.com/presentation/d/1FMQkmZH5YHsX9G_kPsTQtI3s5bPKJ3hVMJXA2Hz5z1w/)

-ACE: This is Symbols as WeakMap keys. If all goes well, towards the end I will be asking for Stage 4. So this is a bit of context for people that might need it, so as far as I could tell, this goes back to at least GCP’s issue on ECMA-262 (issue [#2038](https://github.com/tc39/ecma262/issues/1194)) back in 2018 saying "why can’t we use symbols as WeakMap keys?". And this issue alone minus all the proposal things has a lot of comments on it. So lots of fun things to read. And that is what this proposal addresses. It says, “yes”, you can use some symbols as `WeakMap` keys and not just `WeakMap` keys but also `WeakSet` entries and a `WeakRef` target and also the target and token of `FinalizationRegistry`. So the whole family of weak and garbage collection related APIs. So in terms of the spec there’s no new APIs per-say, it’s just changing things that were previously a `TypeError` to no longer be a `TypeError`, and that is the observable change. So a big part of this proposal is discussing “which symbols?” and the answer is: all symbols except for those that have been returned from `Symbol.for`, a.k.a. ‘registered symbols’. They are not allowed. All other symbols are, whether that’s a good idea or not. So we reached stage 3 back in June. The PR is open to ECMA-262 and just to note that it hasn’t had a editor approval on that yet but seems like it’s just final editorial tweaking, not normative.
SYG left good comments and I have updated the PR after those. I’m not 100% sure on the policy here of – I know Stage 4 requires editor signoff, so I guess I would like to ask for Stage 4 modulo editor review. I think the PR is 100% of the way there in terms of normative and very clone on editorial changes. We have the test262 tests merged. Thank you to PFC for writing those. That was massively appreciated. We also have two implementations, one V8 and one in JavaScriptCore.
+ACE: This is Symbols as WeakMap keys. If all goes well, towards the end I will be asking for Stage 4. So this is a bit of context for people that might need it: as far as I could tell, this goes back to at least GCP’s issue on ECMA-262 (issue [#1194](https://github.com/tc39/ecma262/issues/1194)) back in 2018 saying "why can’t we use symbols as WeakMap keys?". And this issue alone, minus all the proposal things, has a lot of comments on it. So lots of fun things to read. And that is what this proposal addresses. It says, “yes”, you can use some symbols as `WeakMap` keys, and not just `WeakMap` keys but also `WeakSet` entries and a `WeakRef` target, and also the target and token of `FinalizationRegistry`. So the whole family of weak and garbage collection related APIs. So in terms of the spec there are no new APIs per se, it’s just changing things that were previously a `TypeError` to no longer be a `TypeError`, and that is the observable change. So a big part of this proposal is discussing “which symbols?”, and the answer is: all symbols except for those that have been returned from `Symbol.for`, a.k.a. ‘registered symbols’. They are not allowed. All other symbols are, whether that’s a good idea or not. So we reached stage 3 back in June. The PR is open to ECMA-262; just to note that it hasn’t had an editor approval yet, but it seems like it’s just final editorial tweaking, not normative. SYG left good comments and I have updated the PR after those.
I’m not 100% sure on the policy here of – I know Stage 4 requires editor signoff, so I guess I would like to ask for Stage 4 modulo editor review. I think the PR is 100% of the way there in terms of normative changes and very close on editorial changes. We have the test262 tests merged. Thank you to PFC for writing those. That was massively appreciated. We also have two implementations, one in V8 and one in JavaScriptCore.

ACE: With that I would like to move on asking for stage 4 with explicit support from at least two people as well.
@@ -854,8 +852,8 @@ USA: ACE, congratulations on stage 4. I suppose that’s all for the presentation

### Conclusion/Decision

-* Stage 4
-* Support from JHD and RMS
+- Stage 4
+- Support from JHD and RMS

## JSON.parse source text access

Presenter: Richard Gibson (RGN)

- [proposal](https://github.com/tc39/proposal-json-parse-with-source)
- [slides](https://docs.google.com/presentation/d/1HZVC1MI889MxMjHfmrqGiJtHc5THdmQpFI_ocAyR3q4/edit)

-RGN: So this is an update on `JSON.parse` source text access. Hoping to get through it relatively quickly. I will just jump right in. Background first: We have a lossiness problem with `JSON.parse`, for example arbitrarily precise sequences of digits in the source are parsed into Numbers instances and even though revive functions exist and can interact with the parsed values, they don’t have access to the source and so it’s already lossy. If I want to represent this sequence of nines as perfectly accurate BigInt, I can’t achieve that with the current functionality available. As a related problem, revivers lack context. So if you want to transform only form part of a data structure,
+RGN: So this is an update on `JSON.parse` source text access. Hoping to get through it relatively quickly. I will just jump right in.
Background first: We have a lossiness problem with `JSON.parse`. For example, arbitrarily precise sequences of digits in the source are parsed into Number instances, and even though reviver functions exist and can interact with the parsed values, they don’t have access to the source, so it’s already lossy. If I want to represent this sequence of nines as a perfectly accurate BigInt, I can’t achieve that with the current functionality available. As a related problem, revivers lack context. So if you want to transform only part of a data structure, you’re left to figure out for yourself what any particular invocation relates to. It’s really easy to confuse, for instance, a string that looks like a special data type with the actual data type itself, and lack of that context causes problems and type confusion.

RPR: Sorry, RGN. Point of order that we need someone to help with the notetaking.
@@ -951,6 +949,5 @@ ACE: Thank you everyone. Really appreciate it.

### Conclusion/Decision

-* Stage 4
-* Explicit support from PFC, MM, JHD, ABU
-
+- Stage 4
+- Explicit support from PFC, MM, JHD, ABU
diff --git a/meetings/2023-01/jan-31.md b/meetings/2023-01/jan-31.md
index a3c7ab30..2539c778 100644
--- a/meetings/2023-01/jan-31.md
+++ b/meetings/2023-01/jan-31.md
@@ -4,7 +4,7 @@
**Remote attendees:**

-```
+```text
| Name                 | Abbreviation   | Organization       |
|-------------------- | -------------- | ------------------ |
| Waldemar Horwat      | WH             | Google             |
@@ -45,14 +45,14 @@
| Willian Martins      | WMS            | Netflix            |
```

-## Intl.NumberFormat V3 for Stage 4 
+## Intl.NumberFormat V3 for Stage 4

Presenter: Shane F. Carr (SFC)

- [proposal](https://github.com/tc39/proposal-intl-numberformat-v3)
- [slides](https://docs.google.com/presentation/d/1b627TYDVDDcdae9D80CP5DLnSYX8GU97nmritxVN5Wo/edit#slide=id.g82ae0c50ed_0_111)

-SFC: I’m going to be presenting Intl number format v3 for stage 4.
So I’m going to go ahead and walk through the -- all the slides, including the ones that are -- in order to remind the audience what this is, what the proposal is all about. I first presented this proposal for stage 1 in 2020. It got to stage 2 in 2021, stage 3 later that year, and has been in stage 3 for about a year and a half now. And I’m excited to present it now for stage 4. So what is the proposal? We get a lot of feature requests and things each year. We look at which ones to prioritize. It’s very important that features have multiple stakeholders, already have prior art and Unicode CLDR and not easily implemented in user land. We previously took these three bullet points and have turned them now and codified them into the ECMA 402 contributors guide. As an example, these are how we process some of the feature requests. There’s various features that were requested here. Number ranges was one that was very popular amongst stakeholders. Many stakeholders and secured ELD support. Scientific notation styles. The issue is still open for that. It does not yet have very many stakeholders and the CLDR support is only partial, and therefore, that one is not being included in the proposal. So I’m going to go ahead and walk through all the content of the proposal. If there’s any changes since the stage 3 update in November 2022, the last time I presented on this, those will be highlighted. There’s not very many changes anymore. So as referenced in the first slide, this proposal is bringing number range formatting and includes currency into measurement units. This a screenshot showing an example of a number range. The way that we do this is by adding new prototype methods, range and format range to parts, as well as the plural rules analogue select range. The rest of this slide talks about how the format to parts works, how approximately sign works when the range collapses down to the same number. 
It talks about range to infinity as well as you we do support when the range numbers are not in order. So all these semantics are things that we definitely ironed out in great detail with this group, and these are the semantics that we landed. Another highly requested feature is the grouping enum. We have is used grouping which takes true or false and it’s not expressive based on what we know people want. This is another thing we reiterated on the different ways that we could have implemented this feature. And shown in the table here is what we’ve landed on. We have the min2 strategies, we have the auto strategy, which for backwards compatibility reasons supports the strings true and false. We have the always strategy and hen the false, which is basically turn off a grouping separators strategy. New rounding and precision options. Rounding priority, rounding increment and trailing zero display, all things that have a lot of motivation behind them. Rounding increments, I’m quite happy with the design that we came up with with rounding increment. I think it’s quite clean. Trailing zero display is something that -- another feature that is very commonly requested and actually, you know, ECMAscript is largely going to be setting precedent for good ways to go about implementing trailing zero display. Rounding priority is on the next slide. I’m happy with how we landed here. There are several different ways we could have gone about rounding priority, and yeah, the algorithm that’s currently in the spec is quite clean and I’m quite happy with how that ended up. In interpret strings as decimals, this is one thing I’m highlighting a change that happened since November 2022 at the end of the last meeting. I asked for some feedback on ways to go about this problem of how the set the range of the allowed Intl math mat value. I took that feedback and merged it with feedback from implementers, and this is the statement that we ended up with. 
So Intl mathematical values have range equal to a number, meaning that the magnitude of numbers represented are as equal to the range supported by a number type. But greater precision is allowed. So you can have more significant digits. The maximum number of significant digits is enforced by the fact that Intl number format always rounds numbers to a certain -- to a limited precision or number of fractional digits.
+SFC: I’m going to be presenting Intl number format v3 for stage 4. So I’m going to go ahead and walk through the -- all the slides, including the ones that are -- in order to remind the audience what this is, what the proposal is all about. I first presented this proposal for stage 1 in 2020. It got to stage 2 in 2021, stage 3 later that year, and it has been in stage 3 for about a year and a half now. And I’m excited to present it now for stage 4. So what is the proposal? We get a lot of feature requests each year, and we look at which ones to prioritize. It’s very important that features have multiple stakeholders, already have prior art in Unicode CLDR, and not be easily implementable in user land. We previously took these three bullet points and have now codified them into the ECMA-402 contributors guide. As an example, this is how we processed some of the feature requests. There were various features requested here. Number ranges was one that was very popular amongst stakeholders: it had many stakeholders and secured CLDR support. Scientific notation styles: the issue is still open for that. It does not yet have very many stakeholders, and the CLDR support is only partial, and therefore that one is not being included in the proposal. So I’m going to go ahead and walk through all the content of the proposal. If there are any changes since the stage 3 update in November 2022, the last time I presented on this, those will be highlighted. There are not very many changes anymore.
So as referenced in the first slide, this proposal is bringing number range formatting, and it includes currency and measurement units. This is a screenshot showing an example of a number range. The way that we do this is by adding new prototype methods, `formatRange` and `formatRangeToParts`, as well as the `Intl.PluralRules` analogue `selectRange`. The rest of this slide talks about how the format-to-parts works, and how the approximately sign works when the range collapses down to the same number. It talks about ranges to infinity, as well as what we do support when the range numbers are not in order. So all these semantics are things that we definitely ironed out in great detail with this group, and these are the semantics that we landed on. Another highly requested feature is the grouping enum. We have `useGrouping`, which takes true or false, and it’s not expressive enough based on what we know people want. This is another thing where we iterated on the different ways that we could have implemented this feature. And shown in the table here is what we’ve landed on. We have the min2 strategy; we have the auto strategy, which for backwards compatibility reasons supports the strings true and false. We have the always strategy, and then false, which is basically a "turn off grouping separators" strategy. New rounding and precision options: rounding priority, rounding increment and trailing zero display, all things that have a lot of motivation behind them. Rounding increments – I’m quite happy with the design that we came up with for rounding increment. I think it’s quite clean. Trailing zero display is another feature that is very commonly requested, and actually, you know, ECMAScript is largely going to be setting precedent for good ways to go about implementing trailing zero display. Rounding priority is on the next slide. I’m happy with how we landed here.
There are several different ways we could have gone about rounding priority, and yeah, the algorithm that’s currently in the spec is quite clean and I’m quite happy with how that ended up. On interpreting strings as decimals – this is one thing where I’m highlighting a change that happened since November 2022. At the end of the last meeting I asked for some feedback on ways to go about this problem of how to set the range of the allowed Intl mathematical value. I took that feedback and merged it with feedback from implementers, and this is the statement that we ended up with. So Intl mathematical values have range equal to a Number, meaning that the magnitude of numbers represented is equal to the range supported by the Number type, but greater precision is allowed. So you can have more significant digits. The maximum number of significant digits is enforced by the fact that Intl number format always rounds numbers to a certain -- to a limited precision or number of fractional digits.

SFC: Rounding modes – we are now accepting these nine rounding modes as described here. One of the controversial things that we landed on was how to do the capitalization as well as how to name these things; this is what we ended up with. This is now implemented. Sign display "negative" is a small feature that came in from a user; it was very small and easy to implement. We went ahead and implemented this one. It makes total sense. It changes the semantics slightly around how to deal with negative zero when displaying the minus sign. Okay, and that’s the -- that’s the extent of the features that are included in this number format V3 proposal. So now let’s look at stage 4. So the entrance criteria for stage 4 are “do we have test262 tests”. If you go to the speaker notes on these slides, I listed all of the test262 tests for all of the features that are shown in the slide show. Thanks so much to our partners at Igalia for helping implement all of the test262 tests.
Two compatible implementations with passing tests: the -- this is implemented in all three of the browsers. You can click these links to see the details. Chrome is already shipping the proposal. Firefox is available in Nightly. I believe Safari is shipping in version 15.4, according to the issue. Next, significant in-the-field experience with shipping implementations, as seen above. A pull request has been sent. The pull request is shown here. I’ll go ahead and switch over to the pull request. So this here is the pull request. It’s all green. It’s passing. This is what the pull request looks like. https://github.com/tc39/ecma402/pull/753. The whole proposal is integrated in here. And this corresponds to the, of course, proposal diff that has been, you know, very deeply reviewed. Thanks so much to RGN for his help in getting this all ready and getting the pull request ready for review. There were a lot of other changes that also went in before it was finalized, before putting up the final pull request. The Stage 4 criteria also say all ECMAScript editors have signed off on the pull request. That hasn’t happened yet because of the timing. But all of the issues have been resolved, and I appreciate, again, RGN’s assistance in getting all those things together. So, you know, I presume that the editors will complete their review once they’ve had a little bit more time to finish reviewing the pull request. So, yeah. I guess that’s all. So let me pull up the queue. Let me see if there’s anyone on the queue, and then I’d like to ask for stage 4.
@@ -71,12 +71,12 @@ USA: Thanks for all the work and +1 for stage 4 from me.

BT: All right, thank you. And, DLM says plus 1 for stage 4 and no need to speak. Thank you for that. All right, we’ve heard some explicit support. We haven’t heard any concerns raised in the last couple minutes here. So I think that sounds like stage 4 consensus to me. Congratulations.

-SC: Thank you, everyone. +SC: Thank you, everyone.
-BT: Michael has his hand up in the actual meeting. Oh, +BT: Michael has his hand up in the actual meeting. Oh, that was a clap. I see. Yeah, I make that mistake too. -#### Conclusion/Resolution +### Conclusion/Resolution Proposal reached Stage 4 consensus, with explicit support from DE, RGN, DLM @@ -95,11 +95,11 @@ PFC: (via queue) I support this. BT: Shane, plus 1. Reviewed the PR, no need to speak. -USA: Perfect. Thanks, everyone. Especially, before I finish, I wanted to thanks all the implementers, but especially Frank Tang and Andre Bargul and Daniel Minor for being quite proactive in these things. It really helped a lot. All right, that’s all I needed. So a bit of time back to the committee, and thank you all. BT: All right, thank you. +USA: Perfect. Thanks, everyone. Especially, before I finish, I wanted to thanks all the implementers, but especially Frank Tang and Andre Bargul and Daniel Minor for being quite proactive in these things. It really helped a lot. All right, that’s all I needed. So a bit of time back to the committee, and thank you all. BT: All right, thank you. ### Conclusion/Resolution -* TC39 consensus for normative PR https://github.com/tc39/proposal-intl-duration-format/pull/126. This pull request alters the output of `formatToParts` in many ways: fixes bugs, makes the output more granular as well as makes it more consistent with `NumberFormat` and the rest of `Intl`. +- TC39 consensus for normative PR https://github.com/tc39/proposal-intl-duration-format/pull/126. This pull request alters the output of `formatToParts` in many ways: fixes bugs, makes the output more granular as well as makes it more consistent with `NumberFormat` and the rest of `Intl`. ## Problems with import assertions for module types and a possible general solution + downgrade to Stage 2 @@ -111,9 +111,9 @@ Presenter: Nicolò Ribaudo (NRO) NRO: Okay, so hello, everyone. 
We’ve been experimenting with web platform integration – how the current import assertions proposal fits the web’s needs – and we found that there are some problems. So, as an outline: I’ll first go through the current semantics of the assertions proposal, then what we would need, then a possible solution, and then maybe a request to downgrade to stage 2. So, just to recap import assertions: they allow passing some, well, assertions to the host when importing modules, so the host can validate that the imported module respects some conditions and can prevent the evaluation of the module. They are used, for example, to allow importing JSON files or CSS modules while making sure they’re not files with a different extension that would -- allow executing code unexpectedly. And there are some invariants on what the host can do with these assertions. They can only influence whether a module is loaded, and not how. So, for example, you cannot define different resolution strategies if there is an assertion. They can just be assertions about properties that the host can detect using other mechanisms. And import assertions should not be used as part of the cache key for modules. The module cache is used by hosts to make sure that importing the same module multiple times actually gives you back the same module, the same namespace, and doesn’t re-evaluate the module every time. And hosts should not use assertions in the cache key, which means if you import the same module – the same specifier – multiple times with different assertions, you should always get the same module back. And due to integration problems with some host specs, namely HTML, the proposal doesn’t disallow using this as part of the cache key, but just recommends not doing so. So, as I was saying, these host invariants guarantee that if multiple imports, even with different assertions, have the same specifier, they should all give back the same result. A bit of history.
So the proposal was originally for generic import attributes, where you could pass any type of data to the host. Even if the only use cases were JSON, CSS, and HTML modules, there was already some desire to potentially make this more powerful than just specifying the module type. However, it was then restricted to only specifying the type of the imported module with a single string, since that was the only practical use case we had at the time. And then it was reverted back to the original extensible syntax, with the restriction that these attributes were not part of the cache key, and the keyword was replaced multiple times – first from `with` to `if` and then from `if` to `assert`, giving the proposal as it is now – and it was finally approved for stage 3 in September two years ago. We later relaxed the restriction so that HTML could use -- that hosts could use the assertions as part of the cache key. Okay, so let’s now see what the web needs. On the web there are resources with different types, such as images, JavaScript, or CSS files, and for every type the web has different loading strategies depending on how that resource is going to be used. For example, it has different CSP policies for scripts and CSS files, because scripts are more powerful, so it makes sense to restrict them in different ways. And it also sends different information to the server to tell it how the file is going to be used, such as the Sec-Fetch-Dest and the Accept HTTP headers. For example, if you’re loading a stylesheet, the request will tell the server that the browser is expecting a CSS MIME type and that this file is going to be used as a stylesheet; and similarly for the script tag, the Accept header is different and the destination is going to be script. Can we do something similar with import assertions? Because ideally, importing a CSS module should work similarly to how importing a stylesheet works.
So HTML would like to send to the server this Accept header, or to send the style fetch destination. And we can only do this if the browser knows how this module is going to be used. And also, talking with developers, with tooling authors, it came up that assertions are not always fully interpreted as just assertions. It’s hard for developers to understand what is causing a module to be interpreted in a certain way: whether it’s the type assertion that is causing this data to be loaded as JSON, or whether it’s the MIME type or the file extension. And right now, the assertion matches one-to-one the MIME type or extension of the imported module, so it’s not really wrong to say that the type assertion is guiding the way the module is interpreted – or at least, you cannot observe that it’s false. And there have also been some attempts, even by popular tools, to use import assertions to guide how modules are interpreted. For example, TypeScript was considering using a resolution-mode assertion in their type imports to tell the TypeScript compiler how to resolve a module, and Bun was considering using assertions to load macro files. And this is not what assertions are meant to be used for according to the proposal, since they should only assert certain properties of the imported module. So we’ve seen that assertions do not solve what HTML will need, what the web will need, and also that they don’t really match the mental model of developers. So we started to think about a possible solution. And what we have found – and, again, this is just a possible solution, it’s not something we’re proposing right now – is to explicitly allow the type to affect how a module is loaded or evaluated. For example, HTML could use the type to send the proper headers to the server. And we could also update the syntax to clarify that they’re not just assertions anymore: for example, we could go back to the original `with` keyword.
And there are also other possible benefits to this solution, such as being able to use the syntax space for other proposals that are currently trying to extend how import statements work, such as import reflection and deferred module evaluation. Since they are also modifying the import behavior, they could maybe reuse the same options bag as the new import attributes. In previous discussions about the proposal, before the restriction on what hosts can and cannot do, there were some concerns that if the import attributes were too flexible, it would be hard to write portable code, because every host, every engine, every tool would come up with their own custom import attributes. So a possible compromise would be to specify within ECMA-262 which are the valid attributes and how they behave. We could start with just the type import attribute, whose behavior is delegated to the host, and future proposals could introduce new valid attributes, such as deferred or reflect, as shown in the slide before. Another alternative that was considered was to only allow a single string to modify the import type, but that was considered too restrictive, because developers would have needed to invent their own DSL inside this restricted syntax space to be able to express more info. And even if this possible solution right now only allows a type attribute whose value is a string, and so has the same expressivity problems, it could be expanded in the future to allow new attributes or to allow complex values. So instead of just a string being passed to the host, we could, for example, add array values or booleans – anything that we might need as part of this extended options bag. So we have a possible solution, but there is a big problem: import assertions have been at stage 3 for a while, and they’ve already been shipped in some engines. So how can we think about a different solution if there are already shipping implementations of this feature?
Right now, existing tools and runtimes shouldn’t do anything. It’s fine to not unship the current implementation, because there are possibly users relying on that, and we can start collecting in the background some statistics, some information about how frequently import assertions are used, to see if it’s possible to maybe one day unship them. In parallel, we can work on new import attribute syntax and semantics that we can agree on, and implementations can start shipping them, while maybe trying to align the assertion behavior to this new behavior while keeping the original syntax, so that users would likely not notice that there is any difference, and websites – or server-side scripts – already using the assert syntax would continue working, even if with slightly different semantics in some cases. And in the long term, hopefully we could remove assert from the language. If we see that it’s not compatible to remove, maybe assert could become normative optional or deprecated, and we could still give users a keyword that represents what the feature actually does, while still keeping compatibility with the current Stage 3 proposal. So, again, the next steps: please, let’s stop shipping import assertions, so we have more time to design a proper solution that would fit the needs of HTML. The syntax I’ve shown is still a strawperson. We still need to find a solution that satisfies everyone – as you might remember, finding consensus here has not been easy. We’ve already discussed a lot of this, and we already have many desires, many constraints we need to take care of; hopefully we’ll find a solution that satisfies everyone. But don’t consider the syntax as shown as final, please. If you want to work with us on finding a possible final syntax, we have calls where we talk about modules every two weeks, and you can join us in those calls, and we will propose something hopefully in a future meeting – hopefully March or May.
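For concreteness, the strawperson shift being described (this is the direction from the slides, explicitly not a final design) looks roughly like this:

```js
// Current stage 3 syntax: an assertion about the module's type.
import config from "./config.json" assert { type: "json" };

// Strawperson: back to the original `with` keyword, where the attribute
// may drive how the module is fetched and interpreted, not just
// validate it after the fact.
import sheet from "./theme.css" with { type: "css" };

// The same options bag could later be shared by other module proposals
// (attribute names here are purely illustrative).
// import helpers from "./helpers.js" with { defer: true };
```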
If you want to check it, I have a draft PR for the syntax and semantics shown in the slides, but, again, consider that a very early draft, since anything could still change. And lastly, since we’re considering all these changes, potentially changing some of the major semantic points of the proposal or maybe the syntax, I would like to ask for a downgrade to stage 2, because right now the proposal doesn’t need implementations to test the semantics; we need to step back into the design phase, and stage 2 might be the right stage to do so. Okay. Let me see if there’s anything on the queue.

-BT: We have quite a long queue. So if you’re ready to discuss, we can start with JHD. 
+BT: We have quite a long queue. So if you’re ready to discuss, we can start with JHD.

-JHD: This was on one of your earlier slides where you talked about developers don’t necessarily have the mental model about import assertions. At the time that this proposal went to stage 3, the understanding and plan as far as anyone was aware of was that node was planning on shipping importing of JSON modules both with and without the assertion, and I think that had that happened, developers would be, I think, pretty clear that that’s what this does. But node decided through the actions of one contributor to only ship it with the assertion, and so there isn’t a currently any way, in any part of the ecosystem to get a module both with and without an assertion. I just wanted to comment on that.
+JHD: This was on one of your earlier slides where you talked about developers don’t necessarily have the mental model about import assertions. At the time that this proposal went to stage 3, the understanding and plan as far as anyone was aware of was that node was planning on shipping importing of JSON modules both with and without the assertion, and I think that had that happened, developers would be, I think, pretty clear that that’s what this does.
But node decided through the actions of one contributor to only ship it with the assertion, and so there isn’t currently any way, in any part of the ecosystem, to get a module both with and without an assertion. I just wanted to comment on that.

NRO: And note that the web – the HTML spec – does the same as node, so in HTML you can only import a JSON module with the assertion.

@@ -133,9 +133,9 @@ NRO: It’s also used in node and deno the unflagged and not unflagged.

JHD: Sure, but we’re worried about maintaining web compatibility with regards to removal, not maintaining compatibility outside the web. Is there a reason not to recommend it be unshipped now?

-JRL: The tooling system has extensively adopted this, it exists everywhere because we kind of figured this would be adopted as syntax, even if not everyone is making use of the syntax currently. Certain bundlers are. The wait, if we were trying to unship this feature and get it out of the ecosystem, it’s just not going to happen. Like, no one is going to unship the feature from the parser, from their transformer to let us figure out what’s going to happen, because there’s already code that’s been written with this assumption that we need assertions. The wait, if we move to a new keyword, I’ll talk about this later, we’ll save this for later. Sorry. 
+JRL: The tooling system has extensively adopted this; it exists everywhere because we kind of figured this would be adopted as syntax, even if not everyone is making use of the syntax currently. Certain bundlers are. The wait, if we were trying to unship this feature and get it out of the ecosystem, it’s just not going to happen. Like, no one is going to unship the feature from their parser, from their transformer, to let us figure out what’s going to happen, because there’s already code that’s been written with the assumption that we need assertions. The wait, if we move to a new keyword – I’ll talk about this later, we’ll save this for later. Sorry.
-SYG: I mean, sure, the flippant answer is TC39 cannot force anything to be unshipped. The less flippant answer, Chrome folks, we have discussed internally the possibility of unshipping this and given how long it’s been out and — I’ll go into this later in my later item, and given the -- what seems like anecdotal evidence right now, we haven’t done the measurements of extensive adoption in other betters of V8 and the tooling ecosystem, and the original reason why we did this import assertions in the first place, I think there’s a lot of risk in not shipping and we have decided to not unship. So while, you know, committee can recommend something to be unshipped, I suppose.
+SYG: I mean, sure, the flippant answer is TC39 cannot force anything to be unshipped. The less flippant answer is, as Chrome folks, we have discussed internally the possibility of unshipping this, and given how long it’s been out (I’ll go into this in my later item), and given what seems like anecdotal evidence right now (we haven’t done the measurements) of extensive adoption in other embedders of V8 and in the tooling ecosystem, and the original reason why we did import assertions in the first place, I think there’s a lot of risk in unshipping, and we have decided to not unship. So, while the committee can recommend something to be unshipped, I suppose.

JHD: Right.

@@ -153,7 +153,7 @@ SYG: I will clarify a bit more. I do not want to say it is web compatible to rem

JHD: Thanks, I think that answers my question.

-MLS: So can’t V8 just add the feature to embedded implementations only and not to Chrome? If it’s not being used in the web, but being used by Node and other things? 
+MLS: So can’t V8 just add the feature to embedded implementations only and not to Chrome? If it’s not being used in the web, but being used by Node and other things?

SYG: I just don’t know if it’s not widely used. Like, we can add counters, but there’s also a delay there.
It has to reach stable for it to hit the max population. But, like, yes, I agree with you if it didn’t go out in – what version was this, M90 something? Let me see. It was a while ago. It’s been like a year and a half, I think. Like, on paper, I agree with you. But given the timeline here, I just don’t know.

@@ -175,7 +175,7 @@ BT: MM says “disagree about ES6 history interpretation”, but that’s the en

MM: Yeah, the -- I’m not disagreeing with the moral of the story with regard to what we should be doing now, but I do think that -- I don’t want -- I do think the ES6 history is much more nuanced than that. And altogether, it -- it was very good that we adopted the policy that we did, even though it caused pain and the alternative would have been worse.

-DE: Yeah, I guess what the alternative course of action should have been is pretty subtle. But some -- some logic at the time was, oh, this is fine. There could have been more investigation also. 
+DE: Yeah, I guess what the alternative course of action should have been is pretty subtle. But some -- some logic at the time was, oh, this is fine. There could have been more investigation also.

MM: I think that’s too -- I think even that’s too simple. We can take this offline. It does not affect the current debate.

@@ -193,7 +193,7 @@ MLS: This presentation and the discussion that we had online before the meeting

SYG: Not to me. Like, what do you mean by contradictory?

-MLS: What we’re trying to assert is now, couldn’t it be contradictory? If so, it’s not an assert anymore. 
+MLS: What we’re trying to assert now – couldn’t it be contradictory? If so, it’s not an assert anymore.

SYG: I thought the problem was that we need to change the request at request time, and we want that –

@@ -227,7 +227,7 @@ SYG: Okay.
After your next response, maybe after JRL’s, I would like one of th

JHD: My other item is that through the progression of the import assertions proposal there weren’t just two options – not just “syntax” or “inside the specifier”. I’m not worried about microsyntaxes inside the specifier. It was rejected by the ecosystem when Webpack tried it, and it is widely considered to be a bad practice. I don’t think it would take off even if various tools tried to do it again. The third option that was discussed was out of band: like CSP or import maps, a separate header or file or something. I think that that is totally workable and doesn’t have the dangers of specifiers, so I think it would be worth reconsidering all of the originally discussed options now that we’ve discovered that the one we went with doesn’t work.

-JRL: So I have two comments here. One is actually my comment and response to JHD and one is my actual topic that I want to discuss. For JHD, I don’t think `assert` actually matters in the end. If you – we go this route where we remove the restriction and your loader or your bundler or host cannot understand what you meant, it’s still going to assert because it’s going to fail. It’s still an assertion in the end. For a developer looking at this, I don’t think they care at all. If I say assert type CSS and transforms to CSS module that I import, that’s fine. If I say assert whatever and it doesn’t succeed, then it assert in the end because the browser couldn’t handle it. Like, it’s still the same thing. The developer perspective where what they can observably see happens honestly isn’t going to change. Because we have to differentiate at request time and cache time right now. The way it currently behaves is the way it continues to behave because that’s the practical end result of our requirements. Specifically why do we need to change the keyword at all? This is my main topic. All it’s going to do is cause churn in the tooling ecosystem that is unnecessary.
The end result we get to is a ‘with’ keyword with otherwise same syntax and same behavior as what we currently have. But we’re telling everything that’s already been written with the assert they need to update to the new keyword and it’s causing turn to get to the same result result. It seems unnecessary and painful. Like, in order for me to have adopted the assert key keyword originally that we did in the old project, I had to maintain my own parser so it could actually finish parsing and I had to set up the tooling so it could use my parser rather than the official acorn parser. It’s pain for developers to adopt the syntax because they know where it’s going. It will again take a couple of months to a year for all the tooling to update to a new with keyword just for it to do the exact same thing it’s currently doing. It’s just so painful and it’s unneeded. 
+JRL: So I have two comments here. One is actually my comment and a response to JHD, and one is my actual topic that I want to discuss. For JHD, I don’t think `assert` actually matters in the end. If we go this route where we remove the restriction, and your loader or your bundler or host cannot understand what you meant, it’s still going to assert, because it’s going to fail. It’s still an assertion in the end. For a developer looking at this, I don’t think they care at all. If I say assert type CSS and it transforms the CSS module that I import, that’s fine. If I say assert whatever and it doesn’t succeed, then it asserted in the end, because the browser couldn’t handle it. Like, it’s still the same thing. The developer perspective – what they can observably see happen – honestly isn’t going to change, because we have to differentiate at request time and cache time right now. The way it currently behaves is the way it continues to behave, because that’s the practical end result of our requirements. Specifically, why do we need to change the keyword at all? This is my main topic.
All it’s going to do is cause churn in the tooling ecosystem that is unnecessary. The end result we get to is a ‘with’ keyword with otherwise the same syntax and same behavior as what we currently have. But we’re telling everything that’s already been written with assert that it needs to update to the new keyword, and it’s causing churn to get to the same result. It seems unnecessary and painful. Like, in order for me to have adopted the assert keyword originally in the old project, I had to maintain my own parser so it could actually finish parsing, and I had to set up the tooling so it could use my parser rather than the official acorn parser. It’s pain for developers to adopt the syntax because they know where it’s going. It will again take a couple of months to a year for all the tooling to update to a new with keyword just for it to do the exact same thing it’s currently doing. It’s just so painful and it’s unneeded.

JHD: Whether it’s unneeded - you’re presupposing that changing it to the `with` keyword and removing restrictions is something that will have consensus, and it’s too early to know that with confidence.

@@ -275,14 +275,13 @@ DE: So if we can adopt the shared plans along what Nicolo presented we iterate o

SYG: Yes, I agree with that. The in-band versus out-of-band design point seems unrelated to the restriction thing. To the extent that the problem I think – to explore out-of-band configuration for the space – is different than the stage 1 problem statement, I guess the practical upshot is that I would also block the downgrade to stage 2 if the understanding of the consensus was that in-band versus out-of-band was reopened to discussion. That is not my understanding of what a downgrade here would entail. If it is, I would block the downgrade.

-BT: So it sounds like there’s a discussion that needs to be had here about whether we can communicate that the proposal is stage 2 but limit the scope of the stage 2 debate.
There objections if we keep the discussion on that topic until the end about six minutes from now? I think that means JRL that we’ll skip your reply and go to your new topic, if that’s okay. 
+BT: So it sounds like there’s a discussion that needs to be had here about whether we can communicate that the proposal is stage 2 but limit the scope of the stage 2 debate. Are there objections if we keep the discussion on that topic until the end, about six minutes from now? I think that means, JRL, that we’ll skip your reply and go to your new topic, if that’s okay.

JRL: I think my two topics are the same, unfortunately. If we want to downgrade to stage 2, that’s fine. If the champions are looking for that because they want to change the keyword, that’s fine. Going back to stage 2 absolutely cannot mean we’re considering out-of-band configuration. The proposal – whatever we call it, whatever the keyword is – is in-band configuration of the module.

-JHD: I think that for all the reasons that have been discussed it seems clear to me that this proposal shouldn’t be currently be stage 3 until we figured out the things that are in flux and as such, I don’t think that it makes sense to attach arbitrary restrictions to what can be discussed. I think it’s clear there wouldn’t be consensus to demote it beyond stage 2, fair enough. It’s clear there are strong opinions about what the proposal should or shouldn’t be able to do in many directions. That also means that consensus may be difficult to obtain. I think that the point of the process is to encourage discussion, not to restrict it. So I hope that we simply can agree that it belongs in stage 2 while we figure these things out and we’re able to have good-faith discussions while we do so.
+JHD: I think that, for all the reasons that have been discussed, it seems clear to me that this proposal shouldn’t currently be at stage 3 until we figure out the things that are in flux, and as such, I don’t think that it makes sense to attach arbitrary restrictions to what can be discussed. I think it’s clear there wouldn’t be consensus to demote it beyond stage 2, fair enough. It’s clear there are strong opinions about what the proposal should or shouldn’t be able to do in many directions. That also means that consensus may be difficult to obtain. I think that the point of the process is to encourage discussion, not to restrict it. So I hope that we can simply agree that it belongs in stage 2 while we figure these things out, and that we’re able to have good-faith discussions while we do so.

-BT: That drains the queue. I think process-wise because we don’t have a rigorous, I guess, downgrade process, the downgrade I think is more of a, you know, being clear about setting expectations kind of situation and so I think it would be reasonable for us as a deliberative body to say here is the scope of the discussion we expect to have during stage 2. And I think the champions could probably stay at stage -3 as an alternative so those are the kind of options that we’re weighing here.
+BT: That drains the queue. I think, process-wise, because we don’t have a rigorous, I guess, downgrade process, the downgrade I think is more of a, you know, being clear about setting expectations kind of situation, and so I think it would be reasonable for us as a deliberative body to say here is the scope of the discussion we expect to have during stage 2. And I think the champions could probably stay at stage 3 as an alternative, so those are the kind of options that we’re weighing here.

NRO: Maybe we can schedule an item on the exact wording of what you’re proposing, if you want to include these conditions for the stage 2 downgrade, so it is clear what we’re asking for consensus on.
@@ -308,11 +307,11 @@ BT: We will return to this topic, then. ### Conclusion/Resolution -* The committee came to understand the [web integration issues](https://github.com/whatwg/html/issues/7233) with import assertions, and considered multiple alternatives which enable fetches for non-JS module types to be driven by the declared imported type. Changes to both syntax and semantics were under discussion; one possibility is to change only semantics and leave syntax the same. +- The committee came to understand the [web integration issues](https://github.com/whatwg/html/issues/7233) with import assertions, and considered multiple alternatives which enable fetches for non-JS module types to be driven by the declared imported type. Changes to both syntax and semantics were under discussion; one possibility is to change only semantics and leave syntax the same. -* The champions requested demoting the proposal to Stage 2, but there was disagreement about the scope of the investigation during Stage 2. For now, there is no change in stage, but it is noted that the champions have requested that no additional implementations ship the proposal (while also *not* requesting that existing implementations unship). +- The champions requested demoting the proposal to Stage 2, but there was disagreement about the scope of the investigation during Stage 2. For now, there is no change in stage, but it is noted that the champions have requested that no additional implementations ship the proposal (while also *not* requesting that existing implementations unship). -* There will be an overflow topic to attempt to draw a conclusion. For now, the proposal remains at Stage 3, but there is a shared understanding that changes need to be made. +- There will be an overflow topic to attempt to draw a conclusion. For now, the proposal remains at Stage 3, but there is a shared understanding that changes need to be made. 
## Explicit Resource Management Stage 3 update

@@ -329,7 +328,7 @@ MM: So I think we should not permit using in an `eval` for one thing it is an ou

KG: There's not meaningful complexity involved in adding it. It’s like a six word change tops. I suspect that in engines it’s more work to prohibit it. I don’t think – like, if you have some other reason for not wanting it, I don’t feel strongly about it. I don’t think the complexity is much reason.

-MM: The reason why without the scope it is not obviously a block scope and value the letter or const have to understand it is a block scope to not be surprised. There is that surprise hazard if you’re eval doing that and not using the curlies, if you need to put the curly curlies in in order to get the eval accepted than the reader reading the code is simply clearer to the reader what the meaning is of what they’re looking at. If I could – if we could retroactively have had it be the case that for the let and const also force the inclusion of the curlies so somebody reading the code would see the curlies that would have made mis misunderstanding the meaning of the code that would have reduced the misunderstanding hazard. So I just think it’s not adding value to allow it without the curlies so I would continue to require the curlies. 
+MM: The reason is that without the curlies it is not obviously a block scope, and a reader of the let or const has to understand that it is a block scope to not be surprised. There is that surprise hazard if your eval is doing that and not using the curlies. If you need to put the curlies in in order to get the eval accepted, then it is simply clearer to the reader what the meaning is of what they’re looking at.
If I could – if we could retroactively have had it be the case that let and const also force the inclusion of the curlies, so somebody reading the code would see the curlies, that would have reduced the hazard of misunderstanding the meaning of the code. So I just think it’s not adding value to allow it without the curlies, so I would continue to require the curlies.

RBN: As champion I don’t have a strong preference. I’m fine with it not being supported, but I see the fact that let and const are supported in `eval` and are block scoped as a rationale for why it should be considered. It is a const binding with special semantics that occur when the containing block scope exits. It seems odd for it not to be supported.

@@ -347,13 +346,13 @@ Poll: How strongly do you feel that 'using' should be allowed at the top level o

??: Indifferent is different.

-* poll results:
-  * 0: strong positive
-  * 4: positive
-  * 0: following
-  * 0: confused
-  * 7: indifferent
-  * 3: unconvinced
+- poll results:
+  - 0: strong positive
+  - 4: positive
+  - 0: following
+  - 0: confused
+  - 7: indifferent
+  - 3: unconvinced

DE: I voted indifferent and we should have a positive or negative scale.

@@ -385,15 +384,15 @@ BT: Any objections to taking this PR? I hear no objections. So I think you can m

RBN: Okay. And, again, all the other ones were mostly editorial concerns. There’s this one which, again, I don’t know if we require consensus on; it has no specific observable behavior – nothing can be introduced here that would actually be disposed. So I think – and the rest are essentially just editorial changes, since the behavior is identical. Let me just jump to the end of this. Again, the proposal spec is currently at the explicit resource management repo. There is spec text available there and also a PR against ECMA-262. I will be working on the test262 test changes in the near future.
I will also be trying to reach out to implementers to determine if anyone is looking into implementations of this. We have been discussing – or I have been discussing – this with Shu and others on the shared struct call, as potentially being supported for things like mutexes and condition variables as part of the work that we’re looking at there. So I’m interested in whether other implementations are looking at this as well. And that’s all I have for this now.

-BT: Thank you RBN. RBN: Thank you. 
+BT: Thank you RBN. RBN: Thank you.

BT: With that, I think we cannot cram another item in the nine minutes remaining before lunch. So if there are no objections we will break early. See you back at 1:00.

### Conclusion/Resolution

-* Ban ‘await’ as Identifier in ‘using’ (PR #138): Approved
-* Support ‘using’ at top level of ‘eval’ (Issue #136): Rejected
-* May consider a needs-consensus PR in the future based on implementer/community feedback.
+- Ban ‘await’ as Identifier in ‘using’ (PR #138): Approved
+- Support ‘using’ at top level of ‘eval’ (Issue #136): Rejected
+- May consider a needs-consensus PR in the future based on implementer/community feedback.

## Discuss SuppressedError argument overlap: error and cause

@@ -404,7 +403,7 @@

Presenter: Jordan Harband (JHD)

JHD: Here we have `SuppressedError`. Like all error types, every argument is optional – if you provide no arguments, it works fine. The only way that trying to create the error can throw is, I think, if you provide a message that has a `toString` that throws, or there’s something in `InstallErrorCause` – not even sure what that can be. It’s highly unlikely anyone will get an exception out of calling the constructor. You’ll notice here that there are four arguments listed. The message argument is obvious - that’s present on every error. The options argument is also present on every error as of the “error cause” proposal.
The `suppressed` argument is the thing that the `SuppressedError` is suppressing, which makes perfect sense. What I’m talking about today is the `error` argument, which I believe is described in the readme of the proposal as the cause of the suppression, and semantically that’s how I understand it to be. While implementing a polyfill for this, I realized that you could have a `SuppressedError` with an error as the cause, and also a cause as the cause, and that seemed bizarre to me. So what I’m proposing is a normative change - one of the following options, unless someone suggests an option I have not considered.

-JHD: One option is just remove the error argument entirely so that you construct it with a cause when you want it to indicate that argument. Another option is to remove the spec line on line number 4 - `InstallErrorCause`. It could still in the future optionally take an options argument if there were additional options added. 
+JHD: One option is to just remove the error argument entirely, so that you construct it with a cause when you want to indicate that argument. Another option is to remove the spec line on line number 4 - `InstallErrorCause`. It could still, in the future, optionally take an options argument if there were additional options added.

JHD: The third is to throw an exception if both `cause` and `error` are provided.

@@ -420,7 +419,7 @@ JHD: That’s right.

RBN: I did have a slide on this. I want to kind of reiterate my thoughts there. When we added error cause to the language, it had this kind of, I would say, special nature, in that we added the ability to install an optional cause object on every built-in error object as a way to say that perhaps this TypeError was caused by something else you want to follow back – kind of indicating a direct relationship between the error that you’re creating and the error that produced it.
But it has this kind of privileged perception because it has a special way of installing it: there is specific spec text around installing it, and it is not just attached by users as they need. And that gives the perception that this is a special set of initial meta information to attach to any error. The suppressed error property, on the other hand, is not optional. The fact that it doesn’t error, or that you can pass undefined in the constructor, or call the constructor without providing the argument for it, is just the nature of the JavaScript language allowing you to do that in any case. But undefined is a valid error because you can throw undefined in JavaScript. The error property itself is not optional. It’s not optionally installed and always exists. That’s because the SuppressedError’s intent is to model the specific relationship between an error that was thrown that suppresses another error that was already being thrown, specifically by using declarations. This could potentially be used for other things in the future. And it kind of circumvents what we do today with `try...finally`, where throwing an exception in finally suppresses the potential error thrown in the body and you lose that information. It exists to model this relationship because there is no other way to model this relationship in the JavaScript language. In languages like Java, exceptions have a method that you can use to get to the suppressed error, because every error that is thrown must be an exception. That is not the case in JavaScript. This has a specific purpose: to model this relationship of the thing that is suppressing and the thing that is suppressed, which is why cause doesn’t really quite fit into that. It does in the sense that yes, you could say it is the cause, but it doesn’t have the same level of optionality. And so I brought that up in the proposal.
I wanted to leave the cause capability that you can add to errors as a separate thing, so it’s not confused – so users aren’t confused in the end. And that was my main goal for this direction. Also, SuppressedError kind of represents a linked-list-like representation of the error hierarchy, while AggregateError is a flat list of errors, and I chose to use the `error` identifier for the property as the singular of `errors` on AggregateError, for that purpose. So those were most of the motivations behind that when we introduced this. If I were to have an opinion on those three options you provided, I am certainly not in favor of removing the error property because, again, I think depending on cause could be confusing to users. I don’t think it makes sense for a SuppressedError, whose intent is to model this relationship between the error that suppresses and the error that was suppressed, to optionally have the error that is suppressing. I think that would again be confusing to users. On the option of throwing if you provide both, I don’t think there is any way to conceivably do that because `undefined` is a valid throwable exception in JavaScript. Therefore I don’t see a way to really prevent that from being the case. If I were to do anything, I would likely say that SuppressedError should not have the error cause.

-JHD: Just to echo my original point, like, despite my preferred outcome, as long as there’s only one – I think there’s a lot more user confusion from having potentially both `error` and `cause` on a `SuppressedError ` instance. So what you just said of essentially removing line 4 from the spec sounds fine to me.
+JHD: Just to echo my original point, like, despite my preferred outcome, as long as there’s only one – I think there’s a lot more user confusion from having potentially both `error` and `cause` on a `SuppressedError` instance. So what you just said of essentially removing line 4 from the spec sounds fine to me.
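As a concrete reference for the overlap JHD and RBN are discussing, here is a hypothetical userland model of the constructor (a sketch only – it assumes the proposal's `SuppressedError(error, suppressed, message, options)` signature, and the class name and behavior here are illustrative, not the spec's):

```javascript
// Hypothetical userland model of SuppressedError, for illustration only.
// Assumes the proposal's constructor signature:
//   new SuppressedError(error, suppressed, message, options)
class SuppressedErrorSketch extends Error {
  constructor(error, suppressed, message, options) {
    super(message, options); // `options` may carry a `cause`, per the error-cause proposal
    this.name = "SuppressedError";
    this.error = error;           // the error doing the suppressing (always installed)
    this.suppressed = suppressed; // the error that was suppressed (always installed)
  }
}

const bodyError = new Error("thrown in the body");
const disposeError = new Error("thrown during disposal");

// The overlap under discussion: `error` and an options-bag `cause` can coexist.
const se = new SuppressedErrorSketch(disposeError, bodyError, "disposal failed", {
  cause: disposeError,
});
console.log(se.error === se.cause); // true – two properties describing the same thing
```

This makes the redundancy visible: with `InstallErrorCause` kept, a single instance can carry both `error` and `cause` pointing at the same suppressing error.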
RBN: I will say, though, that cause exists mostly for use cases that I’m aware of. Users can provide a cause. Is there anything in the language today that actually installs cause other than if the user provides it in the constructor? Because I would say we are ascribing meaning to cause as also being an exception that was thrown, whereas cause could be something else. A user could create a SuppressedError from one error to another error that is thrown, and then the cause could be whatever they wanted it to be: the cause could be from the using declaration, or from my own try… finally. Yes, you could put whatever you want in `error` or `suppressed` as well. We don’t do error checking on those either. But I am just strongly opposed to trying to leverage cause and have this thing be optional. I just think it will be confusing.

@@ -464,7 +463,7 @@ RBN: We don’t generally define the message for exceptions within the specifica

EAO: I’m with JHD here. Fundamentally I think the difference between error and cause here is, from an end user point of view, pretty much insignificant. Conceptually we’re introducing the idea that one error could be caused by or suppressing another error. I don’t think from the end user point of view it really matters what the internal semantics of that are for a SuppressedError in particular, compared to the mental load that we would be putting on people to understand that in this particular case, even if you could consider this SuppressedError being caused by another error, we’re still calling it error rather than cause in the structure of it. That difference is just extra mental load which I don’t think we need.

-SYG: I just want to agree with WH and KG here. I don’t think it is extra mental load. When you’re suppressing the error, both the suppressing and suppressedError are kind of equal on equal footing in the causality chain. 
I think that’s the important part in how developers understand cause. I think the core disagreement is that the current use of the cause property is this lineal causality chain and the idea is that conceptually what is the conceptual reason of an error being created and thrown. There’s a different kind of cause which JHD has been arguing for that is just the mechanics of what has caused an error to be created and thrown. Mechanically you can say the suppressed – sorry, the suppressing error is the thing that causes – even that actually now that I say that I take back that line of think, like, I think it just comes down to I’m convinced of what Ron said suppressed error and suppressing error having equal footing. How do you – what do you think what is the error that whether mechanically or conceptually caused the suppressedError itself, the suppressedError wrapper that contains the suppressed and the suppressing and suppressed to be created and thrown. There is no no linearity here that we normally have with the cause chain. With that reason I agree with KG and WH mother and Ron.
+SYG: I just want to agree with WH and KG here. I don’t think it is extra mental load. When you’re suppressing the error, both the suppressing error and the suppressed error are on equal footing in the causality chain. I think that’s the important part in how developers understand cause. I think the core disagreement is that the current use of the cause property is a linear causality chain – conceptually, what is the reason an error was created and thrown. There’s a different kind of cause which JHD has been arguing for that is just the mechanics of what has caused an error to be created and thrown. 
Mechanically you can say the suppressed – sorry, the suppressing error is the thing that causes – actually, now that I say that, I take back that line of thinking. I think it just comes down to this: I’m convinced by what Ron said about the suppressed error and the suppressing error having equal footing. What is the error that, whether mechanically or conceptually, caused the SuppressedError itself – the wrapper that contains both the suppressing and the suppressed error – to be created and thrown? There is no linearity here like we normally have with the cause chain. For that reason I agree with KG and WH and Ron.

KG: JHD said the primary concern is having both the error property and the cause property, so a possible path forward is to simply prohibit the cause property here. This of course wouldn’t affect anything for errors generated by the language; the language does not install the cause property. It would only be a problem if the user is trying to construct a SuppressedError and giving it both the `error` and `suppressed` properties and then in addition passing a cause to the options bag. I think JHD makes a reasonable case – and indeed Ron as well – that that would be confusing: the error and suppressed properties do sort of logically represent what the SuppressedError is. There’s no additional cause that is relevant to the SuppressedError itself that isn’t inherently part of either the error property or the suppressed property. A possible path forward is to prohibit the cause property: do a has check for the cause property in the constructor options bag and I guess throw a different error in that case, I don’t know. Or just not look it up in the first place. But just not have the cause at all. Because it seems like it’s not a great fit for this kind of error.

@@ -492,7 +491,7 @@ RPR: So what we’re being asked for consensus on is re removing install error c

??: Yes.

-??: So you said you would be okay with that.
+??: So you said you would be okay with that.

WH: Yes.

@@ -506,7 +505,7 @@ KG: I explicitly support that.

RPR: We have heard from Kevin and Eemeli explicit support for removing error cause.

-??: With the logic at least for me that required parameters should come before option ones.
+??: With the logic, at least for me, that required parameters should come before optional ones.

JHD: To MM’s point I think this is a fundamentally different kind of error in that it’s a branch point and it doesn’t have a linear relationship in the way that others do. I would still prefer to go with this option.

@@ -562,7 +561,7 @@ MF::👍

### Conclusion/Resolution

-* Reached consensus to Merge PR 67 https://github.com/tc39/proposal-intl-locale-info/pull/67
+- Reached consensus to merge PR 67 https://github.com/tc39/proposal-intl-locale-info/pull/67

## Parallel async iterators via a tweak to iterator helpers

@@ -623,7 +622,7 @@ JHD: Please DM when you’ve done that and I’ll update the proposal statement.

### Conclusion/Resolution

-* In order to reconsider how to enable better parallelism, the committee reached consensus that async iterator helpers to be split out from iterator helpers proposal and demoted to stage 2; sync helpers remain stage 3
+- In order to reconsider how to enable better parallelism, the committee reached consensus that async iterator helpers be split out from the iterator helpers proposal and demoted to stage 2; sync helpers remain stage 3

## Temporal Stage 3 update and normative PRs

@@ -643,7 +642,7 @@ PFC: **(Slide 2)** This is essentially another progress update. the proposal is

PFC: **(Slide 3)** I mentioned a final push. What does that mean? I’ll talk a little bit about that. Our goals among the champions of this proposal are that we resolve all of the open discussions on existing issues.
We are aiming for having no remaining normative changes to make after the March plenary, and after that point, only consider issues that are really instances of the spec not working.

-PFC: **(Slide 4)** And then while we continue to resolve minor editorial points and get things into the shape that would be required for a PR into ECMA 262, this is a long-running proposal, so the spec text has some updated conventions in it, which we’d use that time to sort of quietly work on. So in the point leading up to this meeting, we had a number of very long champions meetings trying to resolve all of the known discussions on normative issues. I’m going to present the results -- most of those results today. We expect another two to three pull requests to present in the March plenary, and then after March, you know, barring the case where implementers report more issues, we plan to pause our work until the -- or pause our work on, like, making decisions for the proposal until we get to a point where it becomes feasible to ask for advancing the proposal to Stage 4.
+PFC: **(Slide 4)** And then while we continue to resolve minor editorial points and get things into the shape that would be required for a PR into ECMA-262, this is a long-running proposal, so the spec text has some updated conventions in it, which we’d use that time to sort of quietly work on. So in the period leading up to this meeting, we had a number of very long champions meetings trying to resolve all of the known discussions on normative issues. I’m going to present the results -- most of those results today. We expect another two to three pull requests to present in the March plenary, and then after March, you know, barring the case where implementers report more issues, we plan to pause our work until the -- or pause our work on, like, making decisions for the proposal until we get to a point where it becomes feasible to ask for advancing the proposal to Stage 4.
PFC: **(Slide 5)** So all that means that the finish line is in sight. Here is some nifty clip art. And given that, I’ll give a short overview of things that you can expect after this meeting.

@@ -774,7 +773,7 @@ RPR: There’s a bunch of things on the queue to start with, starting with PDL w

PDL: Sorry, I was just going to mention that `Calendar.from()` and `TimeZone.from()`, according to our PR, don’t just accept objects when they implement the full set of calendar protocol properties, but also when they are actual built-in Temporal objects. So you can pass in a ZonedDateTime, and it would take the calendar and/or time zone out of the internal slot, but only if it’s a Temporal object. I mention this because I think that slide was slightly unclear.

-PFC: Sorry, yes. That’s correct. And it’s not a change from the status quo. I guess the slide should say, "Objects _implementing the protocol_ are only accepted if they _fully_ implement the protocol."
+PFC: Sorry, yes. That’s correct. And it’s not a change from the status quo. I guess the slide should say, "Objects *implementing the protocol* are only accepted if they *fully* implement the protocol."

PDL: And no property bags or anything of that nature.

@@ -802,7 +801,7 @@ FYT: So a different topic. So there’s a PR [#2479](https://github.com/tc39/pro

PFC: Okay. So what’s your recommendation for this PR exactly?

-FYT: I just don’t have enough time to support this. I’m not opposed to it, I just need more time to look into that. Can we not merge it?
+FYT: I just don’t have enough time to support this. I’m not opposed to it, I just need more time to look into that. Can we not merge it? I don’t think it’s this one. Oh, yes, this one, sorry.

PFC: Okay, so specifically, you would ask that we not merge this one. Does that mean, could we ask for a consensus on this one after --

@@ -889,7 +888,7 @@ MM: Okay. Excellent. I think that -- that completely satisfies the concern.

PFC: All right. Thank you.

-RPR: We’re four minutes over time now.
I’m wondering if we can get through to the request for consensus. CDA, how -- sorry, are you willing to be quick? +RPR: We’re four minutes over time now. I’m wondering if we can get through to the request for consensus. CDA, how -- sorry, are you willing to be quick? CDA: I might have jumped the gun anyway because it sounds like it’s going to split the talk about the rest of the proposal versus this issue on naming. So that’s fine. @@ -899,7 +898,7 @@ CDA: Yeah, no, that’s fine. RPR: So are you happy that it does not affect the request for consensus now? -PFC: Is that a question for me or CDA? +PFC: Is that a question for me or CDA? RPR: That was a question for CDA. I was trying to find out if it affects consensus, but I guess we shall find out. @@ -914,7 +913,7 @@ instead of `timeZoneCode` and `calendarCode`? PFC: As far as I’m concerned, yes. If you have an objection about that one, then I’d like to split the naming concern out of the request for consensus on the rest of the change, since the actual change is much more than that. -RPR: Okay, to be clear to everyone, this request for consensus is the PRs including the proposed name for the `calendarId`. MM says no objection to consensus is based on objections. And no need to speak. DE Happy about the optimization. No need to speak. CDA is positive - IBM supports, and SYG was on the queue, but is no longer. Yeah. SYG is back. Plus support, and including the naming. And the other stuff. JHD says I do not agree with the ID naming, but everything else sounds good. Okay, JHD, just clarifying here, you are blocking that part? +RPR: Okay, to be clear to everyone, this request for consensus is the PRs including the proposed name for the `calendarId`. MM says no objection to consensus is based on objections. And no need to speak. DE Happy about the optimization. No need to speak. CDA is positive - IBM supports, and SYG was on the queue, but is no longer. Yeah. SYG is back. Plus support, and including the naming. 
And the other stuff. JHD says I do not agree with the ID naming, but everything else sounds good. Okay, JHD, just clarifying here, you are blocking that part? JHD: Yes, I am. All the changes that don’t involve that naming sound great to me. I appreciate the explanations, and that one I think needs further discussion. @@ -970,19 +969,19 @@ PFC: All right, thanks, everyone. ### Conclusion/Resolution -* Temporal is advancing towards a goal of being able to say that it no longer “requires implementer coordination”, with a goal from the champion group of March 2023. -* Consensus reached on the following changes: -* https://github.com/tc39/proposal-temporal/pull/2442 -* https://github.com/tc39/proposal-temporal/pull/2456 -* https://github.com/tc39/proposal-temporal/pull/2460 -* https://github.com/tc39/proposal-temporal/pull/2467 -* https://github.com/tc39/proposal-temporal/pull/2472 -* https://github.com/tc39/proposal-temporal/pull/2474 -* https://github.com/tc39/proposal-temporal/pull/2475 -* https://github.com/tc39/proposal-temporal/pull/2477 -* https://github.com/tc39/proposal-temporal/pull/2478 -* https://github.com/tc39/proposal-temporal/pull/2480 -* https://github.com/tc39/proposal-temporal/pull/2484 -* https://github.com/tc39/proposal-temporal/pull/2485 -* Consensus on https://github.com/tc39/proposal-temporal/pull/2482 except for the names of the `timeZoneId` and `calendarId` properties, which is to be discussed in an overflow item later this meeting. [Note: Consensus on the Id spelling was reached the next day.] -* TC39-TG2 will continue to investigate https://github.com/tc39/proposal-temporal/pull/2479; no concrete objections but not enough time to decide. +- Temporal is advancing towards a goal of being able to say that it no longer “requires implementer coordination”, with a goal from the champion group of March 2023. 
+- Consensus reached on the following changes: +- https://github.com/tc39/proposal-temporal/pull/2442 +- https://github.com/tc39/proposal-temporal/pull/2456 +- https://github.com/tc39/proposal-temporal/pull/2460 +- https://github.com/tc39/proposal-temporal/pull/2467 +- https://github.com/tc39/proposal-temporal/pull/2472 +- https://github.com/tc39/proposal-temporal/pull/2474 +- https://github.com/tc39/proposal-temporal/pull/2475 +- https://github.com/tc39/proposal-temporal/pull/2477 +- https://github.com/tc39/proposal-temporal/pull/2478 +- https://github.com/tc39/proposal-temporal/pull/2480 +- https://github.com/tc39/proposal-temporal/pull/2484 +- https://github.com/tc39/proposal-temporal/pull/2485 +- Consensus on https://github.com/tc39/proposal-temporal/pull/2482 except for the names of the `timeZoneId` and `calendarId` properties, which is to be discussed in an overflow item later this meeting. [Note: Consensus on the Id spelling was reached the next day.] +- TC39-TG2 will continue to investigate https://github.com/tc39/proposal-temporal/pull/2479; no concrete objections but not enough time to decide. diff --git a/meetings/2023-03/mar-21.md b/meetings/2023-03/mar-21.md index 3f08ba27..32049e2b 100644 --- a/meetings/2023-03/mar-21.md +++ b/meetings/2023-03/mar-21.md @@ -166,7 +166,7 @@ KG: Last and most important thing is that we are cutting ES2023. We are freezing RPR: Any other questions for Kevin? Okay, all right. thank you for that -#### Summary +### Summary A number of fixes and cleanups have been applied to the specification text. No further significant changes will be made before ES2023 is cut. We will be starting the IPR opt-out period now, and ask for approval next meeting. @@ -227,8 +227,7 @@ SFC: everyone, please get involved with pick my for it. Thank you. 
### Summary

-ES2023 cut is on track
-Please see the user preferences proposal, User Locale Preferences https://github.com/WICG/proposals/issues/78
+ES2023 cut is on track. Please see the user preferences proposal, User Locale Preferences: https://github.com/WICG/proposals/issues/78

### Conclusion

@@ -402,8 +401,8 @@ Consensus on the PR

Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/tc39/proposal-iterator-helpers)
-- (slides)[https://docs.google.com/presentation/d/1BjtOjv447KcXSsz2GdV-HBnhhUTToRMHuMQO6Zlosnw/]
-- (issue)[https://github.com/tc39/proposal-iterator-helpers/issues/270]
+- [slides](https://docs.google.com/presentation/d/1BjtOjv447KcXSsz2GdV-HBnhhUTToRMHuMQO6Zlosnw/)
+- [issue](https://github.com/tc39/proposal-iterator-helpers/issues/270)

MF: Okay, so we have had a request from the community to re-evaluate the naming. If you want to follow along, the issue is 270. As background, we have two methods called take and drop. Take takes an iterator and a number of elements and produces a new iterator that is exhausted after that number of nexts. Drop takes an iterator and a number of elements and nexts the underlying iterator that many times and then yields all of the remaining elements from the underlying iterator.

@@ -435,7 +434,7 @@ LCA: Oh no. So if it kind of depends on whether the iterator was for will weathe

KG: But in any case Rust prevents you from being confused.

-_break for lunch_
+_break for lunch._

LCA: Rust has `take` and `skip`, and they have `take_while` and `skip_while`.

@@ -481,7 +480,7 @@ MM: Yeah, I support not renaming.

DE: Anybody want to express concerns?

-_silence_
+_silence._

MF: Great.
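As a reference point for the `take`/`drop` naming discussion above, a simplified generator-based sketch of the two helpers' behavior (this is an illustrative model, not the proposal's actual Iterator Helpers machinery):

```javascript
// Simplified models of the take/drop semantics MF describes above:
// take yields at most n elements; drop skips the first n and yields the rest.
function* take(iterable, n) {
  let remaining = n;
  for (const value of iterable) {
    if (remaining-- <= 0) return; // exhausted after n nexts
    yield value;
  }
}

function* drop(iterable, n) {
  let remaining = n;
  for (const value of iterable) {
    if (remaining-- > 0) continue; // skip the first n elements
    yield value;
  }
}

console.log([...take([1, 2, 3, 4, 5], 2)]); // [1, 2]
console.log([...drop([1, 2, 3, 4, 5], 2)]); // [3, 4, 5]
```

The Rust comparison in the discussion maps onto the same shapes: Rust's `take`/`skip` behave like `take`/`drop` here.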

@@ -624,12 +623,7 @@ Topic to be continued on day 3 in an overflow session, discussing the nanosecond

### Conclusion

-All 5 PRs got consensus and will be merged
-https://github.com/tc39/proposal-temporal/pull/2522 - Change allowing Temporal.ZonedDateTime.prototype.toLocaleString to work while disallowing Temporal.ZonedDateTime objects passed to Intl.DateTimeFormat methods
-https://github.com/tc39/proposal-temporal/pull/2518 - Change to eliminate ambiguous situations where abstract operations such as MakeDay might return NaN
-https://github.com/tc39/proposal-temporal/pull/2500 - Change in the validation of property bags passed to calendar methods
-https://github.com/tc39/proposal-temporal/pull/2519 - Audit of user-observable lookups and calls of calendar methods, and elimination of redundant ones
-https://github.com/tc39/proposal-temporal/pull/2517 - Bug fix for Duration rounding calculation with largestUnit
+All 5 PRs got consensus and will be merged: https://github.com/tc39/proposal-temporal/pull/2522 - Change allowing Temporal.ZonedDateTime.prototype.toLocaleString to work while disallowing Temporal.ZonedDateTime objects passed to Intl.DateTimeFormat methods; https://github.com/tc39/proposal-temporal/pull/2518 - Change to eliminate ambiguous situations where abstract operations such as MakeDay might return NaN; https://github.com/tc39/proposal-temporal/pull/2500 - Change in the validation of property bags passed to calendar methods; https://github.com/tc39/proposal-temporal/pull/2519 - Audit of user-observable lookups and calls of calendar methods, and elimination of redundant ones; https://github.com/tc39/proposal-temporal/pull/2517 - Bug fix for Duration rounding calculation with largestUnit

## Set methods: What to do about `intersection` order?

diff --git a/meetings/2023-03/mar-22.md b/meetings/2023-03/mar-22.md
index 4b68e56e..a802fd11 100644
--- a/meetings/2023-03/mar-22.md
+++ b/meetings/2023-03/mar-22.md
@@ -352,8 +352,7 @@ SYG: Confidently the time line I think is three releases. Let me pull up the sch

MLS: Good information to have. I don’t think the Committee can ask you to do that, but I think it would be good information to have.

-SYG: I am volunteering to do this because it is clear to me from this conversation that the Committee would like to have only `with`, despite my and folks like justin's personal preference. If that’s the ideal end state, I want to see how likely we can – how likely it is we can get there, but I want to go into it with
-eyes open that we might not be able to get there because it’s already shipped.
+SYG: I am volunteering to do this because it is clear to me from this conversation that the Committee would like to have only `with`, despite my and folks like Justin's personal preference. If that’s the ideal end state, I want to see how likely we can – how likely it is we can get there, but I want to go into it with eyes open that we might not be able to get there because it’s already shipped.

YSV: So as next steps for this part, SYG, you’re volunteering to add a counter to see what the current web usage is. Yes, I think that’s a good conclusion for that for now.

@@ -384,8 +383,7 @@ YSV: I will move us along. Because I believe that the champion still wants to ge

JHD: Yeah, I will be brief. I support stage 3, I have a non-blocking preference that we omit `assert` from the spec, I’m fine if we come up with a stronger category and even better if it indicates that this section will be removed in a future version of the spec if possible. Because then it’s clear that once it’s unshipped from everywhere, if it can be, we would hopefully be able to delete it from the spec entirely. If we made that clear in the document, that would be a nice thing to have.

-YSV: Thank you.
So NRO, I want to give the floor back to you with a quick summary. There have been a number of expressions of support for stage 3 from various parties. There has been a con
-cern – Michael correct me if I'm wrong, this is a non-blocking concern with regards to shipping both assert and with due to the fact that we have – we don’t have assert currently in the ECMA-262 spec and preferably we wouldn't have it. Chrome offered to include a usage counter to see what the burden would be to do a transition there, however, expressed doubt that it would be not possible to ship both. There have been comment that is people would be okay with shipping both, although shipping with alone would be preferable, is that a correct summary of what we’ve had so far?
+YSV: Thank you. So NRO, I want to give the floor back to you with a quick summary. There have been a number of expressions of support for stage 3 from various parties. There has been a concern – Michael, correct me if I'm wrong, this is a non-blocking concern – with regards to shipping both assert and with, due to the fact that we don’t have assert currently in the ECMA-262 spec and preferably we wouldn't have it. Chrome offered to include a usage counter to see what the burden would be to do a transition there, however, expressed doubt that it would be possible to not ship both. There have been comments that people would be okay with shipping both, although shipping with alone would be preferable. Is that a correct summary of what we’ve had so far?

MLS: Yeah, it would be my – as JHD said earlier, we would not include assert in the spec, but that other documentation by implementations would be used for that. I’m not going to block on that. I do appreciate NRO wanting to use something like Deprecated.

@@ -426,8 +424,7 @@ RPR: Thank you for staying longer than originally anticipated Yulia. That was ve

Import attributes are the path forward for the standard, having re-achieved Stage 3.
The keyword is `with`
-As previously, there is an options bag following it
-The options can form part of the interpretation of the module and "cache key"
+As previously, there is an options bag following it. The options can form part of the interpretation of the module and "cache key".

Unknown attributes in the import statement cause an error. Although a couple delegates would prefer sticking with the keyword `assert`, the majority preferred switching to the long-term optimal solution of being more semantically well-aligned using `with`. Significant debate focused around how to communicate the deprecation.

@@ -438,8 +435,7 @@ Significant debate focused around how to communicate the deprecation.

JS environments which currently ship `assert` are _not_ encouraged to remove it, but environments which do not yet ship `assert` are discouraged from shipping it. Chrome will gather data on usage of `assert` on the web, which can inform the deprecation path. Conditional consensus for Stage 3 on this proposal, with the conditions:
-Reviews are still needed from the reviewers who volunteered – JRL and JHD, as well as the editors
-The wording for normative optional+legacy needs to be updated to something stronger, probably "deprecated", and explaining the goal to remove it from the specification.
+Reviews are still needed from the reviewers who volunteered – JRL and JHD, as well as the editors. The wording for normative optional+legacy needs to be updated to something stronger, probably "deprecated", and explaining the goal to remove it from the specification.

## Async Explicit Resource Management

@@ -690,9 +686,7 @@ DE: There were arguments on both sides. On one side there is a footgun.
On the o

### Conclusion

-Consensus for stage 2
-Plan to iterate during stage 2 on floating point restriction
-WH & JHD to review
+Consensus for stage 2. Plan to iterate during stage 2 on floating point restriction. WH & JHD to review.

## Float16Array for Stage 2 & 3

@@ -810,10 +804,7 @@ CDA: Can you stop the screen share and somebody can kindly pull up the notes to

### Speaker's Summary of Key Points

-Implementations were not comfortable with stage 3 because they need time to determine implementability
-interest in ‘bfloat16’ to be explored
-interest in wasm interop to be explored
-should include a rounding method
+Implementations were not comfortable with stage 3 because they need time to determine implementability. Interest in ‘bfloat16’ to be explored. Interest in wasm interop to be explored. The proposal should include a rounding method.

### Conclusion

@@ -932,8 +923,7 @@ So one direction that could be pursued here is to withdraw the proposal; another

CDA: We’ve got KG in the queue.

-KG: Yes. I very strongly support having regex escape method. It’s gotten trickier with v-mode regexes, because v-mode introduces a handful of punctuators that need to be escaped. And u mode does not allow unescaped characters – so we would need to modify those so that they allow those escaped characters as identity escapes. But with that change, it is
-perfectly possible to have a regex.escape that escapes a thing in such a way it can be used in any context within a regex except in the repetition context. Of course it will mean different things in different contexts, like things will be escaped properly. And, like, that is a thing that people have wanted forever and we have been telling people for a long time, we’ll work on it, and we can’t just continue not doing it and saying we will work on it. It is possible for us to say we are never going to do this and I would not be in favour of that, but that would be a better outcome than the current state where we say we’re going to keep working on it.
Because if we say we’re never going to do it, then node is just going to ship the thing that everyone wants and probably browsers will as well and everyone will have the thing that everyone wants, it’s just that we won’t have specified it. That’s silly. We should just do the thing people want. We got to deal with the extra complexity from V mode, but we should just do the thing people want +KG: Yes. I very strongly support having regex escape method. It’s gotten trickier with v-mode regexes, because v-mode introduces a handful of punctuators that need to be escaped. And u mode does not allow unescaped characters – so we would need to modify those so that they allow those escaped characters as identity escapes. But with that change, it is perfectly possible to have a regex.escape that escapes a thing in such a way it can be used in any context within a regex except in the repetition context. Of course it will mean different things in different contexts, like things will be escaped properly. And, like, that is a thing that people have wanted forever and we have been telling people for a long time, we’ll work on it, and we can’t just continue not doing it and saying we will work on it. It is possible for us to say we are never going to do this and I would not be in favour of that, but that would be a better outcome than the current state where we say we’re going to keep working on it. Because if we say we’re never going to do it, then node is just going to ship the thing that everyone wants and probably browsers will as well and everyone will have the thing that everyone wants, it’s just that we won’t have specified it. That’s silly. We should just do the thing people want. We got to deal with the extra complexity from V mode, but we should just do the thing people want JHD: My reply is just what he said, the function form will be shipped if we don’t decide to do something because that’s what everyone wants. 
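For context on the escape function under discussion, here is a minimal user-land sketch of the pattern (the name `escapeRegExp` and the exact escape set are assumptions; the eventual standardized method, and its handling of v-mode punctuators, may differ):

```javascript
// Hypothetical user-land escape function: backslash-escape every regex
// syntax character so the input matches literally inside a pattern.
function escapeRegExp(text) {
  return text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

const userInput = "price: $5 (approx.)";
const re = new RegExp(escapeRegExp(userInput), "u");
console.log(re.test("the price: $5 (approx.) today")); // true
```

Escaped syntax characters are already valid identity escapes in u-mode, which is why the `u` flag works in this sketch; KG's point is that v-mode adds punctuators that would need the same treatment.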
@@ -1027,10 +1017,7 @@ JHD: Bearing in mind that node at least - that the only reason they haven’t sh ### Speaker's Summary of Key Points -MM is concerned about composing embedded languages by string mashing -MM expressed an opinion that the tagged template form is the superior solution -Everyone else who expressed an opinion is happy with the escape function -KG agrees with the string mashing concern in general but thinks that in this case in particular can be made fully safe +MM is concerned about composing embedded languages by string mashing MM expressed an opinion that the tagged template form is the superior solution Everyone else who expressed an opinion is happy with the escape function KG agrees with the string mashing concern in general but thinks that in this case in particular can be made fully safe ### Conclusion diff --git a/meetings/2023-03/mar-23.md b/meetings/2023-03/mar-23.md index 32c02a89..225379e2 100644 --- a/meetings/2023-03/mar-23.md +++ b/meetings/2023-03/mar-23.md @@ -103,8 +103,7 @@ USA: Thank you. Next up we have MM. One quick reminder we are about 7 minutes to MM: Okay. First of all, my compliments on how much careful engineering and the quality of the explanation. I appreciate all of that. I have a question about the relative contribution to performance of two different aspects of what you’re doing. And then I’ll also explain before you answer, I would also like to explain why I’m asking that particular question. So the question itself, when you do the message passing, if the structs that you’re passing were transitively immutable, but you were still able to share them between threads by pointer passing, I take it that would be compatible with the experiment that you actually did that you showed the results for. And obviously the full proposal has the read/write structs and the ability to do fine sharing and locking. 
The reason I’m asking is that all of the impacts on the programmer model and the contagion of needing to deal with concurrency through the ecosystem as people release libraries all comes from the need to coordinate a shared memory multithreading by locking that the spread of locking disciplines into user code. If all you are doing is passing transitively immutable structures through message passing between threads, that would not affect the user model at all. -_transcription service interrupted_ -_switching to bot_ +_transcription service interrupted_ _switching to bot_ SYG: Doesn't mean that we're not going to make these structs available. it probably means that by default if you don't opt into it, your server does not into it. You get these things as immutable at construction time and they can still be zero copy, message passed. You can't do the full escape hatch. fine-grain locking. but you can still do the zero copy message passing and they still can be shared, but because you don't have cross-origin isolation. you just cannot have mutations, okay? @@ -323,10 +322,8 @@ Alternatively, we could take the approach that most of the other – the rest of A knit time locking, line 11, a context value of 1. Any time this generator executes the generator holds the value of 1 in the async context global storage state, all the reasons are 1. I said, I am asserting that line 3 and 5 must always be equal and obviously that is the case here. And the – it it happens that between line 5 and 7, they are also equivalent. The generator itself holds on to its creation context. -The other logically consistent answer here is that it is the call time of the call to the next that propagates. It doesn’t matter there’s a value of 1 during the construction, but what the value is on line 14 and 17. Line 3 and 5 are equivalent. No matter what choice we make here, these two lines have to be the same. So in this case, it captures the value at the call to the next on line 14. 
The two value is there on line 3 and stored on line 5. The yield escapes the current execution context. It goes back to line 17. On line 17, I reinvoke this generator with a new context value and that value could be seen when that generator resumes execution. This is still logically consistent for-await. And this is the choice we want to make for generaller rarities because we don’t have to do any extra work. This already is the became the spec is implemented. Also this affects synchronous generators. Whatever choice we made for an asynchronous generator across the yield boundary, it’s the same answer for a synchronous generator. We could go with the init-time-lock value of 1. Or continue with the current call time whatever you called that next. So you get a value of 2 and a value of 3. This is still consistent with the async generators work. But we have to be mindful of this. Additionally, we have other things that look like generators but are iterators. For instance, an array iterator. If I have a generator, implement that iterator as a generator, then we have a knit time construction or hall time construction. With the way – do choose a knit time construction, we don’t have to answer, what happened for all the iterators that exist in the specification in -If we chose a init, does this get property see a value of 1? Which means I need to add a context. If we chose call time semantics, nothing needs to change. There is no change in the larger semantics of the language. Whatever the context value is, the next time, is the context that the generator stays in the body. -It is the first question question -Let’s go back to the queue. +The other logically consistent answer here is that it is the call time of the call to the next that propagates. It doesn’t matter there’s a value of 1 during the construction, but what the value is on line 14 and 17. Line 3 and 5 are equivalent. No matter what choice we make here, these two lines have to be the same. 
So in this case, it captures the value at the call to `next` on line 14. The value of 2 is there on line 3 and stored on line 5. The `yield` escapes the current execution context. It goes back to line 17. On line 17, I reinvoke this generator with a new context value and that value could be seen when that generator resumes execution. This is still logically consistent with for-await. And this is the choice we want to make for generators because we don’t have to do any extra work. This is already the way the spec is implemented. Also this affects synchronous generators. Whatever choice we make for an asynchronous generator across the yield boundary, it’s the same answer for a synchronous generator. We could go with the init-time-locked value of 1. Or continue with the current call time, whatever it was when you called that `next`. So you get a value of 2 and a value of 3. This is still consistent with how the async generators work. But we have to be mindful of this. Additionally, we have other things that look like generators but are iterators. For instance, an array iterator. If I implement that iterator as a generator, then we have init-time construction or call-time construction. If we do choose init-time construction, we have to answer what happens for all the iterators that exist in the specification: if we chose init, does this `get` property see a value of 1? Which means I need to add a context. If we chose call time semantics, nothing needs to change. There is no change in the larger semantics of the language. Whatever the context value is at the time of the `next` call is the context that the generator sees in the body.
That is the first question. Let’s go back to the queue.

JRL: My opinion is that we should use call time semantics, I would like that. If we have a strong preference for a knit time construction, then this proposal is going to get very, very large.
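The call-time semantics under discussion can be illustrated with a synchronous, user-land sketch; `ContextVar` here is a hypothetical stand-in that only models synchronous `run` nesting, not the real proposal's async propagation:

```javascript
// Hypothetical context variable: run(value, fn) makes `value` the current
// context for the duration of fn; get() reads the current context.
class ContextVar {
  #stack = [undefined];
  run(value, fn) {
    this.#stack.push(value);
    try { return fn(); } finally { this.#stack.pop(); }
  }
  get() { return this.#stack[this.#stack.length - 1]; }
}

const ctx = new ContextVar();

function* gen() {
  const a = ctx.get(); // observed at the first .next() call
  yield;
  const b = ctx.get(); // observed at the second .next() call
  return [a, b];
}

const it = ctx.run(1, () => gen());            // "init time" value 1 is not captured
ctx.run(2, () => it.next());                   // body starts running under 2
const { value } = ctx.run(3, () => it.next()); // body resumes under 3
console.log(value); // [2, 3] — call-time semantics
```

Under init-time semantics the result would instead be `[1, 1]`, since the generator would capture the context active when `gen()` was called.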
@@ -344,8 +341,7 @@ JRL: Let’s pause this and go to the issue – remind you that the issue is num The next open question that we have is around async – I’m sorry. Unhandled rejection. I have 10 minutes left. Essentially, unhandledRejection – this is the only web platform we specify because we defined the – we need to answer what is the context captured when async context – when it unhandled rejection to revoked. Line 6, a get store inside unhandled rejection listener. What is supposed to be get? We have a get stored in that unhandled rejection event listener and see some value. It can either be the time that it happens. I think that’s a silly answer because for reason in a moment, it could be the throwing context. The thing that wraps whenever the throw happens. That’s a bad answer, because we could talk about it. There is the call time, when the rejection happens. Line 14. When the rejection actually happens, which one of these three answers essentially do you think is the one that is propagated? There’s more clarification that needs to happen here. For the majority of cases, it doesn’t matter what our choice is. Unless we chose the registration time of the event listener, which is an awful choice. If we choice one of the other options, we have essentially consistent, pretty logically consistent answer. The ABC context here will be propagated to the rejection, unhandled rejection for all the promises no matter what happens. You can go through a series of call chains, async stack, async function that calls another function, that calls another function that eventually has an unhandled rejection promises. All see ABC because of the way the semantics of the proposal works. We don’t have any issues. But this really comes up when we have this specific set up. -We have a promise, the promise rejection function escapes the context, the context.run. On line 9 through 14 here, I create a variable called reject. I then invoke a context.run and get that out of the context.run. 
The rejection has leaked out of my context. That’s an important point to make there
-What is the context at the time that this rejection happens on line 16 here? What is the context that happens here? At the moment, the answer is call. It’s whatever the reject function itself is invoked ask that is a side effect of the way that the then is the thing that captures the context, not the promise.
+We have a promise, the promise rejection function escapes the context, the context.run. On line 9 through 14 here, I create a variable called reject. I then invoke a context.run and get that out of the context.run. The rejection has leaked out of my context. That’s an important point to make there. What is the context at the time that this rejection happens on line 16 here? What is the context that happens here? At the moment, the answer is call time. It’s whatever context the reject function itself is invoked in, and that is a side effect of the way that the `then` is the thing that captures the context, not the promise.
If we want to make this – change this so that it is the init context, then I have to go through and make a larger set of changes to that the promise allocation stores the context and not the then handlers. This is the next hairy question we got. And I will jump back to the queue

MM: Yeah. So my sense is that unhandled rejection handlers are there for diagnostic purposes, which is the reason why errors capture stacks is all for diagnostic purposes. You can have a rejected promise that is not rejected with an error, but with that program, it doesn’t capture a stack . . . so my suggestion is that the unhandled rejection handler, bound to default, and that if the handler wants to extract the dynamic context associated with the reason why the promise was rejected, that it can do that by using the error option

@@ -415,11 +411,9 @@ JRL: Yes. I agree it should not happen.
I hope it can happen because that’s th
### Speaker's Summary of Key Points

-Reaches Stage 2
-Future presentations (and edits of the proposal README) will need to elaborate on the use cases, as the committee does not understand these beyond logging.
+Reaches Stage 2. Future presentations (and edits of the proposal README) will need to elaborate on the use cases, as the committee does not understand these beyond logging.
Open questions will be discussed on repo threads and in regular calls which to be advertised to the committee.
-Need to investigate ecosystem integration
-Also to investigate the implications of having automatic `context.wrap` capture of functions as suggested by JHD
+Need to investigate ecosystem integration. Also to investigate the implications of having automatic `context.wrap` capture of functions as suggested by JHD.

### Conclusion

@@ -435,8 +429,7 @@ Presenter: Peter Klecha (PKA)

PKA: Okay. Hello, everybody. My name is Peter, I am a new delegate with Bloomberg. And I am here to give a brief presentation on Promise.withResolvers for Stage 1. The idea is hopefully familiar to – probably familiar to a lot of us. The plain Promise constructor works well for use cases. We pass in an executor. It takes the resolve and rejects arguments. Inside the body, we are meant to decide how the Promise gets resolved or rejected bypassing it into some async API, like, in this case. That works well for most use cases, but sometimes developers want to create a promise before deciding how or when to call its resolvers.
-So when that situation arises, we have to do the dance of scooping out the resolve and reject from the body, binding to globals and going on their way, like in this case
-This is a really common – I don’t want to oversell, it’s not everyday that you write this, but fairly common. It gets re – this a wheel that is reinvented all over the place. Utility function in the TypeScript. In Deno as deferred.
It appears in all kinds of popular libraries +So when that situation arises, we have to do the dance of scooping out the resolve and reject from the body, binding to globals and going on their way, like in this case This is a really common – I don’t want to oversell, it’s not everyday that you write this, but fairly common. It gets re – this a wheel that is reinvented all over the place. Utility function in the TypeScript. In Deno as deferred. It appears in all kinds of popular libraries PKA: The proposal is very simple: a constructer that does away with the need for users for – developers to write this by simply returning a premise together with its resolve and reject functions on a plain object. This idea has been in Chrome before. It used to be in Promise.defer. Many people know it under that name. The name is bikeshedded in future stages. But it’s clear that there’s a need, or a desire for this functionality. And it would just a nice thing for developers to be able to have an easy way to access this functionality. @@ -488,10 +481,7 @@ NRO: +1 for stage 1 ### Speaker's Summary of Key Points -General support -This was only omitted for minimalism in ES6 -Name to be bikeshedded, "defer" has the problem that `es6-shim` deletes it -Symbol.species to be discussed +General support This was only omitted for minimalism in ES6 Name to be bikeshedded, "defer" has the problem that `es6-shim` deletes it Symbol.species to be discussed ### Conclusion @@ -563,7 +553,7 @@ CDA: Okay. Last on the queue is + 1 on preferring nanoseconds from DLM. PFC: Yeah. Thanks for the input, everyone. -#### Summary +### Summary The committee weighed pros and cons of nanoseconds vs microseconds, concluding to stick with nanoseconds as the granularity of all Temporal time/instant types, to enable interchange with other systems. @@ -613,8 +603,7 @@ However, in Temporal, ZonedDateTime has the identifier in its `toString` output. 
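Returning to the Promise.withResolvers discussion above: the "scooping out" dance PKA describes is usually packaged as a small deferred helper, roughly the shape the proposal would standardize (a sketch, not the specified API; the name is subject to bikeshedding per the discussion):

```javascript
// Hypothetical user-land equivalent of the proposed Promise.withResolvers:
// create a promise and expose its resolve/reject on a plain object.
function withResolvers() {
  let resolve, reject;
  const promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}

// The "deferred" usage: create the promise first, settle it later.
const deferred = withResolvers();
setTimeout(() => deferred.resolve("done"), 10);
deferred.promise.then((v) => console.log(v)); // logs "done"
```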
So that was essentially the impetus for me doing this proposal this problem will get worse than it is and there’s a fair element of evidence that it’s already pretty bad. JGT: So let’s talk about proposed solutions to these problems. There’s two groups of these solutions. The first group makes small changes to the spec text, to reduce the divergence between implementations and the spec and implementations and the spec. Tighten up the spec to prevent future divergence and work to see if we can converge on a consistent approach for now. -And then a second part of this, to complement the changes above is to make small API changes that make it less disruptive when this changes in the first place. This will really help for the inevitable next time we have a Kiev to Kyiv change -We split these solutions up into 6 steps under the idea that some might get blocked or technical issues. We didn’t want the perfect to be the enemy of the good. We can move forward on the rest. +And then a second part of this, to complement the changes above is to make small API changes that make it less disruptive when this changes in the first place. This will really help for the inevitable next time we have a Kiev to Kyiv change We split these solutions up into 6 steps under the idea that some might get blocked or technical issues. We didn’t want the perfect to be the enemy of the good. We can move forward on the rest. The first category is the spec changes. To simplify the abstract operations that deal with time zone identifiers. His makes every other change easier. Following this is to clarify the spec to divergence from getting worse. Not make a normative change yet, but to say, if you are doing this in the future, please don’t do this because it’s stupid. Please head in this way. And then work with the implementers of V8 and WebKit to see and their upstream dependencies, like ICU to see get a solution to the out of date identifiers. These are for current code from Temporal. 
These are the biggest source of problems today. There’s only 13 of them. Which is really good news. Which means the absolute worst case, there’s a hard coded mapping table to get updated once every year. Ideally we want a single upstream place where this happens. We’re talking about whether CLDR can be at that place. If nothing works, the worst case is not bad. Because the rate of change is low. If we can get through 1, 2, and 3 we will be in a good place to add normative spec text that prevent the problems in the future as the limitations move forward, they will not diverge. @@ -740,16 +729,10 @@ As far as potential semantics, the goal is to align with the evaluate order and In the case of method parameters, the decorators would be provided before the decorators method. Constructor would be before the class itself. And parameters are applied independently. As with decorators on individual declarations, it would be in reverse order. It means that if we were to step through the evaluation order of decorators here, we could start by looking at the first parameter in the parameter list, meaning that the first decorator would be applying is the one closest to the parameter declaration. When that is then applied, we then move on to applying the one that is slightly further back. The earlier in document order in this case. To match the same reverse order, the – that we see with decorators today, once we finish with parameter 1, we might move on to parameter 2. Which means starting with the decorator closest to the parameter declaration. And then moving on to the one that is next. Or that is previous to it in document order. -And then obviously moving on to the ones of the method, which occur in that method order. This order is important because of the fact that when we are applying the decorators, they are applied to the – initial method declaration itself before any other method decorators could potentially replace it, and no longer be valid . . . 
it wouldn’t match the parameters that are here -The next question is whether or not – what kind of things might you be able to do. TypeScript has limitations with its parameter decorators, which are not terrible limitations. You can still do interesting things as I have shown with the decorator constructor parameter injection and even with rep parameters. That could be achieved today with something like the access to context metadata. Access to at least the ordinary index of the parameter. That would essentially allow you to emulate what the parameter decorates in the leg say are able to do. There’s other things that are interesting in here that we could potentially do -So I will – this is an overview but go into detail of what these are in the upcoming slides -Like any decorator, it has the same API. Accept a target and a context. Parameters, like fields, don't have a representation. Not one that exists at the time of the declaration. Thus, the expectation would be that parameter decorators like fields received undefined as the target because there’s nothing to wrap or replace here. +And then obviously moving on to the ones of the method, which occur in that method order. This order is important because of the fact that when we are applying the decorators, they are applied to the – initial method declaration itself before any other method decorators could potentially replace it, and no longer be valid . . . it wouldn’t match the parameters that are here The next question is whether or not – what kind of things might you be able to do. TypeScript has limitations with its parameter decorators, which are not terrible limitations. You can still do interesting things as I have shown with the decorator constructor parameter injection and even with rep parameters. That could be achieved today with something like the access to context metadata. Access to at least the ordinary index of the parameter. 
That would essentially allow you to emulate what the legacy parameter decorators are able to do. There’s other things that are interesting in here that we could potentially do. So I will – this is an overview, but I will go into detail of what these are in the upcoming slides. Like any decorator, it has the same API. Accept a target and a context. Parameters, like fields, don't have a representation. Not one that exists at the time of the declaration. Thus, the expectation would be that parameter decorators, like fields, receive undefined as the target because there’s nothing to wrap or replace here.
As far as the second parameter context, we would again need some type of way to differentiate from others. We are using “parameter” as the name here.
-At the least, we expect an ordinal index because that’s the only thing guaranteed at the time that it’s applied. You can’t get to a parameter’s name, in type text decorators because you only have access to function Proto-type string 2 to see that. Names are optional because binding patterns don’t have names to refer to. But having a name, even if it is optional, is useful in these cases. With parameter binding for HTML routes is that not having to repeat the name of the parameter is extremely useful, as with the example to database fields in ORMs
-One thing that TypeScript doesn’t have, and you can’t do today with the parameter decorators is annotate rest limitations . . . to know – it’s useful for cases where there’s an array, but do you know if it’s an array as multiple or single arrays.
-Another possibility we could have something like add initializing, adding static and extra that apply to the class, but not the function body.
It’s important because these are all things that are declared and aligned with all decorators are defined with methods and fields and declarations themselves -Another thing that is important and ties into the decorator metadata proposal is that one of the two key things that parameter decorators are useful for is associating metadata about that parameter. This is necessary for the DI constructor parameter injection case. It’s extremely – it’s necessary for most FFI cases. It’s extremely necessary I for HTTP route parameter binding -Another thing this is looking into the ability to look at the function that the parameter is on. When you look at something like a field or method, you are attached to the class as well. You get context whether the method is static or nonstatic, AKA, instant or Proto-type. This is important when defining metadata on the objects, you need to differentiate what things you are describing. If you have two decorators on two fields or on two parameters, you need to be able to and want to differentiate, then you need to create an object graph within the piece of metadata to differentiate between what field was this attached to, what method was this attached to and in the case of the parameters, what was the parameter that this was attached to at the time. +At the least, we expect an ordinal index because that’s the only thing guaranteed at the time that it’s applied. You can’t get to a parameter’s name, in type text decorators because you only have access to function Proto-type string 2 to see that. Names are optional because binding patterns don’t have names to refer to. But having a name, even if it is optional, is useful in these cases. With parameter binding for HTML routes is that not having to repeat the name of the parameter is extremely useful, as with the example to database fields in ORMs One thing that TypeScript doesn’t have, and you can’t do today with the parameter decorators is annotate rest limitations . . . 
to know – it’s useful for cases where there’s an array, but do you know if it’s an array as multiple or single arrays.
+Another possibility: we could have something like `addInitializer`, and static and extra capabilities that apply to the class, but not the function body. It’s important because these are all things that are declared and aligned with how all decorators are defined with methods and fields and declarations themselves. Another thing that is important and ties into the decorator metadata proposal is that one of the two key things that parameter decorators are useful for is associating metadata about that parameter. This is necessary for the DI constructor parameter injection case. It’s extremely – it’s necessary for most FFI cases. It’s extremely necessary for HTTP route parameter binding. Another thing this is looking at is the ability to look at the function that the parameter is on. When you look at something like a field or method, you are attached to the class as well. You get context whether the method is static or non-static, AKA instance or prototype. This is important when defining metadata on the objects: you need to differentiate what things you are describing. If you have two decorators on two fields or on two parameters, and you need to be able to and want to differentiate, then you need to create an object graph within the piece of metadata to differentiate between what field was this attached to, what method was this attached to, and in the case of the parameters, what was the parameter that this was attached to at the time.
Here, we have chosen to use the name function as opposed to method or something else to maintain a consistent API. And allow us in the future support parameter decorators on function decorators. One thing that is really interesting with this design and with the design of decorators in Stage 3 is that there is this potential capability that we don’t really have or didn’t build into the legacy decorator support in TypeScript.
These are very limited today. They can only be used to collect metadata. they are only observational. It’s not designed to return a function that replaces the function that was attached to. We didn’t want a parameter to have the type of capability. It was too complex and cause problems with decorators that are replied by other parameters. @@ -775,8 +758,7 @@ As I said, everything that I am doing in here is with a forward-thought to what KG: I understand the reasons that function declaration – well, I understand some of the reasons at least that function declarations don’t have parameters. But – sorry. Function declarations don’t have decorators in part because of the hoisting complexity. There’s another reason, which is that they are less obviously a good idea. However none of that is relevant to the fact that most of these use cases do not – are not particularly about classes. While I get that you want to restrict the scope to narrow thing and advance it and later add function parameters, that assumes we are definitely doing function parameter decorators. And I think that is far from a foregone conclusion. I should say this is not a stage 1 blocker. But I am not okay with advancing to stage 2 with only class methods having parameter decorators. I just really don’t think we should be in that state. If we’re solving the problems that you are laying out, we are not solving them just for class methods; we are solving for functions in general. We can’t just do 50% of parameter decorators. We just can’t. -RBN: My only again – comment to that is that I think we were in the same boat with decorators. And I think function decorators are valuable. My first experience with TC39, I was invited by Luke what was the PM and he brought me to present the decorators proposal and this was back in like I think late 2012, early 2013. I would have to look at the email thread for that -And these were all things we considered. 
We went threw years of discussions with the angular team and part of that design was function decorators. It’s been hoisting and issues around hoisting and potentially factors getting in the way if you add a decorator. That stymied the entire proposal for a while.
+RBN: My only again – comment to that is that I think we were in the same boat with decorators. And I think function decorators are valuable. My first experience with TC39, I was invited by Luke, who was the PM, and he brought me to present the decorators proposal, and this was back in like I think late 2012, early 2013. I would have to look at the email thread for that. And these were all things we considered. We went through years of discussions with the angular team and part of that design was function decorators. It was hoisting and issues around hoisting and potentially factors getting in the way if you add a decorator that stymied the entire proposal for a while.

I will get back to it here . . . I plan to take all of these things into account with design. The design for this should – and we end up adopting function decorators, whatever that takes, this should work with that and I wouldn’t see again, I agree I don’t see this is advances to stage 2 if we make it so that these would never work with function decorators. I do however want to avoid these same type of issue where this just can’t advance because we can’t figure out function decorators. There’s too much values in the capabilities and that’s shown in the ecosystem that these are worth having, even if it’s limited space. We need to design to support this, but I would be concerned about blocking this purely on the we haven’t figured out function decorators yet so we shouldn’t take this.

@@ -836,8 +818,7 @@ SYG: Stating the goals and the problem is different than what the title is.

JHD: Sure. Forget the title. What is the problem statement?
-RBN: Essentially, there were two: one is that we are trying to – like to enable some more flexible metaprogramming capabilities at that allow the motivations I listed. Request for routing . . . these are hard to do today. I think I showed in the example of FFIs, that’s the current FFI APIs, the eyeballing and disconnect. These are hard to do especially with class methods, which have the same methods with method decorators. We don’t have that ability during definition time to kind of inject and intercept and make these changes and do this type of recording -These are things that are really hard to do right or become more complicated because of that eyeballing that you have to do. I mean, you could emulate parameter decorators using normal method decorates, which is how they worked forever, it’s a wrapper around what the parameter decorator had beenings like. But you have to eyeball what is the index. If I refactor and move a parameter around, I am having to figure outer what is the index has changed to, this is complicated if you want these benefits. So we like to make this a lot easier. So this is a feature to make these capabilities easier. The other problem statement is, we have a large community of the TypeScript that we like to migrate to native decorators and hoping . . . and not to rewrite the code to switch to the native code. We understand that there’s a likelihood that if this does make it past stage 1 or make it to stage 1 and beyond, there might be changes that result in limitations. The same thing happened with – kind of happened with field decorators and the same – and the need to have like the accessory key word and the limitation now that we don’t have the ability to have the paired or tangled get set. We are shown there is a broad community involved – or interested in this, used this and like to bring that capability here. 
+RBN: Essentially, there were two: one is that we are trying to enable some more flexible metaprogramming capabilities that allow the motivations I listed. Request routing . . . these are hard to do today. I think I showed in the example of FFIs, that’s the current FFI APIs, the eyeballing and disconnect. These are hard to do especially with class methods, which have the same methods with method decorators. We don’t have that ability during definition time to kind of inject and intercept and make these changes and do this type of recording. These are things that are really hard to do right or become more complicated because of that eyeballing that you have to do. I mean, you could emulate parameter decorators using normal method decorators, which is how they worked forever; it’s a wrapper around what the parameter decorator would have been. But you have to eyeball what the index is. If I refactor and move a parameter around, I am having to figure out what the index has changed to; this is complicated if you want these benefits. So we would like to make this a lot easier. So this is a feature to make these capabilities easier. The other problem statement is, we have a large community of TypeScript users that we would like to migrate to native decorators, and hoping . . . not to rewrite the code to switch to the native code. We understand that there’s a likelihood that if this does make it past stage 1 or make it to stage 1 and beyond, there might be changes that result in limitations. The same thing happened with – kind of happened with field decorators and the need to have the `accessor` keyword, and the limitation now that we don’t have the ability to have the paired or entangled get/set. We have shown there is a broad community involved in – or interested in this, that used this and would like to bring that capability here. 
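The emulation RBN describes can be sketched with a plain higher-order function standing in for a method decorator. This is a hypothetical illustration (the `validateParam` helper is invented, not part of any proposal); the point is that the parameter position is a bare number that must be kept in sync with the parameter list by hand:

```js
// Hypothetical sketch: emulating a "parameter decorator" with a plain
// higher-order function. The parameter position (index 0) is hard-coded,
// so if the parameter list is refactored the index silently goes stale --
// the "eyeballing" problem described above.
function validateParam(index, check) {
  return function wrap(fn) {
    return function (...args) {
      if (!check(args[index])) {
        throw new TypeError(`invalid argument at parameter index ${index}`);
      }
      return fn.apply(this, args);
    };
  };
}

// Usage: validate that the first parameter is a string.
const route = validateParam(0, (x) => typeof x === "string")(
  function route(path, handlerCount) {
    return `${path} (${handlerCount} handlers)`;
  }
);

console.log(route("/users", 2)); // "/users (2 handlers)"
```

With a real parameter decorator the check would be attached to the parameter itself, so reordering parameters could not silently invalidate the index.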
Again we are trying to solve an issue around improving the developer experience and have evidence backed by years of users showing this is a great way to do that. JHD: Thank you. @@ -943,8 +924,7 @@ Presenter: Ron Buckton (RBN) RBN: I am going to bring up where we left off. This slides I am about to show are this is the same slide running as before. I have added some slides for additional discussion. Let me just get those shared. This is the slide we left off on. And I am going to go into a little discussion here. The consensus we had on it was Tuesday, to move toward with the syntax. I did an investigation into this. I put up PR’s I mentioned. I’ve had – I think at least SYG looked at it and talked it through with him in Matrix yesterday. But to give kind of an example of what we are looking at, to introduce a cover grammar for `await using`, it might look I will show on zero in on the slides. This is off the use of cover grammar for specifically the cover parenthesis expression and cover parenthesized expression and parameter list and the cover expression and async arrow head. Essentially, what we produce here is rather than an await expression. We produce a cover grammar that covers the same thing that await expression covers. -RBN: But then it wouldn’t be bind until the later point, when static semantics are applied -Again this is just could be covering await unary expression, identity [?] to a wait expression. Where this matters as you bubble up out of to assignment expressions to expression statements, you cannot have an identifier name follow an expression on the same line. That is invalid. Today it triggers – it wouldn’t trigger ASI [?]. It’s on the same line. And then specifically, what want to opt into for await using, so in that case we would have an `await using` declaration. 
And this has again – this cover await expression and a wait using declaration head, cover grammar, this matches now the case where expression now fails because you cannot have something following this in the expression case.
+RBN: But then it wouldn’t be bound until the later point, when static semantics are applied. Again, this could just be covering await unary expression, identical [?] to an await expression. Where this matters is as you bubble up out to assignment expressions to expression statements: you cannot have an identifier name follow an expression on the same line. That is invalid. Today it triggers – it wouldn’t trigger ASI [?]. It’s on the same line. And then specifically, that is what we want to opt into for await using, so in that case we would have an `await using` declaration. And this has again – this cover await expression and await using declaration head cover grammar, this now matches the case where the expression fails because you cannot have something following this in the expression case.

Here, we would say though a no line terminator works and then parse a binding list that does not include patterns. The specific parse parameter - production parameter shown here is relatively new. I just merged it into the resource management case because of an editor comment about using the parameter in two different ways. This is more consistent. I am presenting this here as well. We successfully parse this and if semantics . . . and verify this is a valid cover for await, new line term later, slots into the space we had before. Now, the implications of this are that again the cover await expression and await using declaration head will eagerly consume what would be the content of the await expression. And then again, followed by identifier name and illegal . . . 
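The disambiguation rule discussed above – an identifier name may not follow an expression on the same line – can be sketched as a small classification function over a token stream. The token shapes and the `newlineBefore` flag are invented for illustration; real parsers track line terminators differently:

```js
// Hypothetical sketch of disambiguating `await using` in statement position.
// Tokens are simplified objects; `newlineBefore` marks a line terminator
// before the token. Only `await using <identifier>` with no intervening
// line terminators is treated as an `await using` declaration.
function classifyAwaitStatement(tokens) {
  const [first, second, third] = tokens;
  if (!first || first.value !== "await") return "not-await";
  if (
    second && second.value === "using" && !second.newlineBefore &&
    third && third.type === "identifier" && !third.newlineBefore
  ) {
    return "await-using-declaration";
  }
  // Otherwise `using` is just an identifier operand of an await expression.
  return "await-expression-statement";
}

// `await using x = ...` on one line classifies as a declaration:
classifyAwaitStatement([
  { type: "keyword", value: "await" },
  { type: "identifier", value: "using", newlineBefore: false },
  { type: "identifier", value: "x", newlineBefore: false },
]); // "await-using-declaration"

// A line break before `x` means an await expression, then a new statement:
classifyAwaitStatement([
  { type: "keyword", value: "await" },
  { type: "identifier", value: "using", newlineBefore: false },
  { type: "identifier", value: "x", newlineBefore: true },
]); // "await-expression-statement"
```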
@@ -956,8 +936,7 @@ RBN: The reason this fails the case of the await using declaration is that await RBN: One of the other things that we looked into – I don’t have this in the slides, but I investigated async using as a grammar. It has the same level of complexity, except there’s one small benefit at least to the await using grammar in that in both the case of await expression and await using declaration there’s a plus await context. It’s fairly easy to restrict these and those cases. In the case of the `async using` expression case for an async using X = Y type declaration is in + await, but the async using, an arrow function head is not necessarily parsed. So there’s a little bit of discrepancy there -RBN: On the parser complexity side, I did implement this in TypeScript 3 weeks ago. But I was trying to it each of the possible cases. TypeScript is not necessarily LR1, most of the parse is but we have a couple cases of infinite look-ahead how we deal with – generics and arrow functions. But we primarily stick to LR1 and 2. We might max out at 3 token look ahead in case that is are not specifically handling arrow functions. In TypeScript this requires to disambiguiate to look ahead. If we saw the await token in a statement context, which could allow expression statement, if we saw the next token using with no line terminator in between and the next is an identifier with no line terminator in between, it’s definitely not an await using and only using deliciousing [?] and parse as such -If parsing complexity for a parser that permits two token look ahead, it would perform such look ahead to make this simple. +RBN: On the parser complexity side, I did implement this in TypeScript 3 weeks ago. But I was trying to it each of the possible cases. TypeScript is not necessarily LR1, most of the parse is but we have a couple cases of infinite look-ahead how we deal with – generics and arrow functions. But we primarily stick to LR1 and 2. 
We might max out at 3 token look ahead in case that is are not specifically handling arrow functions. In TypeScript this requires to disambiguiate to look ahead. If we saw the await token in a statement context, which could allow expression statement, if we saw the next token using with no line terminator in between and the next is an identifier with no line terminator in between, it’s definitely not an await using and only using deliciousing [?] and parse as such If parsing complexity for a parser that permits two token look ahead, it would perform such look ahead to make this simple.
+RBN: On the parser complexity side, I did implement this in TypeScript 3 weeks ago. But I was trying it with each of the possible cases. TypeScript is not necessarily LR1; most of the parser is, but we have a couple cases of infinite look-ahead in how we deal with – generics and arrow functions. But we primarily stick to LR1 and 2. We might max out at 3 token look-ahead in cases that are not specifically handling arrow functions. In TypeScript this requires two tokens of look-ahead to disambiguate. If we saw the await token in a statement context, which could allow expression statement, and we saw the next token using with no line terminator in between and the next is an identifier with no line terminator in between, it’s definitely not an await expression but an `await using` declaration, and we parse it as such. As for parsing complexity, a parser that permits two-token look-ahead would perform such look-ahead to make this simple.

RBN: So . . . again I talked about this with SYG. I haven’t had really feedback from anyone else about the cover grammar. So I am not sure if there are any issues with the grammar. I think this is feasible and I would like the chance to go to the queue, see if anyone has feedback or concerns and potentially see if this is enough to advance to stage 3.

@@ -1007,8 +986,7 @@ RBN: From my investigations, it’s 100% feasible. It’s whether or not the cov

RBN: So I think given that condition from Waldemar and a couple in the queue as well . . .

-RPR: Yeah. + 1 from DE, MM, and CDA. So I am only hearing support and WH has got his conditional
-review. But it sounds like people are confident.
+RPR: Yeah. + 1 from DE, MM, and CDA. So I am only hearing support and WH has got his conditional review. But it sounds like people are confident.

RBN: To clarify, any observation given the specifics around the cover grammar not be available in general

@@ -1057,7 +1035,7 @@ WH: This will make everyone else’s life easier too.

### Conclusion

- Stage 3, conditionally on final review of cover grammar by WH
-- Consensus on normative change to remove `[lookahead != `await`]` restriction for sync `using` declarations.
+- Consensus on normative change to remove ``[lookahead != `await`]`` restriction for sync `using` declarations. 
- Support from WH, SYG, DLM, MM, DE - We already have WH and SYG and MF has been the reviewers back in Stage 2 @@ -1098,5 +1076,3 @@ JHD: Perhaps just less urgent than originally believed. DE: Sure. RPR: This is the end of the meeting. Thank you to our meeting host F5! - -_END OF MEETING_