diff --git a/.markdownlint-cli2.jsonc b/.markdownlint-cli2.jsonc index 9dcace06..e81a6dd0 100644 --- a/.markdownlint-cli2.jsonc +++ b/.markdownlint-cli2.jsonc @@ -5,7 +5,7 @@ "ignores": [ "node_modules/**", "meetings/201*/*.md", - "meetings/202[0-1]*/*.md", + "meetings/2020*/*.md", "scripts/test-samples/*" ] } diff --git a/meetings/2021-01/jan-25.md b/meetings/2021-01/jan-25.md index 26a06112..3272bd5e 100644 --- a/meetings/2021-01/jan-25.md +++ b/meetings/2021-01/jan-25.md @@ -1,7 +1,8 @@ # 25 January, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Ross Kirsling | RKG | Sony | @@ -40,7 +41,7 @@ | Yulia Startsev | YSV | Mozilla | | Chengzhong Wu | CZW | Alibaba | ---- +----- ## Opening, welcome, housekeeping @@ -56,18 +57,18 @@ Let's talk notes. I would like to encourage people to step up and volunteer for Yes, there is a cat on the call. That's my cat. -Istvan: My best regards to your cat. So yeah, but he should also be on the participants list. +Istvan: My best regards to your cat. So yeah, but he should also be on the participants list. -Okay. So basically this is the typical Secretariat report, so I will have here 15 minutes, but I will try to do it as fast as I can. So what has happened lately. I just will show you the schedule for 2021. It was already shown at the last meeting with one exception (December 2021 meeting) and then there is the tc39 management confirmation for 2021. I don't know whether it it will be done here or whether you will have a separate agenda point. +Okay. So basically this is the typical Secretariat report, so I will have here 15 minutes, but I will try to do it as fast as I can. So what has happened lately. I just will show you the schedule for 2021. It was already shown at the last meeting with one exception (December 2021 meeting) and then there is the tc39 management confirmation for 2021. 
I don't know whether it will be done here or whether you will have a separate agenda point. The new Ecma Website has been launched over the weekend. As usual when you are implementing a new website, you know, at first nothing works. Last weekend it was suddenly switched on, and what we found immediately is that we had lost access to the TC39 standards.

-Then status of the liaison agreement with Calcom basically nothing happened, but I have also a slide on that and then there is a brief report from the December 2020 Ecma General Assembly. 
+Then the status of the liaison agreement with CalConnect: basically nothing happened, but I have also a slide on that, and then there is a brief report from the December 2020 Ecma General Assembly.

So this is what is on the agenda. Regarding the recent tc39 meeting participation: 62 people from 22 organizations participated remotely. So as you see it is still quite high. We have very good participation, so we will see what it looks like at this meeting.

-So then the next one so these are the standard download statistic for the entire year 2020. So what is interesting? The trend is absolutely the same as it has been in the past. So again, I will be very quick on that. So altogether the downloads of the Ecma standard 2020 67,000 and basically half of all the downloads they came from the tc39 standards and then here are the listed here. The quality of the PDF versions of TC39 standards are not good and something has to be done, especially on the Ecma 262. E.g. there are no page counters and then no working links within the document etc. Formatting is not nice Etc. I have complained about this for many many times. Then here are the access and the download statistics so you can see the access statistic. This is actually what is interesting for the tc39 development people and also like you it is much much higher. We still could not figure out why does it come that still the sixth edition has the highest level of access? 
But altogether in total, you can see 670,000 access and then these are the download figures for the different editions very much similar, what we have had in the past. 
+So then the next one: these are the standard download statistics for the entire year 2020. So what is interesting? The trend is absolutely the same as it has been in the past, so again I will be very quick on that. Altogether the downloads of Ecma standards in 2020 were 67,000, and basically half of all the downloads came from the TC39 standards, which are listed here. The quality of the PDF versions of TC39 standards is not good and something has to be done, especially on Ecma-262. E.g. there are no page numbers and no working links within the document, the formatting is not nice, etc. I have complained about this many times. Then here are the access and the download statistics. The access statistic is actually what is interesting for the tc39 development people, and it is much higher. We still could not figure out why the sixth edition still has the highest level of access. But altogether in total you can see 670,000 accesses, and the download figures for the different editions are very similar to what we have had in the past.

Okay, so regarding the approval schedule for ES 2021. This is practically a repetition from my last presentation, in order that we remember the important deadlines. The June 2021 GA is going to be in Switzerland - where ES2021 will be formally approved. It would be a face-to-face meeting, but honestly speaking I question that, as we can already see that the speed of the vaccination is going much slower than expected, in my opinion. I think this will also be a remote meeting - but this is just my personal opinion. We will see.

@@ -75,19 +76,19 @@ Okay. So what are these two months that we in TC39 have to obey? 
The first

Okay, regarding the approval of the ES2021 specification: we have two possibilities. The first is at the March meeting, or at the latest it would also work at the April meeting, which would be the third meeting. A letter ballot would also be possible, but I would not recommend it. There we could decide how long it should run, usually two or three weeks, and then we could do it. But my real preference would be the second or the third TC39 meeting.

-OK the management confirmation. We have agreed that the last meeting that the proposed 2021 Chair Team would be confirmed at this meeting. What I have understood there is no modification on that. Nothing came in as a new candidature, so we could do the approval here and now or as a separate agenda point. It will be a separate agenda point. 
+OK, the management confirmation. We agreed at the last meeting that the proposed 2021 Chair Team would be confirmed at this meeting. As I have understood, there is no modification on that. Nothing came in as a new candidature, so we could do the approval here and now or as a separate agenda point. It will be a separate agenda point.

New Ecma website: There is a “task” and “information sharing” between the Ecma Website and the TC39-maintained websites like tc39.es or the tc39 GitHub. First, we should link the two. E.g. on tc39.es there are only links to the ES2021 drafts, but not to the approved ES2020 and earlier standards, which are on the official Ecma website.

-We also need to check the text of the current Ecma website on TC39, The URL has not changed, but we need as usually a final adjustment also of the tc39 text. There are several mistakes for instance regarding the frequency of the meetings Etc. So honestly speaking also now I will take the effort to go over the facts and then make some corrections but I also invite you also to do the same. 
We definitely need a good separation and harmonization between the different websites, what we have on the Ecma website (like the approved Ecma standards, then announced major events such as when the next meeting is coming up and who is in the TC39 Chairman's group? Who are the members of TC task groups? etc, so this is everything on the Ecma website, but then the new drafts and other ongoing activities Etc. It is shared among tc39.es and tc39 GitHub Etc. So I would like to invite that especially the web masters of both sides talk. 
+We also need to check the text of the current Ecma website on TC39. The URL has not changed, but as usual we need a final adjustment of the tc39 text as well. There are several mistakes, for instance regarding the frequency of the meetings, etc. So honestly speaking, I will now take the effort to go over the facts and make some corrections, but I invite you to do the same. We definitely need a good separation and harmonization between the different websites: on the Ecma website we have the approved Ecma standards, announcements of major events such as when the next meeting is coming up, who is in the TC39 Chairman's group, who the members of the TC task groups are, etc.; the new drafts and other ongoing activities are shared among tc39.es, the tc39 GitHub, etc. So I would like to invite especially the webmasters of both sides to talk.

-So regarding CalConnect, it is a repetition of where are we now. Basically we are already working with CalConnect experts in individual capacity as invited experts Etc. There's obviously no problem with that, but if we would like to formalize our liaison with CalConnect as an organization, then they have to introduce also sort of royalty free patent policy. We suggested to install an “experimental RF patent policy” in order that we can work with each other at the moment. 
So far they still did not come back to the Ecma Secretariat. So it is the same situation what we had in November. Not really a problem in my opinion, but I just wanted to inform you where we are. 
+So regarding CalConnect, this is a repetition of where we are now. Basically we are already working with CalConnect experts in individual capacity as invited experts, etc. There's obviously no problem with that, but if we would like to formalize our liaison with CalConnect as an organization, then they have to introduce some sort of royalty-free patent policy. We suggested installing an “experimental RF patent policy” so that we can work with each other in the meantime. So far they still have not come back to the Ecma Secretariat, so it is the same situation as we had in November. Not really a problem in my opinion, but I just wanted to inform you where we are.

Okay, regarding the TC39 2021 schedule. If you remember, at the last TC39 meeting Waldemar pointed out that there was a conflict between the Ecma GA and the last entry, the December meeting. I think it was December 8 and 9; it would be a remote meeting for two days, and it clashed with the Ecma General Assembly. So the TC39 chair group has decided to move it to December 14 and 15. This should now be correct; the other dates, as you know, have not changed.

Okay, so some points for the Ecma GA. These are the two new members whom we now have in TC39. One is a Japanese company with SPC membership. And then Coinbase, the new company of JHD; fortunately they also joined, as associate members. So many thanks to both organizations. And there has been a reorganization at Stripe: they used to be ordinary members, now they are called RunKit and it is an SPC membership, and they are also most welcome to continue as an SPC member under the name of RunKit. 
-Okay the Ecma financing: I have taken all the figures from the GA documents, if somebody is interested in the figures you may study these figures. The good news regarding the membership approval for 2021: It is basically unchanged (for now for more than 20 years in row…). So here the GA unanimously approved the unchanged membership levels for 2021. So 70,000 for ordinary members and so on. So no changes. 
+Okay, the Ecma financing: I have taken all the figures from the GA documents; if somebody is interested, you may study them. The good news regarding the membership fees for 2021: they are basically unchanged (now for more than 20 years in a row…). The GA unanimously approved the unchanged membership levels for 2021: 70,000 for ordinary members, and so on. So no changes.

Now regarding the officers for the Ecma Management: the management has been reelected for one year. Isabelle Valet-Harper from Microsoft is president, Joel Marcey from Facebook is vice president, and Jochen Friedrich is the treasurer. So I think in the management there is no change, if I remember correctly.

@@ -95,17 +96,17 @@ We have changes in the execom.

We have a new representative from Google. So if anybody has any sort of question, then please do ask me, either now or per email or whatever.

-DE: Istvan mentioned the issue about the PDFs that we generate being poor quality. One one factor in this is that we don't have professional typesetting. We discussed this last TC39 meeting and concluded that it would be very important for us to have professional typesetting support. The editor group wrote a letter expressing this desire, which I forwarded to Ecma management and ExeCom, in December. We haven't gotten a response. What next steps do you recommend? 
+DE: Istvan mentioned the issue about the PDFs that we generate being poor quality. One factor in this is that we don't have professional typesetting.
We discussed this last TC39 meeting and concluded that it would be very important for us to have professional typesetting support. The editor group wrote a letter expressing this desire, which I forwarded to Ecma management and the ExeCom in December. We haven't gotten a response. What next steps do you recommend?

-Istvan: I have heard this from Patrick Luthi. I mean, he mentioned that in an email in just one single sentence, but I don't know anything about it in details. So it would very helpful if you could keep me informed, you know, what was exactly in the letter and and then I could discuss that with Patrick. So at the moment. I have only heard this from him not from you. But you know, what is the plan? I don't know. And then we can cut it short. In in my opinion, you know ECMA really should take money into its hand because the only guy who could do it or guys and ladies, you know, who could do it in the ecma office this is Patrick Charollais, but he's also not the best to do this type of high precision job. It was really lot of very heavy editing job. So in that case we need to hire somebody. If you have somebody in mind or if you can give us a good address, that would be helpful. Now, this is on the assumption they would accept this proposal. 
+Istvan: I have heard this from Patrick Luthi. I mean, he mentioned it in an email in just one single sentence, but I don't know anything about it in detail. So it would be very helpful if you could keep me informed about what exactly was in the letter, and then I could discuss that with Patrick. At the moment I have only heard this from him, not from you; so what the plan is, I don't know. To cut it short: in my opinion, Ecma really should take money into its hands, because the only person in the Ecma office who could do it is Patrick Charollais, but he is also not the best fit for this type of high-precision job.
It is really a lot of very heavy editing work, so in that case we need to hire somebody. If you have somebody in mind, or if you can give us a good address, that would be helpful. Now, this is on the assumption that they would accept this proposal.

YSV: Okay, so that will be followed up with an email from Dan. Next up is myself, regarding the question about the website that Istvan raised. At the moment I don't have any tasks that specifically need to be done on behalf of Ecma. We don't have a task lined up to add the editors, the chairs, and the currently participating members. What we can do is send you an email about this to coordinate on what sections of the tc39.es website are missing. Currently we're part of the way through a translation project on that website, so we'll also need to make sure that that work gets done for these new pages or sections as well.

-Istvan: So what I have found in the TC39.es website that there wasn't that there is no link to the approved standards. At least. I didn't find it. So this is definitely something like that and maybe also there are some others. I mean, this is a good opportunity because you know, we are always so busy, that we do it maybe once and then four years we are not doing it. We should be doing it on a regular basis. But in practice we don't find the time do it. 
+Istvan: What I have found on the TC39.es website is that there is no link to the approved standards. At least, I didn't find it. So this is definitely one such issue, and maybe there are also some others. I mean, this is a good opportunity, because you know, we are always so busy that we do it maybe once and then for four years we don't do it. We should be doing it on a regular basis, but in practice we don't find the time to do it.

YSV: Great. I'll send an email and we'll catch up on that.

-DE: (read by YSV) It's important that Ecma website updates include URL redirects when they move assets as has been mentioned in the reflector. 
+DE: (read by YSV) It's important that Ecma website updates include URL redirects when they move assets, as has been mentioned in the reflector.

Istvan: I hate these website re-design changes, I have to tell you. That was the reason why I didn't do it for 13 years or whatever. It always ends up in this type of disaster. So that's the reason why I am calling for your patience; it will take some time until the new website is working at a level which is satisfactory.

@@ -120,7 +121,9 @@ Istvan: So the website content, of course, you know, but “look and feel”, yo

YSV: All right. I'm going to move us along because we're just a little bit over time and I think we've covered all of the concerns here.

## Editors Update
+
Presenter: Kevin Gibbons
+
- [slides](http://j.mp/262editor202101)

KG: Okay, I'm gonna be driving, but Shu, JHD, and Michael are also editors and they should jump in if they have anything to say that I'm missing. It's going to be the same format: the presentation will go over the major editorial changes, and then normative changes, upcoming changes, and so on.

@@ -129,7 +132,7 @@ KG: We added a yield macro along the lines of the Await macro that allows you to

KG: We added CSS to do normative optional styling for blocks that are normative optional. I will talk about that a little more in the next slide.

-KG: #2254: We have had a few different issues with the lookahead assertions in the grammar, being imprecisely specified or being used in a way which was not allowed by the specification. So I rewrote that section. So it's now more precise and a little more general. If you are interested in that sort of thing, please take a look and let me know if you have comments. 
+KG: #2254: We have had a few different issues with the lookahead assertions in the grammar, being imprecisely specified or being used in a way which was not allowed by the specification. So I rewrote that section, and it's now more precise and a little more general.
If you are interested in that sort of thing, please take a look and let me know if you have comments.

KG: #2271, which I will discuss again in a minute, is the change we have been talking about for most of the last year, which we finally got around to making. I'm very excited about that. Like I said, I'll talk about that more in a second.

@@ -141,21 +144,21 @@ KG: Yeah, and then 2280 split up the definition of IterationStatement. Iteration

KG: And then I changed ecmarkup to add borders and backgrounds to right-hand sides of grammar productions when you hover over them. And again, I will demo that.

-KG: so I promised to show you two things. So first thing was this normative optional rendering, this is what it looks like. This is just a direct screenshot of the specification when there is a section of the specification outside of annex B which is normative optional - for example WeakRef.deref - it look like this. The committee has expressed intent to move other parts of annex B into the main specification and perhaps make some of those will be normative optional for example, the __proto__ accessor on Object.prototype. Those will be styled in this way. 
+KG: So I promised to show you two things. The first thing was this normative optional rendering; this is what it looks like. This is just a direct screenshot of the specification: when there is a section of the specification outside of annex B which is normative optional - for example WeakRef.deref - it looks like this. The committee has expressed intent to move other parts of annex B into the main specification and perhaps make some of those normative optional - for example, the `__proto__` accessor on Object.prototype. Those will be styled in this way.

-KG: And then I also promised to show you the change to syntax directed operations. So this is what the specification looked like as of a few months ago. 
If you went to any production it would have the definition of the production and then it would have a number of these partial definitions of syntax directed operations. So for example IsFunctionDefinition is a syntax directed operation. It was defined over however many different Productions and the part of it that was defined over shift expression was here, and I hope you will agree with me that this is perhaps not extremely useful. Usually the thing that you are interested in is, what does IsFunctionDefinition do? You'd see it is used in this HasName operation. And so you would be reading HasName and you would want to know what IsFunctionDefinition did, and in order to answer that question you would just need to grep around for all of the different places it's defined and look at all of them. And now as of the latest version there is just an IsFunctionDefinition defined somewhere and you can see you can just click on this link, which is new, and it will take you to the single place that IsFunctionDefinition is defined, and that has its definition for all of the productions over which it is defined. Yeah, so that's the change. They are a few of them up in this new top level section for syntax directed operations for some of the more general operations, and then there are other operations that are defined throughout the specification. So if you go to, I don't know, HasCallInTailPosition, this is defined in the tail position calls section because that's where it makes the most sense but it still is defined only in this one place instead of being split up across 30 different places. +KG: And then I also promised to show you the change to syntax directed operations. So this is what the specification looked like as of a few months ago. If you went to any production it would have the definition of the production and then it would have a number of these partial definitions of syntax directed operations. So for example IsFunctionDefinition is a syntax directed operation. 
It was defined over however many different productions, and the part of it that was defined over ShiftExpression was here, and I hope you will agree with me that this is perhaps not extremely useful. Usually the thing that you are interested in is: what does IsFunctionDefinition do? You'd see it is used in this HasName operation. And so you would be reading HasName and you would want to know what IsFunctionDefinition did, and in order to answer that question you would just need to grep around for all of the different places it's defined and look at all of them. And now, as of the latest version, there is just an IsFunctionDefinition defined somewhere, and you can just click on this link, which is new, and it will take you to the single place where IsFunctionDefinition is defined, which has its definition for all of the productions over which it is defined. Yeah, so that's the change. There are a few of them up in this new top-level section for syntax directed operations, for some of the more general operations, and then there are other operations that are defined throughout the specification. So if you go to, I don't know, HasCallInTailPosition, this is defined in the tail position calls section because that's where it makes the most sense, but it still is defined only in this one place instead of being split up across 30 different places.

-KG: And then the last part of this which we have merged yet, but which is in an open PR for - or perhaps we've merged it in last five minutes while I wasn't looking - in the case you wanted to go the other direction, the direction that was previously encouraged where you go to the definition for the syntax and then find the syntax directed operations that are defined over that syntax, as of this PR that's now something you will be able to do. If you go to any syntax definition section and you hover over a production, you get this little tool tip that gives you Syntax Directed Operations defined over that production. If you click on it you get these syntax directed operations. And then you can click this to go back to the definition that you were just looking at. Oh, yeah, and you can see this little background on the right hand sides that I mentioned earlier. 
+KG: And then the last part of this, which we have not merged yet but which is in an open PR - or perhaps we've merged it in the last five minutes while I wasn't looking - is for the case where you want to go the other direction, the direction that was previously encouraged, where you go to the definition of the syntax and then find the syntax directed operations that are defined over that syntax. As of this PR, that's now something you will be able to do. If you go to any syntax definition section and you hover over a production, you get this little tooltip that gives you the Syntax Directed Operations defined over that production.
If you click on it you get these syntax directed operations. And then you can click this to go back to the definition that you were just looking at. Oh, yeah, and you can see this little background on the right-hand sides that I mentioned earlier.

-JHD: The other thing is if now that you can hover over and see references to SDOs. 
+JHD: The other thing is that now you can hover over and see references to SDOs.

KG: This is true. If you hover IsFunctionDefinition you can see all of the call sites. There was previously a "references" list for each of these sections, but it didn't do anything. So now there is a "references" that actually works. Yeah, so those are the big editorial changes that we've made to the specification.

-SYG: I could jump in real quick. Notably some SDOs will remain special cased, that cannot be consolidated, like Evaluation. 
+SYG: I could jump in real quick. Notably, some SDOs that cannot be consolidated, like Evaluation, will remain special-cased.

-KG: Sorry. 
Yes, that's a good thing to mention the specifically Early Errors and Evaluation are arguably syntax directed operations, but they remain a place that they were.. 
+KG: Sorry. Yes, that's a good thing to mention: specifically, Early Errors and Evaluation are arguably syntax directed operations, but they remain in the place that they were.

-YSV: I can quickly read out MM’s comment, which is: huge understandability gains. It looks awesome what you've done. Thanks so much. 
+YSV: I can quickly read out MM’s comment, which is: huge understandability gains. It looks awesome what you've done. Thanks so much.

KG: I am so excited to not have to grep, to just be able to click the function definition from body text or whatever. So nice. All right, back to the slides.

@@ -163,11 +166,11 @@ KG: We landed just a couple of normative changes since the last meeting all of t

KG: 2210. This is one of the many "typed array specification does not match web reality" things we got consensus for at the last meeting.

-KG: then #2252 is just a tiny tweak which I wanted to call out here because technically we didn't have consensus for this change, but it was the editors' understanding that this was always the intent of these specifications and it just failed to express that clearly. So this is if you have a JSON object that has two properties in the same object literal whose name is literally "__proto__", if you had that object literal in ecmascript source text it would be an early but it is not supposed to be an error when doing JSON.parse. So now the specification is more explicit about that error not applying in this context. 
+KG: then #2252 is just a tiny tweak which I wanted to call out here because technically we didn't have consensus for this change, but it was the editors' understanding that this was always the intent of these specifications and it just failed to express that clearly.
So this is: if you have a JSON object that has two properties in the same object literal whose name is literally `__proto__`, and you had that object literal in ECMAScript source text, it would be an early error, but it is not supposed to be an error when doing JSON.parse. So now the specification is more explicit about that error not applying in this context.

SYG: Can I jump in real quick about #2250? This was also a normative change that technically we do not have consensus on, but the decision was to merge the fix, because the hardware operations for x86 lock compare-exchange, or the pairs of operations on arm64 to implement compare-exchange, work a certain way, and the model meant that they could not be used straightforwardly. So in my opinion there wasn't really another way to actually fix this, so I did not bring it to committee for wider deliberation. For folks who have expertise here and are interested, please take a look at #2250; it lays out the problem and the solution in detail, and if you have concerns then please raise them to me and I'll be happy to revert the change and then we can discuss it in plenary. Thanks.

-YSV: Just one quick comment about time. We are going to be hitting the time box pretty soon. 
+YSV: Just one quick comment about time. We are going to be hitting the time box pretty soon.
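As an illustration of the #2252 behavior KG describes above (a sketch added for these notes, not code shown in the meeting; the behavior follows the specification and is observable in engines such as V8):

```javascript
// Duplicate "__proto__" data properties in one ECMAScript object literal
// are an early error, i.e. a SyntaxError at parse time:
let earlyError = false;
try {
  eval('({ "__proto__": 1, "__proto__": 2 })');
} catch (e) {
  earlyError = e instanceof SyntaxError;
}
console.log(earlyError); // true

// JSON.parse is exempt: duplicate keys are allowed, the last one wins, and
// "__proto__" becomes an ordinary own data property, so the parsed object's
// prototype is unchanged:
const parsed = JSON.parse('{"__proto__": 1, "__proto__": 2}');
console.log(parsed.__proto__);                                   // 2
console.log(Object.getPrototypeOf(parsed) === Object.prototype); // true
```

The early error comes from the object-literal grammar, so it applies to source text only; JSON.parse builds the object with ordinary own data properties and never touches the prototype.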
This is for people whose devices are not happy about downloading 7 megabytes of HTML, they can just load the section that they're interested in. @@ -180,26 +183,28 @@ JHD: The intention will be to produce the artifact for ES 2021 before the March YSV: The queue is currently empty but there have been a lot of warm comments in IRC. ## ECMA-402 editor’s report + Presenter: Shane Carr + - [slides](https://docs.google.com/presentation/d/1xIH-aloYcirEPOu5RM2pvJPdwob_9wjVZ_9BpiEAzJQ/edit) SFC: I'll be giving the update presentation today and hopefully Richard and Leo will also jump in a little bit. Let me share my screen. -SFC: (presents slides introducing ECMA-402) +SFC: (presents slides introducing ECMA-402) -YSV: Can I jump in for a second? We are having a hard time with keeping up with the notes. Can I have a couple more cursors on the document? +YSV: Can I jump in for a second? We are having a hard time with keeping up with the notes. Can I have a couple more cursors on the document? -YSV: I see two people are currently taking part. I see I think 3 4 5 OK 6 fantastic. Thank you, please go ahead. +YSV: I see two people are currently taking part. I see I think 3 4 5 OK 6 fantastic. Thank you, please go ahead. (Note-takers can also help edit the last section, which was a bit poor) SFC: Okay, great. So the editors this year have been Richard Gibson (RGN) and Leo Balter (LEO) and thanks very much for their work. You'll hear a little about them in the next slide about ES 2021. I'm the “convener” or chair of the group. Ujjwal Sharma has also been doing a lot of great work to help lead to the group and below is a list of many of the delegates who attend our monthly calls. About ES 2021, Richard and Leo say they plan to cut the ES 2021 as soon as the stage for proposals and consensus PRs that we agreed to this meeting are merged. Do Leo and Richard have anything to add and discuss about this? -LEO: I believe yeah. 
No, I don't have anything to say I'm on top of these stage for proposals. I'm excited about it as the editor. I'm looking forward for ES 2021 cut. Shane Do you intend to show the Wiki page or yes, I have a slide later about twinkled. Perfect.
+LEO: I believe yeah. No, I don't have anything to say; I'm on top of these stage 4 proposals. I'm excited about it as the editor, and I'm looking forward to the ES 2021 cut. Shane, do you intend to show the wiki page? [SFC: Yes, I have a slide later about it.] Perfect.

-RGN: I Echo those sentiments. 
+RGN: I echo those sentiments.

-SFC: Thank you. So, the first order of business today is our new pull requests. We have two new pull requests that were seeking consensus for that. We're hoping to merge into the 2021 spec. 
+SFC: Thank you. So, the first order of business today is our new pull requests. We have two new pull requests that we're seeking consensus for, which we're hoping to merge into the 2021 spec.

[Shane Presents https://github.com/tc39/ecma402/pull/429]

@@ -209,7 +214,7 @@ SFC: The first one of these two PRs is from Jeff Walden (JSW) who's a SpiderMonk

SFC: The other one is #500 from Alexey, who's done work on JSC. This changes some of the legacy constructor behavior on older Intl objects, in a way that Alexey has discussed here, by using OrdinaryHasInstance instead of the instanceof operator. We also discussed this at the 402 meeting last month. After a little bit of back and forth, we decided that the benefits of this pull request outweigh the risks. So we're also asking for consensus on this pull request to ECMA-402. I can't see the queue, but does anyone have questions about these two pull requests? #429 and #500

-YSV: the queue is currently empty, but it might take a moment for people to absorb everything. 
+YSV: The queue is currently empty, but it might take a moment for people to absorb everything.

SFC: Okay. I'll go ahead and leave these two links up and we'll come back at the end of this presentation if anyone has questions.
@@ -217,7 +222,7 @@ SFC: Okay, so I'll give a quick update on where the proposals stand. We have Dat

SFC: Intl.Segmenter is also at stage 3. It's shipping in Chrome 87 as well as on JSC trunk. There are a couple of questions to answer before we get to stage 4, but this is very close to stage 4. I don't know if Richard has anything to add to the status.

-RGN: No, nothing at this time. We'll see how it goes between now and the next meeting. 
+RGN: No, nothing at this time. We'll see how it goes between now and the next meeting.

SFC: Sounds good. Thank you, Richard. We have Intl.NumberFormat V3; I'm the champion on this proposal. I'm hoping to get this ready for stage 3 at the next meeting or the meeting soon after that one. This is adding a number of important features that users requested for Intl.NumberFormat. This is already at stage 2.

@@ -229,27 +234,27 @@ SFC: The Smart Unit Preferences proposal is at stage 1. This is blocked on discu

SFC: Stage 0 proposals: Extend TimeZoneName, which is a new proposal, championed also by Frank, is up for stage 1 this meeting, so be looking out for that.

-SFC: Another proposal that I am the co-champion of is eraDisplay, there's also again a presentation later this meeting to promote this one to stage one. I've gotten a lot of help from Louis-Aimé de Fouquières, who's been joining our calls and has a lot of expertise in this field of non Gregorian calendar logic so really looking forward to this proposal
+SFC: Another proposal that I am the co-champion of is eraDisplay; there's also a presentation later this meeting to promote this one to stage 1. I've gotten a lot of help from Louis-Aimé de Fouquières, who's been joining our calls and has a lot of expertise in this field of non-Gregorian calendar logic, so I'm really looking forward to this proposal.

[Shane presents https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking]

-
+

SFC: Leo alluded to this earlier.
We have a proposal and PR progress tracking wiki page on our GitHub where we track how pull requests and proposals are moving through the process, as you can see here. We like to track in one place how these proposals are progressing on consensus for the two groups, as well as on test262 and MDN documentation, and then the three main implementations that we look at for implementing 402. Most of these PRs up here are older but have now been caught up, including on the JSC implementation, and I really appreciate the work that all the JSC people have been putting into this over the year of 2020. It is really great to see this table being filled out with all these check marks since the last time I presented this; I think there were a lot of x's, but it's really exciting to see all these check marks now. These are the two open PRs that we have up for promotion for consensus and up for stage 3.

-LEO: I also want to highlight that the check marks are also linking to what they need for so you don't see like there is only a test but you also have a link to the current test respective test issue for each thing and mdn page or whatever they go to. 
+LEO: I also want to highlight that the check marks also link to what they refer to, so you don't just see that there is a test; you also have a link to the respective test issue for each thing, or the MDN page, or wherever they go.

SFC: Yep. So, for example, almost all these check marks are clickable. The ones in this column, for example, deep link to the notes for where we achieved that consensus, and then if I click this button, it goes to the tests for this PR; you can go and see exactly what the tests were for that PR when those tests were checked in.
So thank you, everyone, for helping maintain this page; we couldn't do this without the work of Leo and Richard and Frank and everyone else who's contributed to making sure that this wiki page stays up-to-date. And that's my last slide. My very last slide is this one, which I'll just keep up for the remainder of our time slot. But does anyone have any questions on the queue?

[Shane presents Get Involved! slide]

-YSV: There are no questions on the queue. I wanted to raise one thing about the data management that you're currently doing in the wiki. I don't know if you're familiar with the browser compatibility data work done by mdn that currently links the WHATWG. and other specs where you can get the information about something was implemented for all browsers. 
+YSV: There are no questions on the queue. I wanted to raise one thing about the data management that you're currently doing in the wiki. I don't know if you're familiar with the browser compatibility data work done by MDN, which currently links to WHATWG and other specs, where you can get the information about when something was implemented for all browsers.

SFC: Yeah, we're definitely familiar with the efforts. One of our delegates, Romulo Cintra, has been largely our champion on the MDN side. He hasn't had quite as much time to commit to it in 2020 as previously, so I think that Michael Cohen and Daniel Ehrenberg have also been working on getting the browser compatibility tables on MDN up to date. The purpose of the status wiki is more for the 402 side, to track the stage advancement requirements. In terms of usefulness to developers, I see that more as on the MDN side. I think that hopefully our status wiki could be perhaps helpful when building out the compatibility tables on MDN, but I see those as serving very related but different uses.

-YSV: There's a big project around doing the tc39 data set.
It's under https://github.com/tc39/dataset in our GitHub or he's which is using the browser compatibility data from mdn and syncing all of our information about proposals along with that data and making sure that we've got a centralized location for all of our data if that's helpful for you folks. We can definitely take into account your needs as well in case you want to do automated syncing of anything.
+YSV: There's a big project around doing the TC39 dataset. It's under https://github.com/tc39/dataset in our GitHub org, which is using the browser compatibility data from MDN and syncing all of our information about proposals along with that data, making sure that we've got a centralized location for all of our data, if that's helpful for you folks. We can definitely take into account your needs as well in case you want to do automated syncing of anything.

-SFC: Thank you very much for flagging. I'll definitely follow up on that. Thank you very much for watching. 
+SFC: Thank you very much for flagging. I'll definitely follow up on that. Thank you very much for watching.

-YSV: Are there any other comments or questions or are there any comments or questions about the two issues the chain raised earlier that he wanted to get consensus on? 
+YSV: Are there any other comments or questions, or are there any comments or questions about the two issues that Shane raised earlier that he wanted to get consensus on?

LEO: I have a comment as the editor. One of the things that Shane has mentioned is the enumeration API: we raised a request for a privacy assessment, but we haven't been able to get anything since around May 2020. This is a proposal that we are interested in moving forward, but it's blocked by the lack of people available to do this. So if anyone is interested in taking a look, I kindly ask you, please go there. We would appreciate your help.

@@ -269,24 +274,25 @@ SFC: Okay, I'll follow up with you offline on the status of #500.
Consensus on #429; will follow up with YSV on #500

- YSV followed up, and confirmed that it is fine offline

-
## 404 status update

+
Presenter: Chip Morningstar (CM)

YSV: Do we have any updates for 404?

-CM: No updates. The Earth continues in its orbit. Everything is fine. 
+CM: No updates. The Earth continues in its orbit. Everything is fine.

## TC53 liaison report

+
Presenter: Peter Hoddie

-YSV: We have, up next the Ecma TC53 Liaison report from Peter. Is there anything to report? 
+YSV: We have, up next, the Ecma TC53 liaison report from Peter. Is there anything to report?

PHE: Sorry, actually, I hadn't prepared for that today. I will simply note that the committee's making great progress in work towards our first actual standard, to be submitted to the General Assembly in June, and so we are pushing forward to have a final draft in February. We've gotten some really good feedback from folks who have reviewed it for the first time. So if anybody is bored or looking for an interesting challenge, please take a look. We really appreciate any feedback that people have on the spec that's there. It's on the public GitHub site. It's easy to find, but if you need a link to it, just let me know. That's it. Thank you.

YSV: Thank you very much for that quick update.

-PHE: There's still plenty of work to do. We're going through all the excitement that tc39 goes through to dot all Is and cross all the Ts but it should be great, hopefully. 
+PHE: There's still plenty of work to do. We're going through all the excitement that TC39 goes through to dot all the i's and cross all the t's, but it should be great, hopefully.

## Code of Conduct committee

@@ -296,35 +302,39 @@ AKI: We met last week. There were once again, no new reports.

YSV: That means that we're all very well-behaved right now, which is great.

-AKI: You know what actually you know, what I actually will say one thing.
We had a minor concern that people were choosing not to report or didn't know they could report which is why I mentioned that there are several avenues to report a concern on our website, so if anything does come up, don't forget that you can ask for the assistance of the committee
+AKI: You know what, actually, I will say one thing. We had a minor concern that people were choosing not to report, or didn't know they could report, which is why I mentioned that there are several avenues to report a concern on our website. So if anything does come up, don't forget that you can ask for the assistance of the committee.

JHD: If you have any hesitation about that, you can definitely ask for anonymity or for us not to follow up, just to notify us - because we would even appreciate just being aware of things even if we're not able to act on them.

## Chair group

+
Presenter: Rob Palmer

YSV: Okay, the next agenda item is the confirmation of the 2021 chair group from Rob.

-RPR: All right. This is going to be very quick. So in the previous meeting we presented the proposal for the chair group this year and this is the announcement of that. So last year we had Aki, Brian, Myles, Rob, and this year Myles moves on. So we thank him very much for his service. And we're really grateful that they would stay on and are very appreciative. Okay, that is all and thank you. 
+RPR: All right. This is going to be very quick. So in the previous meeting we presented the proposal for the chair group this year, and this is the announcement of that. Last year we had Aki, Brian, Myles, and Rob, and this year Myles moves on, so we thank him very much for his service. And we're really grateful that the rest would stay on, and are very appreciative. Okay, that is all, and thank you.
DE: Yeah, I wanted to mention that I've volunteered to help with certain administrative tasks for the chair group, especially starting with formalizing some of the smaller details about the invited expert procedure and the way that the IPR policy is administered, and I think there was something else that Aki asked about, and also following up on the use of these funds. I don't think I'll be involved in any of the chair meetings or anything like that, just helping on a specific case-by-case basis. So please let me know if any of this exceeds reasonable bounds. I don't have any intention to join the chair group, but I'm happy to help in these limited ways.

-AKI: Yeah, actually both both Daniel and JHD have volunteered to help with some of our more like organizing things that are not specifically like they have no bearing on decisions that we make as a trigger whatever but will help us immensely and I appreciate both of you in advance for any of that. Especially one of the things that I've been trying and failing to do has been related to getting like a formal funding request together. So Daniel you are going to to save me from that stress and I appreciate it. 
+AKI: Yeah, actually both Daniel and JHD have volunteered to help with some of our more organizational things, things that have no bearing on decisions that we make as a chair group but will help us immensely, and I appreciate both of you in advance for any of that. Especially, one of the things that I've been trying and failing to do has been getting a formal funding request together. So Daniel, you are going to save me from that stress and I appreciate it.

DE: Yeah, Aki, you actually did have one budget allocation that did get approved--nonviolent communication training, but then we didn't get around to using that. We'll see an update later in this meeting about that particular topic.

YSV: Maybe we should have a chair back office group.
Are there any other questions regarding the chairs or any of the work that's happening there? Otherwise, I have to talk like a normal person again so can someone jump in and take over chairing for me? -YSV: We have agreed that the chair group as it is looks good, and we're going ahead with that. So I have to switch because now the next thing is my topic so I will share my screen. +YSV: We have agreed that the chair group as it is looks good, and we're going ahead with that. So I have to switch because now the next thing is my topic so I will share my screen. ### Conclusion/Resolution + 2021 Chair group is confirmed. ## Runtime Semantics for MemberExpression do not conform to web reality + Presenter: Yulia Startsev (YSV) + - [Pull Request](https://github.com/tc39/ecma262/pull/2267) -YSV: You folks may remember this issue from #2018, I believe it also came up in #2014. This is related to how the specification defines MemberExpression runtime semantics compared to how it actually works in implementations. Implementations have chosen to not directly implement the semantics here as written for a couple of reasons. I myself have worked through this and tried to fix it for Firefox. In our case RequireObjectCoercible was in the wrong order because it was not very efficient to implement in the way that it was specified. This issue arose again with private fields. +YSV: You folks may remember this issue from #2018, I believe it also came up in #2014. This is related to how the specification defines MemberExpression runtime semantics compared to how it actually works in implementations. Implementations have chosen to not directly implement the semantics here as written for a couple of reasons. I myself have worked through this and tried to fix it for Firefox. In our case RequireObjectCoercible was in the wrong order because it was not very efficient to implement in the way that it was specified. This issue arose again with private fields. 
YSV: Private fields are specified in the same way as our existing member expression runtime semantics. Firefox has now implemented it as specified – in contrast to our non-private implementation. The other major engines, JSC and V8, both use the behavior of member expressions consistently with themselves and are inconsistent with the spec. This is something that we should actually address fully before we add more complexity to this.

@@ -332,15 +342,15 @@ YSV: The PR that we have to fix this is would allow null and undefined in refere

JDH: I've tried to read through all this stuff and I'm not sure I understand. Is there no option that would make `null.y = f()` there throw before calling f, and if not, then why not? I understand that comment saying that the spec says that it should throw before it evaluates f and calls it, but with this PR my understanding is that it will evaluate and call f before it throws. So if that's the correct understanding, then my question is: why can't we make it do what the spec and intuition suggest that it should?

-YSV: I thought that actually it would call it but would still throw here when we do the step of assigning. We get the y key and then we access the entire expression and it would throw when assigning at least. 
+YSV: I thought that actually it would call it, but would still throw here when we do the step of assigning. We get the y key, and then we access the entire expression, and it would throw when assigning, at least.

KG: I believe it will call the f function, if I'm remembering reading this, which I might not be. But I believe it will call f.

-SYG: I thought that was the point of this PR, right? I thought that was the point as well. 
+SYG: I thought that was the point of this PR, right? I thought that was the point as well.

-JDH: So I guess that's my question. Like that's not the intuitive thing to me.
The intuitive thing is that it never even gets to the equal sign because it throws before, because the left hand side is nonsense. If there's a reason we can't fix that evaluation ordering than I'd love to know it and if not, then like it'd be great if there's a way to make the make the evaluation ordering match what I think the intuition of the average programmer.
+JDH: So I guess that's my question. That's not the intuitive thing to me. The intuitive thing is that it never even gets to the equal sign, because it throws before that, because the left-hand side is nonsense. If there's a reason we can't fix that evaluation ordering then I'd love to know it, and if not, then it'd be great if there's a way to make the evaluation ordering match what I think is the intuition of the average programmer.

-BSH: I would think the expectation is obviously you have to calculate what you're assigning before you assign it. So I'm not sure if that's a universal intuition. 
+BSH: I would think the expectation is obviously you have to calculate what you're assigning before you assign it. So I'm not sure if that's a universal intuition.

SYG: I just want to combine the two and respond to JHD as well. My intuition, and perhaps my intuition is colored by being a language implementer, but I have the same intuition that Bradford shared, which is that I expect most evaluation for assignments to happen on the right-hand side first, then the left-hand side, not in left-to-right order. And I'm fairly confident that this intuition is shared by language implementers, given the variety of bugs that we have around references all over the place, where by far and away the easiest thing to implement, not just easy but also intuitive and efficient, is to visit the right-hand side of your assignment, emit a bunch of stuff for that, then visit the target, and then emit a bunch of stuff for that.
Whereas what JS currently says is: first visit the left-hand side, do some stuff like error checking maybe, then visit the right-hand side, figure out what value we need to assign, then visit the left-hand side again. So it's this left-right-left back-and-forth thing, whereas you would really never implement it that way. You would always implement it as: let me just visit each side once. And because there's some state that's observable because you go left-right-left, that part is really unintuitive. If we were already consistent in that, like always visit the left-hand side first and then the right-hand side and that's it, that would be fine, but that's also not possible, right, because you can't perform the assignment until you've evaluated the right-hand side.

@@ -362,26 +372,25 @@ DE: I'm happy about this change.

JHD: I want to say "yes and". I just want to understand - this change is fine and I understand the implementer-motivated justification for it, and it seems fine. But I want to understand if this has any consequence on future proposals. And if so, what is it? We don't have to necessarily discuss that in plenary and that shouldn't necessarily block the PR, but I think it's important that that be clearly stated somewhere, and if I'm the only one who doesn't get it then I'll figure that out on my own time.

-YSV: How about we get in touch after and we'll figure that out together. 
+YSV: How about we get in touch after and we'll figure that out together.

-JDH: Thank you. 
+JDH: Thank you.

### Conclusion/Resolution

+
- Consensus on the PR
- YSV and JHD to follow up about implications

-
## RegExp match indices

Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-regexp-match-indices)
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkfgkZ2bXeIlMCiAK8w?e=640eSA)

-
RBN: Alright, so we've been having discussion over several meetings now about the regex match indices proposal.
It's been sitting at stage 3 while we were waiting for implementer feedback. In the last meeting, we had some feedback from Michael Saboff on the JSC implementation. We've had previous feedback from Shu on the V8 implementation, and there were some concerns we wanted to address. So that's why I'm bringing this back before committee today.

-RBN: first just to reiterate the motivations for this proposal. So the main motivations for providing the regex match indices is to add some information that we currently don't have a means of extracting with regular Expressions today, namely the start end and indices for capture groups within a match. Currently we only provide the index of the entire match rather than any individual capture groups and the only way to get the length is by checking. You string length of the match itself. So this doesn't provide enough useful information for parsing tools to be able to report a crime report accurate position information into some invalid text. That's person using a regular expression. Also, it doesn't give you the ability to use the native regular expression object for syntax highlighting such as used in text make grammars, which are used by a number of editors today. projects like the VSCode textmate package depend on the Node native bindings for only guruma to accomplish this and the other ways of being able to actually get these capturing groups is to capture everything and then calculate it out yourself and that's expensive complex and easy to make mistakes.
-
+RBN: First, just to reiterate the motivations for this proposal. The main motivation for providing the regex match indices is to add some information that we currently don't have a means of extracting with regular expressions today, namely the start and end indices for capture groups within a match. Currently we only provide the index of the entire match, rather than of any individual capture groups, and the only way to get the length is by checking the string length of the match itself. So this doesn't provide enough useful information for parsing tools to be able to report accurate position information, e.g. pointing into some invalid text that's parsed using a regular expression. Also, it doesn't give you the ability to use the native regular expression object for syntax highlighting, such as is used in TextMate grammars, which are used by a number of editors today. Projects like the VSCode textmate package depend on the Node native bindings for Oniguruma to accomplish this, and the other way of being able to actually get these capturing groups is to capture everything and then calculate it out yourself, and that's expensive, complex, and easy to get wrong.

RBN: So just a brief history of the proposal: it was adopted for stage 1 back in May of 2018. The original proposal unconditionally added an offsets property to the result of the RegExp built-in exec method at the time. We were aware of possible performance concerns, and there were some possible mitigations we were choosing to investigate. When we reached consensus for stage 2 in July of 2018, we had discussed mitigation strategies: one was either passing a callback to RegExp built-in exec, or passing an options object into exec, matchAll, etc., that would allow you to conditionally add the indices to the regular expression result. We advanced to stage 2, and this also gave us some time to investigate possible performance implications and whether or not they would have any meaningful impact on regular runtime code. In July of 2019 we advanced to stage 3. Several things we decided at that point were that both the callback and options object were subclassing hazards: they ran into issues with @@match, the built-in match symbol, and would be problematic if anyone tried to do subclassing, and we've discussed the complexity of subclassing with RegExp built-ins a number of times in the past.
Another thing that we had done is we had changed the name of the offsets property to indices, to more accurately align with the naming that we were using for lastIndex and match.index, so it kept us within the same nomenclature. The proposal name itself changed at that time, and we also had some feedback that early performance investigation for V8 indicated the overhead might be negligible. Consensus at that point was on using a simpler API that unconditionally added indices (again, based on the original stage 1 proposal) to the result from RegExp built-in exec, and we advanced to stage 3 with the simpler API. Some additional updates: in December of 2019 we had a stage 3 update where V8 shared their implementation concerns from a full implementation based on the spec. There was a question posed at the time as to whether or not we were willing to move forward with a proposal at stage 3 without changes: even with these possible performance costs, were we willing to pay them? At the time the conclusion was that we would make no changes, and that leads us to the last meeting, in November of 2020, where JSC shared their performance concerns and proposed mitigation steps as well, given that we now had two implementers that had concerns about performance that we might want to investigate individually. We concluded in that meeting that the implementers and the champion would revisit mitigations and come back to plenary with the result of our discussion.

@@ -391,9 +400,9 @@ RBN: One of the things we found is that most of these implementations do not use

RBN: Perl uses the `d` flag for a Perl backwards compatibility feature, and they indicate in their official public documentation that it should be avoided if possible; the documentation specifically says don't use it unless you have to. That's because Perl changed their default behavior for regular expression parsing to consistently support Unicode in certain ways, and `d` was added as a backwards-compatibility path for old behavior that you should not use anymore.

-RBN: Java uses the `d` flag to limit the how the dot carot and dollar patterns match to only match new line and not other possible new lines, it wouldn't match carriage return, line feed for example. I don't see that as being something we do that we would be that concerned about
+RBN: Java uses the `d` flag to limit how the dot, caret, and dollar patterns match, to only match newline and not other possible newlines; it wouldn't match carriage return + line feed, for example. I don't see that as being something we would be that concerned about.

-RBN: and the only other one that uses d is on (?), which uses lower case d for backwards compatibility with Ruby (?. Other than that, no other language uses it with the exception of only guruma which uses a capital d, which wouldn't be a conflict and d and all of these cases is not considered to be a standard flag. It's they're considered to be extension flags from what is a normal regular expression. One thing we did also consider but rejected was using an uppercase. I, there are a number of regular expression implementations that use uppercase Flags such as Java util RegExp Oniguruma etcetera. A couple of things when we could jump in. 
+RBN: And the only other one that uses d is (?), which uses lowercase d for backwards compatibility with Ruby (?. Other than that, no other language uses it, with the exception of Oniguruma, which uses a capital D, which wouldn't be a conflict. And d in all of these cases is not considered to be a standard flag; they're considered to be extension flags beyond what is a normal regular expression.
One thing we did also consider but rejected was using an uppercase I; there are a number of regular expression implementations that use uppercase flags, such as java.util.regex, Oniguruma, etcetera. A couple of things, when we could jump in.

SYG: Is Onigmo a different implementation from Oniguruma?

@@ -411,7 +420,7 @@ RBN: There's a number of cases of flags that we might want to introduce in the f

MF: I disagree with those statements, but for this particular proposal, I think that your justification was fine.

-RBN: I wanted to just kind of just bring up the status of where things are right now because this also speaks to the flag. So the proposal is currently stage 3. As far as stage four criteria progress, we have the proposal spec text. There is a pull request that's already out for the version that does not have a d flag. The pull request has been merged for test 262, the version that does not have D. There is a PR for test262 with the addition of the D flag that has not been merged yet as far as I'm aware. There is a pull request for the pr which is out of date. It needs to be updated based on the D flag. There are two implementations. That one is prior to the `d` flag and the other one is not yet shipping. 
+RBN: I wanted to just bring up the status of where things are right now, because this also speaks to the flag. The proposal is currently stage 3. As far as stage 4 criteria progress: we have the proposal spec text, and there is a pull request that's already out for the version that does not have a `d` flag. The test262 pull request has been merged for the version that does not have `d`. There is a PR for test262 with the addition of the `d` flag that has not been merged yet, as far as I'm aware. There is a pull request for the pr which is out of date; it needs to be updated based on the `d` flag. There are two implementations: one is prior to the `d` flag, and the other one is not yet shipping.
So we're still doing some investigation.

SYG: So we have an implementation of the `d` flag in V8 and it pretty much matches the feedback that Michael Saboff gave for the flag in JSC. There do need to be two shapes that you cache at VM start time: one for the result objects without the indices and one for result objects with the indices. But otherwise it fixes the performance regressions, with the exception of the flags getter. And this is perfectly acceptable, because I don't believe the flags getter needs to be particularly performant. But having a new flag has the effect of - due to the terrible subclassing that we have, speaking for my own opinion there - if you have a subclassed regexp, you have to do a regular property lookup of the hasIndices boolean getter on the prototype, so that's like another property lookup. It complicates the flags building a little bit, but I don't think that's a big deal at all. I just want to call out that the flag is not completely free; it does have some implications for the subclassing. Other than that, there were no issues with the implementation and it should be good to go. If we get consensus here, it should be smooth sailing to land and try to ship this in V8.

@@ -421,7 +430,7 @@ MS: So JavaScriptCore we plan on landing the `d` flag any day now and we will la

MM: SYG, could you just say very briefly what the status is of the attempt to remove the subclassing weirdness from the language?

-SYG: Yes, unfortunately the V8 team hasn't had many cycles to build a custom version of V8 and Chrome to test the various different versions. I don't remember if you remember the kind of classification of the different types of subclassing, but we plan to still build out a custom version of the engine to see what breaks.
The risk is still high that type 2 removal is not possible whatever path we take, but we're still feeling optimistic for types 3 and 4, and particularly this subclassing issue for regexes is type 4, I believe, where we delegate to overridden methods and getters on the subclass instance. So yeah, this year we'll get to it. If we get an intern headcount, that would speed things up, but otherwise we'll get to it this year. Okay, thank you.
+SYG: Yes, unfortunately the V8 team hasn't had many cycles to build a custom version of V8 and Chrome to test the various different versions. I don't remember if you remember the kind of classification of the different types of subclassing, but we plan to still build out a custom version of the engine to see what breaks. The risk is still high that type 2 removal is not possible whatever path we take, but we're still feeling optimistic for types 3 and 4, and particularly this subclassing issue for regexes is type 4, I believe, where we delegate to overridden methods and getters on the subclass instance. So yeah, this year we'll get to it. If we get an intern headcount, that would speed things up, but otherwise we'll get to it this year. Okay, thank you.

YSV: Mozilla is also working on this. We have a build where we're going to see what the removal will look like via Telemetry. We might have some data soon.

@@ -441,15 +450,14 @@ RBN: I appreciate it. Thank you.

Consensus on the `d` flag for match indices.

-
## JSON Modules for stage 3

Presenter: Dan Clark
+
- [slides](https://docs.google.com/presentation/d/1pHLXcoMX-DiJ3MFFu3ts8U7zDGdVrDnXeB9-Njh68q0/edit?usp=sharing)
- [proposal](https://github.com/tc39/proposal-json-modules)
-
-DDC: Okay, folks should see slides, and if you don't see slides, please stop me. So we are coming back with JSON modules again for stage 3. Just a quick recap of what this was: it's this syntax where I can use an import statement to import a JSON object from a JSON file.
What this is about: the proposal is basically stating what a host is required to do when the type: "json" import assertion is present, with the goal of achieving consistent behavior for JSON modules across hosts. The big thing that has been the sticking point for stage 3 with this proposal is the question of whether the JSON object you get back is immutable or not. The arguments for why it should be mutable were that it's more natural to developers, who are used to mutability in JS modules; that other future module types being discussed, such as CSS modules on the web, are also going to be mutable; and that if JSON modules are immutable, it blocks scenarios where a developer might need to modify them. On the other side of the coin, the disadvantage of mutability is concerns about bugs where one module imports a JSON module and changes it, and another module doesn't expect that change and doesn't know that it should be doing something to get a fresh copy.
+DDC: Okay, folks should see slides, and if you don't see slides, please stop me. So we are coming back with JSON modules again for stage 3. Just a quick recap of what this was: it's this syntax where I can use an import statement to import a JSON object from a JSON file. What this is about: the proposal is basically stating what a host is required to do when the type: "json" import assertion is present, with the goal of achieving consistent behavior for JSON modules across hosts. The big thing that has been the sticking point for stage 3 with this proposal is the question of whether the JSON object you get back is immutable or not.
The arguments for why it should be mutable were that it's more natural to developers, who are used to mutability in JS modules; that other future module types being discussed, such as CSS modules on the web, are also going to be mutable; and that if JSON modules are immutable, it blocks scenarios where a developer might need to modify them. On the other side of the coin, the disadvantage of mutability is concerns about bugs where one module imports a JSON module and changes it, and another module doesn't expect that change and doesn't know that it should be doing something to get a fresh copy.

DDC: So there's been quite a bit of discussion back and forth on these. There are proponents of both sides of this argument, and it's looked like we're not really going to come together and all agree on what the best approach is. Where things left off at the last meeting in November was that we were reading the temperature of the room and thinking that there were multiple people on both sides of this debate. Our impression was that the temperature was more towards the immutable side; however, it seemed at the time that there were no blocking objections on either side. This turned out not to be correct, and we do have a blocking objection if we were to do immutable JSON modules. Given that, we want to come back today and ask for stage 3 on the mutable version, with the understanding that there are still folks who feel that either side of this is correct. It seems unlikely that everybody is going to come to the same preferences here, but our understanding now is that there are no blocking objections for mutable JSON modules. We want to ask for stage 3 for this version of the proposal, keeping in mind the greater goal of hopefully achieving interoperability between JSON modules on all hosts.

@@ -461,7 +469,7 @@ JDH: Yeah, sure. That's me. 
So, I think that the idioms of the JavaScript langua

MM: Thank you.

-CM: This was the follow-up to what JHD said. He said “just be the first importer”. What does that mean? I don't know how to interpret that.
+CM: This was the follow-up to what JHD said. He said “just be the first importer”. What does that mean? I don't know how to interpret that.

JHD: Sure. So in the same way, if you want to be the first code run in your realm so that you can lock down `Array.prototype`, let's say, then depending on how your application is built you have to arrange that your lockdown code is the first to run. So you do that, and then however you achieve that, you can lock down or modify `Array.prototype` however you like, and then all future code will believe, or run under the impression that, this is how the realm was born.

@@ -502,7 +510,7 @@ AKI: any objections? [silence] I think we can go ahead and call that consensus.

Outcome: Consensus for stage 3, with mutable semantics

## Array.isTemplateObject
+
Presenter: Krzysztof Kotowicz
+
- [slides](https://docs.google.com/presentation/d/1a16AxSDVyvZgvt8n2PX4_dYgnMUTvPt0WZr3aNJ3gfI/edit#slide=id.p)
- [proposal](https://github.com/tc39/proposal-array-is-template-object)

@@ -514,7 +524,7 @@ KOT: For example, we can have a sensitive operation (e.g. HTML templating) imple

KOT: However, the problem is that this template is trusted implicitly. So it is possible that your code base also contains functions that don't really consider the fact that the first argument of the sensitive operation function (the template) should never be user-controlled and should come from trusted sources. One can have a wrapper function that creates an array and calls the sensitive operation with something that comes from strings that ultimately might have been attacker-controlled. And in practice these cases happen. The proposal is to make a check that is robust against this class of problems. Currently JS also allows for solving that partially.
You can do some kind of a weak, so to speak, brand check for template objects. For example, template objects are frozen and they have a frozen 'raw' property, so you can brand-check for that. However, of course this is weak because it's forgeable. So what I want to propose is something that is not weak in that way.

-The mechanism to enable this is terribly simple. We just add a [[TemplateObject]] slot to Arrays; on regular arrays it's simply set to false, and in the GetTemplateObject algorithm we set the slot value to true. And then (this is the contentious part), we introduce a new function, Array.isTemplateObject, to read this slot value and to serve as a brand check. This allows for a secure literalness check. You could capture the isTemplateObject function early and have a target template tag function that brand-checks input when called and, for example, makes an instanceof check to make sure that the template object comes from the same realm. This explicit trust check asserts that the template is definitely part of the user application's code and could not have been user-controlled - it's safe now. That function also wraps nicely because the whole template object is frozen.
+The mechanism to enable this is terribly simple. We just add a [[TemplateObject]] slot to Arrays; on regular arrays it's simply set to false, and in the GetTemplateObject algorithm we set the slot value to true. And then (this is the contentious part), we introduce a new function, Array.isTemplateObject, to read this slot value and to serve as a brand check. This allows for a secure literalness check. You could capture the isTemplateObject function early and have a target template tag function that brand-checks input when called and, for example, makes an instanceof check to make sure that the template object comes from the same realm.
This explicit trust check asserts that the template is definitely part of the user application's code and could not have been user-controlled - it's safe now. That function also wraps nicely because the whole template object is frozen.

KOT: One can build on top of that - for example, a generic script URL validation function that doesn't need to encode your particular application rules (script origins, for example, or schemas for the URLs that you deem safe to load scripts from). Just make a brand check (the URL template is a template object), parse the whole template as a URL, and then optionally introduce some other basic checks; for example, maybe your application wants to allow only scripts from a certain domain, or maybe you want to disallow interpolations completely, or maybe you want to allow them only in the query part of the URL. With such a library function you can create values which you know have come from authored code and will only load scripts that were trustworthy enough for your code authors.

@@ -524,11 +534,11 @@ KOT: This check works well with other XSS prevention mechanisms built into the w

KOT: Mark Miller raised two issues. One of them is that Array.isTemplateObject is based on an internal slot, which is cross-realm. If you have a template created in a separate realm that has access to your realm, it would have the brand; internal slots are cross-realm. Mark’s opinion (Mark, please correct me if this is not the correct representation) is that we should stop adding cross-realm internal slots in ECMAScript, because each of them creates this problem for membranes, and the membrane systems would have to work around it. The second is that eval breaks its robustness.

-KOT: For the membrane transparency argument. 
I believe it's currently accepted practice to introduce algorithms which use internal slots for making brand checks or other kinds of decisions, and the cross-realm-ness of internal slots is a feature. There are more examples which violate membrane transparency in the same way. For example, JSON.stringify will look into its arguments' internal slots; ArrayBuffer.isView behaves the same; so do some functions on Promise objects. I think another clear example is that the web platform uses internal slots for brand-checking objects everywhere - you can't really use a Proxy for an element as an element. It just doesn't work.
+KOT: For the membrane transparency argument. I believe it's currently accepted practice to introduce algorithms which use internal slots for making brand checks or other kinds of decisions, and the cross-realm-ness of internal slots is a feature. There are more examples which violate membrane transparency in the same way. For example, JSON.stringify will look into its arguments' internal slots; ArrayBuffer.isView behaves the same; so do some functions on Promise objects. I think another clear example is that the web platform uses internal slots for brand-checking objects everywhere - you can't really use a Proxy for an element as an element. It just doesn't work.

KOT: The tentative solution Mark proposed was to move Array.isTemplateObject to Array.prototype, which solves the transparency problem. The concern I would like to put under discussion is this: I'm not sure whether membrane transparency is the axiom that we should be following. For me personally, as the champion, it really doesn't matter whether the function is on the prototype or not; both work. I think it needs some decision or some consensus around how brand checks should work in ECMAScript.

-MM: Okay, so first of all, thank you. You conveyed my position pretty well. 
There's one major additional qualification that I'd like to add, which I've said as part of the threads, so I know you're already clear on it. The reason why moving it onto the prototype solves one of the two issues is the membrane transparency issue. There's a tremendous number of methods that access internal slots on their `this` argument and that are inherited by instances from prototypes. The argument about practical membrane transparency, for those methods that access internal slots, is that the typical way one invokes the method - Date is a great example - is by taking a date instance and saying `.getFullYear()`. So you're fetching the method itself through the same membrane which wraps the object, and then everything just works. The case that doesn't work is when you do the equivalent of `Date.prototype.getFullYear.call`, starting with the method in your own realm and applying it to a membrane proxy for a date in another one. So our criterion is what we've been calling practical transparency, and admittedly what Krzysztof is doing is the right methodology to probe that restriction. I'm very interested in accumulating a list of violations like the ones he mentioned. I would say two of them are not practical objections to transparency, for other reasons. One, because it involves the primitive wrapper objects - `new Number(...)`, `new String(...)` - and that never happens implicitly. Those are never created by strict code; we got rid of all the implicit creation, and I have never come across a reason to create them explicitly. So I think that practical code just doesn't encounter the wrappers, and transparent behavior across realms with regard to the wrappers is a non-issue.
+MM: Okay, so first of all, thank you. You conveyed my position pretty well. There's one major additional qualification that I'd like to add, which I've said as part of the threads, so I know you're already clear on it. The reason why moving it onto the prototype solves one of the two issues is the membrane transparency issue. There's a tremendous number of methods that access internal slots on their `this` argument and that are inherited by instances from prototypes. The argument about practical membrane transparency, for those methods that access internal slots, is that the typical way one invokes the method - Date is a great example - is by taking a date instance and saying `.getFullYear()`. So you're fetching the method itself through the same membrane which wraps the object, and then everything just works. The case that doesn't work is when you do the equivalent of `Date.prototype.getFullYear.call`, starting with the method in your own realm and applying it to a membrane proxy for a date in another one. So our criterion is what we've been calling practical transparency, and admittedly what Krzysztof is doing is the right methodology to probe that restriction. I'm very interested in accumulating a list of violations like the ones he mentioned. I would say two of them are not practical objections to transparency, for other reasons. One, because it involves the primitive wrapper objects - `new Number(...)`, `new String(...)` - and that never happens implicitly. Those are never created by strict code; we got rid of all the implicit creation, and I have never come across a reason to create them explicitly. So I think that practical code just doesn't encounter the wrappers, and transparent behavior across realms with regard to the wrappers is a non-issue.
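The internal-slot behavior under discussion can be seen directly: a bare forwarding Proxy does not forward internal slots, so slot-based operations fail or mis-classify through it. This is a minimal editor's sketch; a real membrane would additionally wrap the methods so that invoking them unwraps the receiver, which is the "practical transparency" being described:

```javascript
// Internal-slot access is not transparent through a plain Proxy.
const date = new Date(2021, 0, 25);
const dateProxy = new Proxy(date, {});

console.log(date.getFullYear()); // 2021
try {
  dateProxy.getFullYear(); // `this` is the proxy, which has no [[DateValue]] slot
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// Promise.resolve passes a real promise through by identity...
const p = Promise.resolve(42);
console.log(Promise.resolve(p) === p); // true

// ...but a proxy for it lacks the promise internal slots, so it is only
// recognized as a thenable and gets re-wrapped (and here the re-wrapped
// promise even rejects, because `then` invoked on the bare proxy throws).
const pProxy = new Proxy(p, {});
const rewrapped = Promise.resolve(pProxy);
rewrapped.catch(() => {}); // swallow that rejection for the demo
console.log(rewrapped === pProxy); // false
```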
MM: The more interesting one, the more painful one, is Promise.resolve, which, when you apply it to a promise, recognizes it as a promise and returns it without coercing. If you apply it to a proxy for a promise, it does not recognize that as a promise; however, it does recognize it as a thenable. The reason we introduced the whole thenable thing, which we've paid a tremendous price for, was because we introduced it into an ecosystem in which we were coexisting with many other promise libraries, and we wanted that coexistence to be as transparent as possible. So the thenable assimilation trick was one that enabled each promise system, including the built-in one, to some extent to treat the other promises as if they were their own promises, and that worked for a tremendous amount of code. And that's the case here as well.

@@ -550,7 +560,7 @@ WH: Yes, but you're specifically not trying to protect against hostile code here

KOT: Well, this being, for example, easier to analyze, right?

-WH: It's a tradeoff because you're making things like proxies and membranes much harder — you're adding complexity and I just don't see that you're getting anything for it. There were alternatives listed earlier and I want to understand why those do not work, or if they do.
+WH: It's a tradeoff because you're making things like proxies and membranes much harder — you're adding complexity and I just don't see that you're getting anything for it. There were alternatives listed earlier and I want to understand why those do not work, or if they do.

DE: Historically, the biggest alternative here was a way to check whether a string is literal. This was proposed in TC39 a while ago. I think that's very problematic because suddenly certain strings that have the same value are literal and some are not, and you have to sort of flow this literalness through everything.
This template check is very localized and it gives you a very concrete check of a very easy-to-understand property. I agree with KOT’s answer to the question about whether this is making proxies and membranes more difficult, because I completely disagree with that characterization.

@@ -596,9 +606,9 @@ AKI: This looks like there's a lot to be resolved. We need to move on. So if we

Conclusion: No advancement

-Reason: 
+Reason:

-* Solution is incomplete: Argument that the proposal is meaningless unless all evals are suppressed. This makes the assumption that all evals that are reachable in the object graph are suppressed and that there are no objects created by an attacker within the object graph, but this doesn’t take into account threat models that members are concerned about
+- Solution is incomplete: Argument that the proposal is meaningless unless all evals are suppressed. This makes the assumption that all evals that are reachable in the object graph are suppressed and that there are no objects created by an attacker within the object graph, but this doesn’t take into account threat models that members are concerned about

## JS Module blocks

@@ -624,7 +634,7 @@ MM: and I also wanted to bring up that there is a potential confusion calling th

SUR: I think when I presented for stage 1 we already had short discussions on the compartments repo and on the proposal repository. I think those address at least some of your questions, so I would ask you to take a look at them, and if there are still more questions to be had, open them on the repository, if that makes sense for you; that would be helpful.

-MM: Yeah, that makes sense. I think we're very much aligned in this proposal and any rough spots I expect us to work through quickly.
+MM: Yeah, that makes sense. I think we're very much aligned in this proposal and any rough spots I expect us to work through quickly.
JRL: Okay, in one of your slides you describe the module object as going to be a class - Module object, or module block. Is this different than the module namespace object that you get with import *? In my head they seem like the exact same thing.

@@ -686,37 +696,37 @@ RBN: in addition break and continue statements are forbidden would not necessari

RBN: Super call is forbidden. There is nothing you can do to call super in a static initializer.

-RBN: Super property is permitted. This allows you to invoke methods on a base class using super, because `this` is preserved, which is another item listed here. The `this` receiver in the static block is the constructor function of the class, so that you can do this-dot assignments. `arguments` is also forbidden; we don't capture the arguments of the outer scope because, again, this is very much like a function evaluation. And just like a function evaluation, if you create a var declaration inside of the static block, it is not hoisted outside of the static block; it stays locally scoped. There are a couple things on the queue that I think would be useful to address at this particular point before I move on to the next slide.
+RBN: Super property is permitted. This allows you to invoke methods on a base class using super, because `this` is preserved, which is another item listed here. The `this` receiver in the static block is the constructor function of the class, so that you can do this-dot assignments. `arguments` is also forbidden; we don't capture the arguments of the outer scope because, again, this is very much like a function evaluation. And just like a function evaluation, if you create a var declaration inside of the static block, it is not hoisted outside of the static block; it stays locally scoped. There are a couple things on the queue that I think would be useful to address at this particular point before I move on to the next slide.
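The semantics just listed can be sketched in a small example (an editor's illustration of the champion's described behavior, runnable in engines that have since shipped class static initialization blocks; the class and names are invented):

```javascript
class Counter {
  static {
    // `this` inside the static block is the Counter class itself,
    // so this-dot assignments define static properties:
    this.initial = 0;
    // A `var` declared here stays local to the block; it is not
    // hoisted out into the surrounding scope.
    var scratch = 42;
  }
  static next() {
    return ++this.initial;
  }
}

console.log(Counter.initial); // 0
console.log(Counter.next());  // 1
console.log(typeof scratch);  // 'undefined' - the var did not leak
```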
-KG: So I guess I’m first on the queue. Sorry, I was busy taking notes. Sure, I can talk about this now. Basically this behavior that you've been describing, where it behaves like an IIFE, is really strange to me. It doesn't look to me like there is a natural define-then-call function boundary; it looks more like it's just a nested block inside of the outer context, and so having all of these restrictions - having it behave like you had written an IIFE without it syntactically looking like you are invoking a function - is very strange to me. My preference is that you should be able to inherit the yield status and the await status from outside of the class, and that var declarations would hoist to the containing function, and so on, like any other block.
+KG: So I guess I’m first on the queue. Sorry, I was busy taking notes. Sure, I can talk about this now. Basically this behavior that you've been describing, where it behaves like an IIFE, is really strange to me. It doesn't look to me like there is a natural define-then-call function boundary; it looks more like it's just a nested block inside of the outer context, and so having all of these restrictions - having it behave like you had written an IIFE without it syntactically looking like you are invoking a function - is very strange to me. My preference is that you should be able to inherit the yield status and the await status from outside of the class, and that var declarations would hoist to the containing function, and so on, like any other block.

RBN: There are reasons that I disagree with that. Primarily, if you're coming from another language that has this capability - and there are several that have this, in addition to Java and C# - those all have the same semantics: it's essentially a function that is evaluated with a specific state and specific scope.
If we were to evaluate this as a block and divorce ourselves from any prior art in this space - just treat it as a block that runs inside the class - there are things that we would end up doing that are a violation of other constraints that we have in the language, or maybe not necessarily constraints, but other pre-existing conventions. For example, if I wanted to do an assignment to a static property, how do I do that assignment? Normally, I would use `this`, especially if I was translating a static initializer that already used `this` assignments. Say I had `static x = 1; static y = this.x + 1;`, for example - if you move that into a static block for initialization, you need something to reference. That could be the class name, but we do have cases where classes don't have names. It's possible you could introduce the class name, but we have a general practice of allowing you to use `this` in static methods and static initializers, so using `this` in a static block seems like it should be the right way to go; it makes the most sense from a developer's perspective coming from those cases. If it were just like a block, but inside the class, then the `this` would have to be the outer `this`, not the inner `this`, and that can be confusing and things won't refactor properly; and we have already had discussions about the class access syntax that have caused that proposal to freeze in place for the time being. So we're running out of options, and the most well-recognized option is that we would use `this`; that's just generally what works in JavaScript. There is no other place in the language today where `this` means something different just because you went into a different block, unless you're going into a new function scope. `this` doesn't change inside of a catch clause. `this` doesn't change inside of a regular block.
`this` doesn't change inside of a try statement or a for loop; `this` is always bound once. And if we change that just for this block, then that can be very confusing - why would yield and await work when `this` doesn't? So you can't necessarily just take existing code that's outside the block and refactor it into a static initialization block; there are always going to be caveats you have to work with. There's value, however, in this being its own block: from a debugging perspective - stack traces, finding out where my code broke - you have line information, and it's also useful to know that it happened during static initialization. Being able to use super property access, so I can reference the base class implementation of a static method versus my overridden version, is something that's necessary, because I might need to do that; and having super change meaning inside of a block would again violate existing expectations about how blocks and functions work. So pretty much everything we need to make this a usable feature mandates that it be essentially a function environment.

-KG: So I disagree with basically every single point there. Let me just go through them. So first off, in Java, I don't think that Java's semantics are very clearly "it's an IIFE". I think Java's semantics are actually much closer to "it's just some code that's running as if it's a block". As a concrete example, you bring up `this`; but in Java, `this` in a static block just doesn't work - it is a syntax error to use `this` in a static block. It is not rebound to the class, and I think that also points to it not being necessary that `this` be bound to the class, since it's not in Java.
+KG: So I disagree with basically every single point there. Let me just go through them. So first off, in Java, I don't think that Java's semantics are very clearly "it's an IIFE".
I think Java's semantics are actually much closer to "it's just some code that's running as if it's a block". As a concrete example, you bring up `this`; but in Java, `this` in a static block just doesn't work - it is a syntax error to use `this` in a static block. It is not rebound to the class, and I think that also points to it not being necessary that `this` be bound to the class, since it's not in Java.

RBN: I'd like to respond to that real quick. The reason is that C# is the same way: you can't use `this` to reference the static side of a class inside of a block, but you don't have to reference anything - you just use the name of the variable. That is something we don't support in JavaScript and never have: you can't just declare `x` as a field and then say `x` inside of a method and have that reference the field; you have to say `this.x`. So the this-dot usage in JavaScript is, in many places, a de facto part of the language. So unfortunately we have to have a way to reference these fields, and your statement doesn't hold with `this`, because there's no other way to access those properties. All right.

-KG: There is another way to access those properties, which is to reference the class name - the way that you currently access those properties. So anyway, the main point is that you said that the precedent from other languages is that it should be IIFE-like, and I don't think that's true; I don't think it's IIFE-like in Java. You also said that it's necessary to have `this`; I disagree - I think that referring to the class name is more natural. The third thing is you said that there's no other context in which we introduce a new `this` binding without having a function-like context.
My contention is that this is not a function like context and so introducing a new this binding is know like if it is weird with the current semantics then it is equally weird if you can like await across the boundary people are not going to suddenly start seeing this as a function. It's not like a function if it Is this that is just a weirdness that it has now. There is a natural place to consider there to be a function like boundary, which is the body of the class as opposed to the body of the static block body of the class is a natural place to consider the function like boundary because the body of the class introduces a new strictness context and this is the only context where we introduce we go from sloppy mode to strict mode without transitioning to a new function context. So if you considered the body of of the class to be a new function context. Well, that doesn't agree with the semantics that you're proposing here because the computed property names in the body of the class have visibility of this and they have visibility of is the value outside of the class. It is not the class itself. So that's just not like a coherent way to say that the static block is a new function. And I don't think it's necessary to do. So, I think it is more natural to consider it to just be a block and to allow people to write the class name to refer to properties +KG: There is another way to access those properties, which is to reference the class name. The way that you currently access those properties. So anyway, I just the main point is that you said that the semantics from the president's from other languages is that it should be if he liked and I don't think that's true. I don't think it's if you like in Java, you also said that it's necessary to have this I disagree. I think that referring to the class name is more natural. So the third thing is you said that there's no other context in which we introduced a new this binding without like having a function like context. 
My contention is that this is not a function like context and so introducing a new this binding is know like if it is weird with the current semantics then it is equally weird if you can like await across the boundary people are not going to suddenly start seeing this as a function. It's not like a function if it Is this that is just a weirdness that it has now. There is a natural place to consider there to be a function like boundary, which is the body of the class as opposed to the body of the static block body of the class is a natural place to consider the function like boundary because the body of the class introduces a new strictness context and this is the only context where we introduce we go from sloppy mode to strict mode without transitioning to a new function context. So if you considered the body of of the class to be a new function context. Well, that doesn't agree with the semantics that you're proposing here because the computed property names in the body of the class have visibility of this and they have visibility of is the value outside of the class. It is not the class itself. So that's just not like a coherent way to say that the static block is a new function. And I don't think it's necessary to do. So, I think it is more natural to consider it to just be a block and to allow people to write the class name to refer to properties RBN: again, my biggest concern with any of that is we are already in the class static features proposal allowing this in initializers if we don't allow it in a static block and have it reference reference to class then we It'll be a stumbling block for anyone that needs to transition over whether they need to change it from a class to the to use a class name or not. And I'd rather maintain consistency. So I think having this binding is very important. You said that for Java it acts like it's just a statement in the outer block. 
Can you create—I can't recall seeing this written out—but can you create a class in a statement context, or are they all basically top level?

-KG: Java is a class-based language. You can create a class with in another class and you can in some contexts create an anonymous class in expressing confidence, but right there's not a like really clean way to differentiate whether it's an iffy are not
+KG: Java is a class-based language. You can create a class within another class, and you can in some contexts create an anonymous class in expression contexts, but right, there's not a really clean way to differentiate whether it's an IIFE or not.

-RBN: because in those cases from what I understand. Java also does not carry over things like break or continue. They have explicit handling on how return that return cannot work. So again, doesn't you can't put me in a car?
+RBN: Because in those cases, from what I understand, Java also does not carry over things like `break` or `continue`; they have explicit handling of how `return` cannot work. So again, you can't put them in a context where a `return` could occur?

KG: You can't be in a context in which you could—like, the places that you can write a class are not places that you could write a `return`. So it just doesn't come up.

-DH: I think there's more people in the queue that have thoughts about these topics. All right.
+DH: I think there's more people in the queue that have thoughts about these topics. All right.

?: Y'all ready to— maybe Bradley has a drink first.

Bradley: I think there's a lot of passionate debate. That's good. But the restrictions here seem to match what I'm seeing reading the static field initializer restrictions. So regardless of whether something is a function or not, it seems like most of the restrictions do actually match static field initializers.
If not all of them, then maybe we could reframe it that way and have a different discussion on why it shouldn't match static field initializers.

-RBN: I also want to bring up this was brought up in the issue thread where you had discussed this on the issue tracker. There's already another proposal that is talking about whether not classes themselves could theoretically have async constructors and the carrying over of await from an outer context can seem very weird in those cases. So the main rationale for a lot of the design and reserving await is that we don't know what a lot of these cases are going to be yet. So I've reserved await so that if we decide that we will never allow await to carry over from the Block then we don't have to change anything if we decide that we do want to allow await to carry over into the block for some reason then we can remove the Restriction. But if we allowed a weight as an identifier, then we couldn't remove this restriction because it could theoretically be a breaking change. Yield isn't an issue because yield is a reserved word so it can't can't be used as a Identifier in strict mode code.
+RBN: I also want to bring up something that was discussed in the issue thread on the issue tracker. There's already another proposal that is talking about whether or not classes themselves could theoretically have async constructors, and the carrying over of `await` from an outer context can seem very weird in those cases. So the main rationale for a lot of the design, and for reserving `await`, is that we don't know what a lot of these cases are going to be yet. I've reserved `await` so that if we decide we will never allow `await` to carry over from the block, then we don't have to change anything; and if we decide that we do want to allow `await` to carry over into the block for some reason, then we can remove the restriction. But if we allowed `await` as an identifier, then we couldn't remove this restriction, because removing it could theoretically be a breaking change. `yield` isn't an issue because `yield` is a reserved word, so it can't be used as an identifier in strict mode code.

RBN: So we're trying, as much as we can, to maintain these invariants, but there are still invariants I think are profoundly important, like having the accurate `this` binding, in that it is essentially lexically bound rather than dynamically bound. Whether or not this is treated like a function body, I still don't necessarily believe that `var` should be hoisted out of it. And again, you've mentioned there's already the weirdness of a new strict mode context when you switch; plus, when this is evaluated we're already in a new declarative environment, due to just how private field scoping and public field scoping work in the various proposals. Unless there's anything else, I think we can move forward.

-AKI: all right great moving on is called Amar and she also has a reply to this specific topic and there's a lot but there's also some new topics to get to so, please keep that in mind.
+AKI: All right, great, moving on. Amar is next, and she also has a reply to this specific topic. There's a lot, but there are also some new topics to get to, so please keep that in mind.

WH: I find Ron’s semantics here to be quite reasonable. The treatment of `this` makes sense and produces the least amount of friction when refactoring code. So I support the semantics at least as far as `this` is concerned, along with the related features.

@@ -724,33 +734,33 @@ AKI: You know, and I know this isn't all JavaScript joke, and I'm not meaning to

SYG: Okay, so there are weird inconsistencies that cut both ways about treating it as a block-like thing or a function-like thing.
I strongly agree with Ron that we really don't want `var`s to hoist out, and if we treat it like a block then naturally we would expect `var`s to hoist out. But on the other hand, to the "let's align with field initializers" point from Bradley: I want to clarify a little bit that the restrictions we're talking about—like no `return`, no `break`—just fall out today because the initializer is in an expression position and not a statement position, right? The mental model is not that the restrictions match because we designed them that way; sorry—the properties match because those things just don't work in an expression, right?

-BF: I don't really like trying to mental model because we're a diverse group of people, but I would say that they're in an expression position is a good point. However things like super call being forbidden makes sense in the same way, right we have is also forbidden
+BF: I don't really like pushing a mental model, because we're a diverse group of people, but I would say that "they're in an expression position" is a good point. However, things like a `super()` call being forbidden make sense in the same way, right? `await` is also forbidden.

KG: Sorry—could we get another note taker? I'm trying to follow the conversation and I can't also take notes.

-AKI: All right, another note taker please. Remember it's way easier now than it used to be. You just have to edit with the quit the computer says instead of writing down every word or attempting to write an agreement about Thank you.
+AKI: All right, another note taker please. Remember, it's way easier now than it used to be: you just have to edit what the computer says instead of writing down every word or attempting to transcribe it all yourself. Thank you.

-BF: the only thing about expression position that I find interesting. Well poking around in different browsers right now is `await` is sometimes used as an identifier, not as the operator in classes. I don't know if that's a bug I would have to reread some possibly. So that basically leaves us with `return` and I don't understand what you would be returning from just like I don't understand what you would return when you're using a field initializer. So I think although they’re expressions they just don't have a clear slot that they’re returning.
+BF: The only thing about expression position that I find interesting—well, poking around in different browsers right now—is that `await` is sometimes used as an identifier, not as the operator, in classes. I don't know if that's a bug; I would have to reread the spec, possibly. So that basically leaves us with `return`, and I don't understand what you would be returning from, just like I don't understand what you would return when you're using a field initializer. So I think, although they're expressions, they just don't have a clear slot that they're returning to.

SYG: Right, and to try to wrap up this whole discussion and take a step back: it seems like—please correct me if I'm wrong, Kevin—the main problem that you want to solve is not getting `return` and `break` to work, but that you want stuff that has `await` in the static initializer block. And if we were to take the position of aligning with field initializers, where `await` is disallowed, that naturally precludes that whole use case. So the question to Ron, or to the committee, is: do we care about allowing the `await` use case to be expressed in the static initializer block?

-KG: The only time I have encountered any one of these pieces I have been surprised by it. the second thing is that as a practical matter like half of the code that I would write that would use this use as `await` and not being able to would be frustrating but those are like there is a theoretical concern in addition to be part of the concern.
+KG: Every time I have encountered one of these restrictions I have been surprised by it. The second thing is that, as a practical matter, like half of the code I would write that would use this would use `await`, and not being able to would be frustrating. But there is a theoretical concern in addition to the practical concern.

-RBN: I was one of the dress this and it's another thing that I mentioned the issue it is still possible to craft a static block that would allow allow you to perform sync sync initialization. So possible could just not have static blocks at all and it's like abuse computed property names. I don't yes, and that's why did not think that argument. Yeah.
+RBN: I was the one who addressed this, and it's another thing that I mentioned in the issue: it is still possible to craft a static block that would allow you to perform async initialization. You could do that even without static blocks at all by, like, abusing computed property names. I don't— yes, and that's why I did not make that argument. Yeah.

-RBN: Well, I'm not a huge fan of that and I've been investigating typescript admit to it because somebody said oh this doesn't work and they want this to work and I'm so that's why I brought up that specific case of using computed property names, but it is possible to for example, write an async function that exists inside the static block gets evaluated and then you just the promise for it out. I to the static block, but again, I haven't specifically said that a weight is forbidden for ever and if I if that were the case then we wouldn't be reserving `await` as an identifier. It's one of those things where if we want to allow await here, we'd have to figure out can we allow it inside of field initializer? And then how does that apply to the class? Do we need to have a syntactic marker on the class that indicates the class has a sync code that could be running. There are a lot of things that are there that I would rather not take on for a like minimum viable version of this feature that we can investigate in the future and that's why saying allowing `await` to carry over doesn't preclude this being for example, a evaluated like an IIFE. It just could be theoretically evaluated like an async IIFE and then we we await it is to me. in an async function, but then there's a lot of complexity that's and it's there that we would have to figure out both for this and for class fields and by just essentially reserving the syntax so that it can't be used in a static block for now gives us the ability to investigate that without delaying the value of the feature right now. Yes. I don't know if there's any more going to add to this one to move on. I think I think let's move on move on ons great.
+RBN: Well, I'm not a huge fan of that, and I've been investigating adding this to TypeScript because somebody said "oh, this doesn't work", and they want it to work, and so that's why I brought up that specific case of using computed property names. But it is possible to, for example, write an async function that exists inside the static block and gets evaluated, and then you just pass the promise for it out of the static block. But again, I haven't specifically said that `await` is forbidden forever; if that were the case, then we wouldn't be reserving `await` as an identifier. It's one of those things where, if we want to allow `await` here, we'd have to figure out: can we allow it inside of a field initializer? And then how does that apply to the class? Do we need to have a syntactic marker on the class that indicates the class has async code that could be running? There are a lot of things there that I would rather not take on for a, like, minimum viable version of this feature, and that we can investigate in the future. That's why I'm saying that allowing `await` to carry over doesn't preclude this being, for example, evaluated like an IIFE; it could just theoretically be evaluated like an async IIFE that we then await, as in an async function. But then there's a lot of complexity there that we would have to figure out, both for this and for class fields, and just reserving the syntax so that it can't be used in a static block for now gives us the ability to investigate that without delaying the value of the feature right now. Yes. I don't know if there's any more to add to this one; let's move on.

-DE: if we have two minutes left, and we should let Ron go through the rest of his presentation as much as I'd like to talk also.
+DE: We have two minutes left, and we should let Ron go through the rest of his presentation, as much as I'd like to talk also.

-RBN: There is one thing I did want to get to for this. There was was an open question that has since been resolved about the behavior of `new.target` in static blocks, as it seemed to be underspecified in the static syntax proposal when I wrote this slide. The behavior has now been clarified, that `new.target` should return `undefined`. Per issue #25, I plan to follow the same semantics.
+RBN: There is one thing I did want to get to for this. There was an open question, since resolved, about the behavior of `new.target` in static blocks, as it seemed to be underspecified in the static syntax proposal when I wrote this slide. The behavior has now been clarified: `new.target` should return `undefined`. Per issue #25, I plan to follow the same semantics.

-RBN: The next question I have is whether or not we should allow multiple interleaved static initialization blocks. C# does not allow this as it acts more like a Constructor for the type. There's only one static block, and it emulates regular constructor evaluation: all fields are initialized and then the Constructor body evaluates just like in JavaScript.
So that was the original basis for the design. However, Java’s Static Initializers (Java’s version of static blocks) can be interleaved and are evaluated in document order. This is one thing that I'd like to have some input from the committee as to the direction we should take. I could possibly temporarily straddle the fence and say that you can only have one and it runs in document order so that we open up the ability to do this in the future - if you want preserve C#-like semantics then you would need to place your static block after all static fields. Alternatively, we choose now to follow the Java approach and allow you to have multiple static blocks. There's no reason not to allow them, so it seems like it might be valuable. Does anyone have anything that they care to say if not, so real quick.
+RBN: The next question I have is whether or not we should allow multiple interleaved static initialization blocks. C# does not allow this, as it acts more like a constructor for the type: there's only one static block, and it emulates regular constructor evaluation—all fields are initialized and then the constructor body evaluates, just like in JavaScript. So that was the original basis for the design. However, Java's static initializers (Java's version of static blocks) can be interleaved and are evaluated in document order. This is one thing where I'd like some input from the committee as to the direction we should take. I could possibly temporarily straddle the fence and say that you can only have one and it runs in document order, so that we open up the ability to do this in the future—if you want to preserve C#-like semantics, then you would need to place your static block after all static fields. Alternatively, we choose now to follow the Java approach and allow multiple static blocks. There's no reason not to allow them, so it seems like it might be valuable. Does anyone have anything that they care to say? If not—real quick.
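The Java-style "document order" interleaving described above can be sketched concretely. This is an illustration of the semantics under discussion, not settled behavior at the time; it assumes an engine that supports static fields and multiple `static {}` blocks, and the class and field names are made up:

```javascript
// Illustrative only: multiple static blocks interleaved with static fields,
// all evaluated top-to-bottom in document order (the Java-style option).
const order = [];

class C {
  static a = order.push("field a"); // runs first
  static {
    order.push("block 1");          // runs second; `a` is already initialized
  }
  static b = order.push("field b"); // runs third
  static {
    order.push("block 2");          // runs last
  }
}

// order is now ["field a", "block 1", "field b", "block 2"]
```

Under the C#-style alternative, there would be a single block that runs after all static fields; per the fence-straddling option above, you would emulate that by placing your one static block after every static field.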
-AKI: We're at the original time box. We also don't have any presentations that could fit in the remaining time. Do we wish to extend the time box on this topic? +AKI: We're at the original time box. We also don't have any presentations that could fit in the remaining time. Do we wish to extend the time box on this topic? RBN: I would like to if that's possible. -???: I'm good with that. +???: I'm good with that. AKI: All right. Okay. So Daniel, did you want to go back to your topic? @@ -760,7 +770,6 @@ DE: I think I think this behavior makes sense. `new.target` could be an error, b DE: Overall, I think this proposal is great. The only change I would make is this permitting multiple static blocks, interleaving them with static fields. I still respect the motivation that Ron had for limiting to one block and writing it. So in my opinion for stage 3 this would be ready for stage 3 if the committee prefers the single later static block semantics - DE: Editorially, I would prefer that a lot of duplication in the specification be removed. Maybe a third of the specification is duplicating the syntax for blocks. And another third is duplicating the semantics for methods. I think we should simply reference those and then the specification would be like a third as long. RBN: I also intend to change the layering to be based on the static syntax proposal. At the time the rendered spec text wasn't up to date with the class fields proposals, so I wasn't able to leverage that. Otherwise, I would most likely be leveraging some of the different designs that are already in that proposal. This is something that I plan to change, and I agree. @@ -807,7 +816,7 @@ WH: Here's my dilemma. I fully support this proposal, but I think it would need DE: I think it's important that this have editorial review and I think what we can do is I support this conditionally advancing to stage 3 with another round of editorial review before it really reaches stage 3. 
A few of us can sign up as the editorial reviewers; I'd be happy to sign up too. Can we do this offline? Because I'm not sure there's more to discuss in committee. How would you feel about that?

-SYG: Can we go to Kevin's topic? I think the semantics for the interleaving things are not that complicated. So let's at least explicitly enumerate them and agree to that right now and then do the traditional thing where we get on the editorial review.
+SYG: Can we go to Kevin's topic? I think the semantics for the interleaving are not that complicated, so let's at least explicitly enumerate them and agree to them right now, and then do the traditional thing where we do the editorial review.

KG: All right. If we're doing multiple initializers—multiple static blocks—there's a question of where the boundary of the `var` scope is. Is it that each static block gets its own `var` scope, and things in that scope are not visible to, like, initializers and computed property names and so on?

@@ -821,12 +830,14 @@ RBN: yes, we would essentially use the same mechanism the static field static fi

SYG: Okay, then I support conditional advancement, with the request that after #26 is complete, please send alerts to the engine folks. Usually stage 3 means we're going to start looking at implementing it, but the final bits are not nailed down.

-RBN: Yeah, and I'll hold off on any updates to the readme on stage advancement until after these changes have been merged. All right. So do we have conditional stage 3?
+RBN: Yeah, and I'll hold off on any updates to the readme on stage advancement until after these changes have been merged. All right. So do we have conditional stage 3?
[yes] + ### Conclusion/Resolution Conclusion: Conditional advancement to Stage 3 based on the following conditions: + - Issue #25: `new.target` should return `undefined` (aligns with methods/static fields) - Issue #26: Support multiple interleaved `static {}` evaluated in document order alongside field initializers. Each block has its own `var` scope, as each block is essentially an immediately-invoked Method. - Editorial review of above changes by DE and TC39 Editors diff --git a/meetings/2021-01/jan-26.md b/meetings/2021-01/jan-26.md index 1b22f4af..86e134d5 100644 --- a/meetings/2021-01/jan-26.md +++ b/meetings/2021-01/jan-26.md @@ -1,7 +1,8 @@ # 26 January, 2021 Meeting Notes + --- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Ross Kirsling | RKG | Sony | @@ -43,9 +44,10 @@ --- ## `Intl.DateTimeFormat` for stage 4 + Presenter: Felipe Balbontín (FBN) -- [proposal]() +- proposal - [slides](https://docs.google.com/presentation/d/e/2PACX-1vQz-E0XlFl_FPYnQqIpJ02ZeMFgXo_xI-s_enRBNj2zLsQe-OR_BVJb15FFcqR-YP_jVq_GfKl9vpGO/pub?slide=id.p) To start I would like to take some time to remind everyone about the motivation for this proposal. So it is very common for websites to need to display date ranges or date intervals in order to to show the span of event such as the duration of a tree or like a time period in this this graph, when permitting or when displaying - [inaudible]. So as I was saying, how can we format this in a really concise way? So a naive approach would be okay. Let's instantiate a written format specific Locale and setup options and then we can just form at the two dates independently, and then we can format the strings together using some kind of delimiter like in this case a `-`. This would work, but it has a couple of problems. the first problem depends on the kind of fields that you're trying to display. 
By "calendar fields" I mean things like the month name, the day of the year, or even hours, minutes, and seconds. So depending on the kind of fields that you're trying to display, and the dates that compose the interval, it may happen that you end up repeating some of these calendar fields unnecessarily, because they're not adding any new information—like, for instance, in this example: given that the two dates that you're trying to format are in the same month, you're actually repeating the month name. The second problem is that the way in which you format or represent intervals is locale dependent. So in English it's perfectly fine to use that `-`, but there are other locales where we would use a different string, a different symbol, or something like that. And together with this, the order in which the dates are formatted is also locale dependent: there may be some locales where the second date should be displayed first, or where it's customary to display the second date first.

FBN: Because of these issues, we put together the formatRange proposal. It's important to note that this proposal is based on (?) on the intervals. So to give you an example of how we would use this: in the case of formatRange, you instantiate DateTimeFormat, providing the locale and options, and then you just call formatRange with the two dates, and it will output the string, as I mentioned before, with the most concise string representation for this date interval or date range. Like in this case: because the two dates are in the same month, we're displaying the month name and the year just once in the formatted string. In the case of formatRangeToParts, it will output a list of items; each item, as mentioned before, represents a specific part of the formatted interval. Each of these items will contain three fields.
The first two are the same ones that are currently returned by the regular formatToParts: the type basically indicates what this particular item is, and the value is a substring of the formatted interval. And then we added a third field, which is source; source basically indicates which of the two dates this particular item is related to or comes from. So as you can see here, for instance, the month name and the year are shared between the two dates, while the first day displayed here comes from the first date and the second day comes from the second date. The reason for adding a formatToParts method here is the same as for the other formatToParts methods, and basically it is to allow for a more flexible display in the UI.

-FBN: so second, so now that I went over the motivation for the proposal and some examples I would like to present the biggest updates that have happened since we got this proposal to stage 3. So one of the biggest user facing changes, normative changes, is that previously when either start date or end end date were undefined we were throwing a RangeError. now after some feedback we receive as well as a discussion at the for to working group the conclusion was that it was probably during this case to throw a TypeError because TypeError in this case would better represent the kind of error that we want to convey when throwing an error in this case. Also, it's important to note that this is more like - here we're making this more consistent with the other parts of the 402 spec where we're trying a type error in similar cases when an argument or on our options are undefined. The other two big updates that have been done to the spec text: now we're supporting two additional options that were added to datetimeformat that were other recently advanced to Stage 3. for the first one is fractional second digits this One is a simple basically this option allows whoever is using different permit to get it to display the fractional seconds when formatting and eight and now we're doing the same when formatting day intervals and similarly. We are also supporting to format to intervals when when the user is setting date style or or time style.
+FBN: So, second: now that I went over the motivation for the proposal and some examples, I would like to present the biggest updates that have happened since we got this proposal to stage 3. One of the biggest user-facing changes—normative changes—is that previously, when either start date or end date were undefined, we were throwing a RangeError. Now, after some feedback we received, as well as a discussion at the ECMA-402 working group, the conclusion was that it was probably better in this case to throw a TypeError, because a TypeError would better represent the kind of error that we want to convey here. Also, it's important to note that this makes us more consistent with the other parts of the 402 spec, where we throw a TypeError in similar cases when an argument or options are undefined. The other two big updates that have been done to the spec text: we now support two additional options that were added to DateTimeFormat and that were recently advanced to stage 3. The first one is fractionalSecondDigits; this one is simple—basically, this option allows whoever is using DateTimeFormat to display the fractional seconds when formatting a date, and now we're doing the same when formatting date intervals. Similarly, we also support formatting intervals when the user sets dateStyle or timeStyle.

FBN: So what's the current status of this proposal? This proposal was implemented in JSC and SM, and was shipped in Chrome.
Also, there's test262 tests and we got editor signoffs.

@@ -64,21 +66,19 @@ FBN: I would like to ask the committee for approval to advance to stage 4.

RPR: Okay. So are there any objections to stage 4? [silence] No one is on the queue. So oh, yeah. Congratulations. You have stage 4.
-
## Conclusion

Stage 4
-
-# ResizableArrayBuffer and GrowableSharedArrayBuffer updates
+## ResizableArrayBuffer and GrowableSharedArrayBuffer updates

SYG: I am not asking for advancement. This is just a few updates to let folks know of some changes I'm making to the proposal, and for discussion around those changes.

-SYG: So the first change: currently the proposal has this behavior that I was calling "once out of bounds, stays out of bounds". This was a kind of weird mental model, and also involves some extra implementation complexity, so I'm planning to change that on recommendation from Marja, a fellow V8 team member. And then the second change I wanted to discuss with folks who have opinions here is maybe allowing some implementations to do implementation-defined rounding of the max size, and possibly also the size passed to the resize method. The motivation for the second change is that one of the original motivating use cases for this proposal is WebAssembly, which only allows resizes of its linear memory to be in multiples of its page size, which I think is 4K. So you can imagine, for a large buffer that you might want to resize, especially with this in-place growth strategy where you reserve some pages but don't actually commit them in the OS, that you want to round up to a full page size. And the question is: should we build the API to make that kind of rounding observable? I'll go into that a little bit. So first up, the out-of-bounds behavior. Currently it is something like this: suppose we have a resizable array buffer that initially is 128 bytes and has a max size of 256 bytes.
I have a view on it of 32 uint32s starting at an offset of 0, so it's a fixed-window view from byte 0 to byte 32 × 4. And initially you can view everything, because everything is in bounds. It's possible that this u32 view becomes partially out of bounds, for example if I resize the buffer to be only 64 bytes. The view technically can look at bytes 0 to 64, but since it's a 32-element u32 array, part of it is out of bounds, and the current behavior is that when that happens it behaves as if its buffer were detached, meaning any access to the elements returns undefined, the length reports as zero, and importantly it can't ever go back in bounds. If it were ever observed as being out of bounds, like this one has, even if the underlying buffer were resized to a size where the entire window view would be in bounds again, it doesn't go back into bounds: it still reports a length of 0 and still reports all elements as undefined. The weirdness here is that I thought this would make things easier to reason about; you have more guarantees, like if you ever saw something out of bounds you don't have to worry about it ever coming back into bounds. But the weirdness is that, one, this only happens if you actually observe the view going out of bounds. Suppose I deleted these two lines that I have highlighted. That is, I resize the underlying buffer, I don't actually touch u32, and then I resize the underlying buffer again so that it goes back into bounds. In this case the current/old behavior is that u32 never goes out of bounds, because I never observed it going out of bounds. So it kind of stays working forever. And this is a problem, upon thinking about it some more and getting feedback from other V8 folks. Namely, imagine:
You have a debug and release build of your app where you're poking at the buffer in the debug build, and because you poked at it you might observe it out of bounds, while in the release build you no longer observe it out of bounds. So this possible divergence in behavior between debug and release is hard to think about, and it's just weird that the view only gets into this detached state, that you can't make it go back in bounds, if you look at it. Secondly, this current behavior makes things more complicated for implementations as well, because each instance now has to track "have I ever been observed to be out of bounds?", and if so treat all accesses as undefined, and so forth. So that extra bit is not really necessary if we just change the behavior to be what you would expect out-of-bounds checks to be, which is: you check on every access. So the new behavior is basically the same, except for the last three lines: if I make the underlying buffer back in bounds, because I'm just checking bounds on every access, it is possible to make an array buffer or typed array view go back into bounds. What has not changed in the new behavior is that if a typed array is partially out of bounds you can't access any part of it. But if you resize the underlying buffer such that the entire typed array is back in bounds, then you can access it again. Of course the newly resized memory will be 0, just like a regular resize. And the mental model for this is basically: on every access, on a length access, on an element access, you check the length of the underlying buffer and whether the entirety of the typed array is in bounds of the underlying array buffer. If so, you can access it; if not, everything is undefined.
+SYG: So the first change: currently the proposal has this behavior that I was calling "once out of bounds, stays out of bounds".
This was a kind of weird mental model, and also involves some extra implementation complexity, so I'm planning to change that on recommendation from Marja, a fellow V8 team member. And then the second change I wanted to discuss with folks who have opinions here is maybe allowing some implementations to do implementation-defined rounding of the max size, and possibly also the size passed to the resize method. The motivation for the second change is that one of the original motivating use cases for this proposal is WebAssembly, which only allows resizes of its linear memory to be in multiples of its page size, which I think is 4K. So you can imagine, for a large buffer that you might want to resize, especially with this in-place growth strategy where you reserve some pages but don't actually commit them in the OS, that you want to round up to a full page size. And the question is: should we build the API to make that kind of rounding observable? I'll go into that a little bit. So first up, the out-of-bounds behavior. Currently it is something like this: suppose we have a resizable array buffer that initially is 128 bytes and has a max size of 256 bytes. I have a view on it of 32 uint32s starting at an offset of 0, so it's a fixed-window view from byte 0 to byte 32 × 4. And initially you can view everything, because everything is in bounds. It's possible that this u32 view becomes partially out of bounds, for example if I resize the buffer to be only 64 bytes. The view technically can look at bytes 0 to 64, but since it's a 32-element u32 array, part of it is out of bounds, and the current behavior is that when that happens it behaves as if its buffer were detached, meaning any access to the elements returns undefined, the length reports as zero, and importantly it can't ever go back in bounds.
If it were ever observed as being out of bounds, like this one has, even if the underlying buffer were resized to a size where the entire window view would be in bounds again, it doesn't go back into bounds: it still reports a length of 0 and still reports all elements as undefined. The weirdness here is that I thought this would make things easier to reason about; you have more guarantees, like if you ever saw something out of bounds you don't have to worry about it ever coming back into bounds. But the weirdness is that, one, this only happens if you actually observe the view going out of bounds. Suppose I deleted these two lines that I have highlighted. That is, I resize the underlying buffer, I don't actually touch u32, and then I resize the underlying buffer again so that it goes back into bounds. In this case the current/old behavior is that u32 never goes out of bounds, because I never observed it going out of bounds. So it kind of stays working forever. And this is a problem, upon thinking about it some more and getting feedback from other V8 folks. Namely, imagine: you have a debug and release build of your app where you're poking at the buffer in the debug build, and because you poked at it you might observe it out of bounds, while in the release build you no longer observe it out of bounds. So this possible divergence in behavior between debug and release is hard to think about, and it's just weird that the view only gets into this detached state, that you can't make it go back in bounds, if you look at it. Secondly, this current behavior makes things more complicated for implementations as well, because each instance now has to track "have I ever been observed to be out of bounds?", and if so treat all accesses as undefined, and so forth.
So that extra bit is not really necessary if we just change the behavior to be what you would expect out-of-bounds checks to be, which is: you check on every access. So the new behavior is basically the same, except for the last three lines: if I make the underlying buffer back in bounds, because I'm just checking bounds on every access, it is possible to make an array buffer or typed array view go back into bounds. What has not changed in the new behavior is that if a typed array is partially out of bounds you can't access any part of it. But if you resize the underlying buffer such that the entire typed array is back in bounds, then you can access it again. Of course the newly resized memory will be 0, just like a regular resize. And the mental model for this is basically: on every access, on a length access, on an element access, you check the length of the underlying buffer and whether the entirety of the typed array is in bounds of the underlying array buffer. If so, you can access it; if not, everything is undefined.

SYG: There is an alternative that's possible, where we allow the typed array to be partially accessible if it is partially out of bounds. I reject this alternative; I don't think it's a good idea, because the only way that typed arrays can go out of bounds to begin with is if you give them a fixed length. Remember, as part of this proposal I am also proposing these auto-length-tracking typed arrays. Auto-length-tracking typed arrays can go out of bounds if they started at a non-zero offset and the underlying buffer is resized to be smaller than the offset. But other than that, the usual way a typed array can go out of bounds, the way it can be partially out of bounds, is that it was given a fixed length. And if we allow a partially out-of-bounds typed array to be partially viewable, that seems really weird. Do you report the length as only the part of the array that is in bounds?
If so, that seems to break the intention and the expectation of the API. If I create a typed array with an explicit length, and then I view it later, because it's partially out of bounds I get a typed array of a different length? That seems not good. So I would rather not have this alternative, and keep the behavior of: if any part of it is out of bounds, then the entire typed array is inaccessible.

-SYG: So that is the first change I am proposing. Before I move on to the second change, because it's kind of unrelated to this one, is there anything in the queue for questions? [no]
+SYG: So that is the first change I am proposing. Before I move on to the second change, because it's kind of unrelated to this one, is there anything in the queue for questions? [no]

SYG: Then I will continue with the proposed second change. This is more of a discussion; I haven't really made up my mind on the extent of this change. The basic idea is: when you are making a new array buffer, a new resizable array buffer, should we allow the implementation to do some implementation-defined rounding up of the maximum size? We'll start with the maximum size; the same question also extends to the length, but the max size is, I think, less controversial, and it is very reasonable to round up, because if you are doing the growth in place, the way you would implement that is you would call something like mmap, or your OS's equivalent of mmap, to get some virtual memory pages, but not actually have them backed by physical memory until you need it. And the pages you get will be a size that is a multiple of your operating system's page size. So in this example, even if I request four thousand bytes, the implementation will probably round that up to some multiple of its page size, which is often 4K. So just for the sake of the example it's rounded up to 4K.
@@ -118,13 +118,13 @@ SYG: Yeah, I share that concern. It could be people depending on implementation,

MM: Yeah, and to make just a completely bizarre analogy: when I grew up as a programmer, C had a sizeof operation that was used a lot to write sort-of-portable code, where the code itself was responsible for adapting to machine differences, because the word-size differences between different machines were such an important thing to optimize for. And these days we don't do that. We just ignore the performance differences between different word sizes on different machines and force everyone into a platform-neutral common behavior.

-SYG: Yeah, I think that is a compelling argument to me, in that, you know, a big value of the web platform is the consistency across platforms. And the pros here are mainly around implementation complexity: we have fewer size fields to track. And that is not just "oh, we care about an extra word or two of memory per instance"; I think the main pro of the implementation simplicity is a higher chance of getting this right and fewer chances of security bugs, given how popular an avenue of attack array buffers are. The more complex these new kinds of buffer implementations are, the more likely there will be bugs.
+SYG: Yeah, I think that is a compelling argument to me, in that, you know, a big value of the web platform is the consistency across platforms. And the pros here are mainly around implementation complexity: we have fewer size fields to track. And that is not just "oh, we care about an extra word or two of memory per instance"; I think the main pro of the implementation simplicity is a higher chance of getting this right and fewer chances of security bugs, given how popular an avenue of attack array buffers are. The more complex these new kinds of buffer implementations are, the more likely there will be bugs.
MM: You're certainly speaking to my heart there.

SYG: If you have a security question, let's hold that discussion off, because the second half of this presentation will be the security reviews that we've undertaken in Chrome, and I'll present some feedback there.

-SYG: So the second half of this is: we requested some security reviews, not of the implementation but of the design and the implementation strategies that we had in mind, from two teams within Google. One is Google Chrome platform security, the people who gatekeep the security reviews of the actual features that we merge into Chrome, and the second is Project Zero, who of course have great experience in actually developing exploits against web stuff and browser stuff. And I'm happy to report that they were both satisfied with the security risk mitigations that we laid out. To wit, the three known risks that we laid out to them were: one, the typed array length can change, whereas it couldn't really before. The mitigation is that, in reality, the length can already change today, except it can only change to one value, zero, due to detach, and we have confidence that over the years we've put the detached checks in all the right places. All these new kinds of array buffers and auto-length-tracking typed arrays would add is to generalize the logic of the detached checks; we already have a set of points that we know we need to audit to make sure they can account for changes beyond just changing to zero.
The second known risk is that the array buffer data pointer might change. One possible class of bugs is that a JIT might incorrectly cache a data pointer as if it were constant, and if your implementation strategy actually moves the data pointer, you might now be pointing to freed memory, and that leads to, you know, arbitrary exploits. The mitigation here is explicit in the design: for implementations where the in-place growth strategy makes sense, the design explicitly allows it, by requiring a max length, so that the data pointer does not have to change. Then finally there's a risk that, because this kind of overloads the typed array constructor, implementing the new features might cause vulnerabilities in the existing typed arrays, which are much more widely used and much more security- and performance-sensitive. There's really no magical way to mitigate this other than careful auditing and trying to reuse the battle-hardened paths that engines already have for typed arrays. This is probably the risk with the biggest chance of failure due to human error, but I think it is a risk worth taking for the expressivity that we gain.
+SYG: So the second half of this is: we requested some security reviews, not of the implementation but of the design and the implementation strategies that we had in mind, from two teams within Google. One is Google Chrome platform security, the people who gatekeep the security reviews of the actual features that we merge into Chrome, and the second is Project Zero, who of course have great experience in actually developing exploits against web stuff and browser stuff. And I'm happy to report that they were both satisfied with the security risk mitigations that we laid out.
To wit, the three known risks that we laid out to them were: one, the typed array length can change, whereas it couldn't really before. The mitigation is that, in reality, the length can already change today, except it can only change to one value, zero, due to detach, and we have confidence that over the years we've put the detached checks in all the right places. All these new kinds of array buffers and auto-length-tracking typed arrays would add is to generalize the logic of the detached checks; we already have a set of points that we know we need to audit to make sure they can account for changes beyond just changing to zero. The second known risk is that the array buffer data pointer might change. One possible class of bugs is that a JIT might incorrectly cache a data pointer as if it were constant, and if your implementation strategy actually moves the data pointer, you might now be pointing to freed memory, and that leads to, you know, arbitrary exploits. The mitigation here is explicit in the design: for implementations where the in-place growth strategy makes sense, the design explicitly allows it, by requiring a max length, so that the data pointer does not have to change. Then finally there's a risk that, because this kind of overloads the typed array constructor, implementing the new features might cause vulnerabilities in the existing typed arrays, which are much more widely used and much more security- and performance-sensitive. There's really no magical way to mitigate this other than careful auditing and trying to reuse the battle-hardened paths that engines already have for typed arrays.
This is probably the risk with the biggest chance of failure due to human error, but I think it is a risk worth taking for the expressivity that we gain.

SYG: Before I go back to the queue: we are planning for stage 3 in March. The reviewers were Moddable, Mozilla, and Apple. These are of course browser vendors who also have security concerns, plus Moddable, who have a very different environment and would use a different implementation strategy; I think they said they always want to reallocate and compact. It would be good to have them review and make sure that their use cases are met as well.

@@ -173,12 +173,12 @@ SYG All right, that's the queue thank you very much, and I'm not sure I've been

Was not seeking advancement

## Dynamic code brand checks for stage 2
+
Presenter: Krzysztof Kotowicz (KOT)

- [proposal](https://github.com/tc39/proposal-dynamic-code-brand-checks)
- [slides](https://docs.google.com/presentation/d/17X-v6uCIYZaG7RXUAbfPgYfzVOdKmjrQ1yW6MqlY1hA/edit)
-
KOT: Eval is evil. ECMAScript already has hooks to disable it; the problem is that this did not result in eval eradication for a large class of JavaScript programs. What happens instead? The larger the program gets, the more dependencies it has, and the probability that a single eval call exists in them rises; in the end people just continue to run applications with eval enabled. In the case of CSP and web applications, an unsafe-eval keyword is used. There has been research done over the years on how large the problem is. It is significant: vastly over half of the web applications that do try to have a Content Security Policy struggle with dependencies and are blocked on their dependencies to lock down eval. In practice we end up with a security control that is too restrictive, too high of a barrier to actually use to improve the security posture of a given program. There are practical examples of eval being used, usually through dependencies, that are not very easy to replace.
One of those examples is a polyfill for globalThis. Another one is checking whether a given syntax is supported, in this particular case async functions. Performance penalties have been mentioned as a blocker for removing eval, and some third-party dependencies do magic things that are simply harder to do without eval. Sourcemaps are generated with eval(), and development environments very commonly use eval all over the place, so enforcing the eval restriction on prod but not on dev may introduce production breakages. On top of that, the scariest example that I have seen in my work on web application security is the angularjs example. The angularjs framework wanted to work in a mode that blocks eval, in Chrome extensions. The code pretty much introduced a meta-evaluator of arbitrary code: you could reach the window object and then execute from there. This particular control, blocking eval globally, pushed angularjs into introducing this workaround: "I still want some sort of arbitrary execution (expressions), but without using eval; how can I do it securely?". This is what angular came up with, and in the end it introduces a whole class of problems that we call script gadgets.

KOT: Can we do better? I think we can. The root of the problem is that libraries have large install bases, they have code-size constraints, and they need to support interpreters, like in the angularjs case, so those dependencies have little incentive to move off eval. Moving off eval is a cost for third-party code most of the time, whereas the benefits accrue to applications, to the integrator. And the status quo after years seems to be that eval with no guardrails is the standard. Applications cannot effectively move to a more locked-down environment because they have one, two or more instances of eval, and the problem is growing, because it's not easy to stop introducing eval if eval is allowed.
@@ -189,7 +189,7 @@ KOT: This is a good moment for making a short segue to Trusted Types on the web

KOT: What happens with that approach is that we can assure that those DOM XSS sinks, those risky functions, can only ever be called with values that have passed through one of the policy rule functions. The policies are also guarded, and they are a burden to create: it's an additional hurdle to create a policy and have it allowlisted in your CSP, for example, so in practice we see that in web applications that have enabled Trusted Types, developers instead move to secure alternatives. For example, very commonly a style element's innerHTML property was assigned some style text; instead, now textContent is used. What happens is that even in a complex web application there's only a small number of policies at play, and those policies form the security-relevant surface. If they are locked down, for example in a closure or in a module, that very much bounds the security reviews. To reason about the web application one would otherwise need to look at the entirety of its code, which could potentially, you know, call innerHTML with a user-controlled string. [With Trusted Types] I only need to look at the policies and make sure that the risky policies don't leak into application code, or that the policies themselves make types secure by construction, such that the rules to convert to HTML work in a way that I deem safe; and the default policy on top of that enables a gradual migration. What is important here is that with Trusted Types, after enforcing the rules, there are no regressions. Even if your web application was using a couple of eval calls that have been transformed to use a trusted type, you know for sure that no new dependencies that use eval will be introduced, so it stops being a problem forever. Right? The problem becomes bounded and approachable, and you can reduce the "evalness" of the application as you go.

-KOT: Here's where the proposal comes into play.
We would like this approach to also be used for eval and the family of functions that compile code. The way we propose to introduce it is by having a host-defined slot called [[HostDefinedCodeLike]] on pretty much arbitrary objects. What is very important is that the value of this slot is set by the host, and the host guarantees, or promises, it to be immutable: once it's been set on a given object instance, it doesn't change. Trusted Types satisfy that condition. And then, of course, there's a code-like check that checks for the presence of the value in the slot. Once we have that primitive, we can allow code-like objects in eval, such that objects blessed by the host could simply be passed either to eval or to new Function. That requires some hooking, and the real hooking is mostly done in this one host callout: I propose to replace HostEnsureCanCompileStrings with HostValidateDynamicCode. What is different here is that, on top of passing the realms to the host, I also propose to pass the string extracted by ECMAScript from the object back to the host for validation, and to pass a flag for whether the input to eval or to the Function constructor was composed only of code-like objects. ECMAScript actually does the stringification of the code-like objects and informs the host whether something was originally a code-like object. Additionally, there's some context for the host to make a more precise check, for example to distinguish the Function constructor from an eval call. What is also important is that the hook integrates with the default policy behavior. This host callback returns a string, and that string is eventually what gets executed, whereas previously the host hook could only reject the value. This one can reject the value, but the value can also be turned into a modified one, if the host decides so. This pretty much hooks into the default policy of Trusted Types.
+KOT: Here's where the proposal comes into play.
We would like this approach to also be used for eval and the family of functions that compile code. The way we propose to introduce it is by having a host-defined slot called [[HostDefinedCodeLike]] on pretty much arbitrary objects. What is very important is that the value of this slot is set by the host, and the host guarantees, or promises, it to be immutable: once it's been set on a given object instance, it doesn't change. Trusted Types satisfy that condition. And then, of course, there's a code-like check that checks for the presence of the value in the slot. Once we have that primitive, we can allow code-like objects in eval, such that objects blessed by the host could simply be passed either to eval or to new Function. That requires some hooking, and the real hooking is mostly done in this one host callout: I propose to replace HostEnsureCanCompileStrings with HostValidateDynamicCode. What is different here is that, on top of passing the realms to the host, I also propose to pass the string extracted by ECMAScript from the object back to the host for validation, and to pass a flag for whether the input to eval or to the Function constructor was composed only of code-like objects. ECMAScript actually does the stringification of the code-like objects and informs the host whether something was originally a code-like object. Additionally, there's some context for the host to make a more precise check, for example to distinguish the Function constructor from an eval call. What is also important is that the hook integrates with the default policy behavior. This host callback returns a string, and that string is eventually what gets executed, whereas previously the host hook could only reject the value. This one can reject the value, but the value can also be turned into a modified one, if the host decides so. This pretty much hooks into the default policy of Trusted Types.
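The slot-plus-hook mechanism can be approximated in plain JavaScript. This is only an illustrative simulation, not the real host hook: a WeakSet stands in for the [[HostDefinedCodeLike]] slot, a wrapper function plays the role of HostValidateDynamicCode, and `makeTrustedScript` / `guardedEval` are hypothetical names invented for the sketch:

```javascript
// Simulated [[HostDefinedCodeLike]] slot: membership in this WeakSet marks an
// object as "blessed" by the host, and nothing can un-bless it later.
const codeLike = new WeakSet();

// Stand-in for a Trusted Types policy's createScript: wraps a source string in
// an object whose code-like status is set once at creation.
function makeTrustedScript(source) {
  const obj = { toString: () => source };
  codeLike.add(obj);
  return obj;
}

// Stand-in for eval guarded by the HostValidateDynamicCode hook: the extracted
// string is only compiled if the input carried the code-like brand.
function guardedEval(input) {
  if (!codeLike.has(Object(input))) {
    throw new TypeError("eval blocked: input is not code-like");
  }
  return eval(String(input));
}

console.log(guardedEval(makeTrustedScript("2 + 2"))); // 4
// guardedEval("2 + 2") would throw a TypeError: plain strings are not blessed.
```

In the real design, the brand lives in an internal slot set by the host rather than a library-level WeakSet, and the hook can also rewrite the string (the default-policy behavior mentioned above) instead of only accepting or rejecting it.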
KOT: How does it work in the specific algorithms? In eval, we need to change eval's early return: eval is an identity function for non-strings, so now this needs to be aware of code-like objects. New Function is a little bit more complex. I propose to modify the algorithm to stringify all arguments from code-like objects and compute a flag that stores whether all of them were code-like. Currently the host check in CreateDynamicFunction is done before the function body is constructed, but the browser implementations don't follow this, as CSP requires putting the assembled function body in the security policy violation reports, so the hook happens later. This presented code should not throw; the Function constructor should reject this value before the object is stringified. This is, for example, how Safari behaves, whereas the other browsers, at least Mozilla and Chrome, behave in a way that satisfies CSP.

@@ -213,7 +213,7 @@ MM: Perhaps narrowly. All together I'm very reluctant to see this thing go forwa

KOT: Which particular thing would you consider messy?

-MM: I think that the extra slot you're proposing to add to all objects, and then having an is-code-like check of it, just has all of the same problems as is-template-object. In order to be meaningful, it needs to be eval-relative, and it needs to be eval-relative for exactly the reason that we've been arguing about with is-template-object, and which the interpreter example helps highlight, which is that foreign evals should be treated like foreign evals and meta-interpreters. You shouldn't allow crosstalk between foreign evals that gives each one power over the other, because then any eval is completely threatening to all of them.
+MM: I think that the extra slot you're proposing to add to all objects, and then having an is-code-like check of it, just has all of the same problems as is-template-object.
In order to be meaningful, it needs to be eval-relative, and it needs to be eval-relative for exactly the reason we've been arguing about with is-template-object, which the interpreter example helps highlight: foreign evals should be treated like foreign evals in a meta-interpreter. You shouldn't allow crosstalk between foreign evals that gives each one power over the other, because then any of them is completely threatening to all of them.

KOT: But this is not how the web platform operates. Even the content security policy propagation rules say that the moment you create a blank iframe it inherits the CSP rules, right? So that means you can eval across realms, pretty much.

@@ -233,21 +233,20 @@ MM: I am reluctant to let this thing go forward with Stage 2 without it having a

Not advancing

-
## Realms update

-Presenter: Shu Yu-Go (SYG)
-- [proposal]()
-- [slides]()
+Presenter: Shu Yu-Go (SYG)
+- proposal
+- slides

SYG: Up front I'd like to give an update on the Chrome position on realms and how we're working with the champions like Caridy and Leo and the Salesforce folks. We've had some progress there, but since I don't have slides or anything technical to really talk about, let's not go into technical details too much. I think that might be a non-productive discussion.

-SYG: Very high level first. Realms has had internal push back from Chrome for quite a while as it has to it has has developed and Notably, there's disagreement that they're - you know, we think that realms does solve a use case there is value in Realms in particular even for some Google properties like amp. It would help amp kind of run their amp script in a more synchronous way; currently amp runs its amp scripts in a worker and has to deal with asynchrony for no good reason. If we had Realms there doesn't need to be any asynchrony.
There is a valuable use case to be solved where you want to run some js code that's kind of trusted, like you can trust to run the code or you at least trust it to not be like actively malicious because the realm is a synchronous in process thing. So you partially trust it, and you want it to not have to be exposed to the effects of mutations in the outer realm, you want it to have like a fresh set of stuff. This use case we think is important. The cons of the current Realms proposal as we're debating this out internally is that as we have seen with the development of Spectre, and and as we have seen a common misunderstanding with possible users of Realms, is that folks tend to misunderstand the isolation guarantees that are given by Realms. Now, this is somewhat nuanced. In a post Spectre threat model, if you care about side-channel attacks, then Realms do absolutely nothing, because they are in-process. The state of the art for trying to isolate your code from side-channel attacks by a spectre-like gadgets is at least a process boundary. So depending on how hard-line you are are in thinking what kind of security guarantee that even possible with realms. From the security folks' point of view when we spoke to the Chrome security architecture, they were very concerned that the users might treat this as if it had isolation guarantees because it certainly seems like an isolation primitive where you like spin up this new thing that you run some code in that in and it doesn't have any access to the outside world. Whereas in fact that cannot be implemented as secure as the security architecture folks want. +SYG: Very high level first. Realms has had internal push back from Chrome for quite a while as it has to it has has developed and Notably, there's disagreement that they're - you know, we think that realms does solve a use case there is value in Realms in particular even for some Google properties like amp. 
It would help amp run its amp script in a more synchronous way; currently amp runs its amp scripts in a worker and has to deal with asynchrony for no good reason. If we had Realms there wouldn't need to be any asynchrony. There is a valuable use case to be solved where you want to run some JS code that's kind of trusted - you can trust it to run, or you at least trust it to not be actively malicious, because the realm is a synchronous, in-process thing. So you partially trust it, and you want it not to be exposed to the effects of mutations in the outer realm; you want it to have a fresh set of stuff. This use case we think is important. The con of the current Realms proposal, as we're debating this out internally, is that, as we have seen with the development of Spectre, and as we have seen in common misunderstandings from possible users of Realms, folks tend to misunderstand the isolation guarantees that are given by Realms. Now, this is somewhat nuanced. In a post-Spectre threat model, if you care about side-channel attacks, then Realms do absolutely nothing, because they are in-process. The state of the art for trying to isolate your code from side-channel attacks by Spectre-like gadgets is at least a process boundary. So it depends on how hard-line you are in thinking about what kind of security guarantee is even possible with Realms. When we spoke to the Chrome security architecture folks, they were very concerned that users might treat this as if it had isolation guarantees, because it certainly seems like an isolation primitive: you spin up this new thing that you run some code in, and it doesn't have any access to the outside world. Whereas in fact it cannot be implemented as securely as the security architecture folks want.
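Why an in-process "fresh set of stuff" is so easy to mistake for isolation can be shown with ordinary JavaScript: any object handed to partially-trusted code carries a path back to powerful outer intrinsics through its prototype chain (a minimal sketch; the string below stands in for hypothetical partially-trusted code, not anything from the proposal):

```javascript
// Minimal sketch of why in-process "isolation" is easy to get wrong.
// The evaluated string stands in for hypothetical partially-trusted code.
const untrusted = "obj => obj.constructor.constructor('return globalThis')()";
const run = eval(untrusted);

// Handing over even an empty object leaks a path to the outer global:
// {} -> Object (its constructor) -> Function -> evaluate in the outer scope.
const escaped = run({});
console.log(escaped === globalThis); // true: the boundary was never real
```

Unless every such path is intercepted (the job a membrane does), the two object graphs are effectively one.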
SYG: So I think the main push back is that there's a foot gun here that makes it easy to reach for this isolation-like mechanism that in fact does not give you the isolation you think it gives. There are possible ways forward here. The simplest one is to rename it to something real ugly, with the word "insecure" or "unisolated" in the name or something; there are compromises here. During this discussion with various teams internally, the Chrome security architecture groups, and with Domenic Denicola, who as you know has a lot of web expertise, one of the interesting middle paths we came up with is an alternative that I won't officially present here, because we're still working through the details, nothing really is pinned down, and Caridy and team are also considering just how well it works for their use case - but there's a middle path to kind of get ahead of the foot gun that we're mainly worried about. One of the foot guns is Spectre: I think we have to just basically admit that Realms is a thing that is vulnerable to Spectre, there's no way to get around it; the whole motivation for the proposal is in-process synchrony, so that means you will be vulnerable to Spectre. But one of the other foot guns that we're also worried about is - suppose Spectre didn't exist. Even in that world it is still difficult to use Realms correctly, because you have to pass stuff into the Realm to give it the initial set of capabilities that you want, and if you don't properly intercept all paths, it's very easy to get at something in the outside realm that breaks whatever application-level isolation guarantees you wanted to give. This kind of API just seems difficult to use correctly.
And to get around that kind of foot gun, one idea we were playing around with is imposing an actual hard boundary at the realm: you do not let references pass back and forth between the inner and the outer realm, you do not allow the two object graphs to be intertwined, and instead you have some kind of synchronous message-passing API to do your message passing back and forth out of the realm. This could be something like structured clone, except synchronous. It could be a new API with some kind of copying mechanism; I know Caridy has some other interesting ideas to add new levels of expressivity here beyond just copying, which might be better. There is a drawback to this approach.

-SYG: So, okay, so to the pro to this approach is without like by construction, you cannot mix the object graphs, because by construction you cannot mix the object graphs you would have more guarantees. The con is that it is strictly less expressive than the current proposal. This is a a subtle point. The way in which it is less expressive is that because you can no longer actually mix the two object graphs with strong references as you normally would, if you have a cycle between the inner realm and the outer Realm, you can no longer keep that cycle alive - like you can make the cycle leak all the time, but you can no longer make it be garbage collected as any inter-realm cycle would be. Meaning you you lose this this nice lifetime automatic life time management. If you need a cycle of live objects across Realms like a proxy one side that keeps its its Target alive on the other side. And the target itself on the other side points to a proxy that points back to a Target on the other side like a like a normal cycle. Because these are not actual references, the GC has no visibility into what the liveness, what the reachability property is. It just thinks there is no reference.
So if you don't manually keep it alive it'll just collect half the cycle and the only way to really keep it alive is you just like pin it and you make it live forever. So that's kind of crappy. It precludes a certain use case that Fires live life cycles across the realm I admit I don't fully understand the use cases around that but caridy and team says to me that that use case is used by (?) close. So that's the primary drawback.

+SYG: The pro of this approach is that by construction you cannot mix the object graphs, and because you cannot mix the object graphs you get more guarantees. The con is that it is strictly less expressive than the current proposal. This is a subtle point. The way in which it is less expressive is that, because you can no longer actually mix the two object graphs with strong references as you normally would, if you have a cycle between the inner realm and the outer realm you can no longer keep that cycle alive properly - you can make the cycle leak all the time, but you can no longer have it be garbage collected the way an ordinary same-realm cycle would be. Meaning you lose this nice automatic lifetime management. Suppose you need a cycle of live objects across realms: a proxy on one side keeps its target alive on the other side, and the target itself points to a proxy that points back to a target on the first side, like a normal cycle. Because these are not actual references, the GC has no visibility into what the reachability property is; it just thinks there is no reference. So if you don't manually keep it alive it'll just collect half the cycle, and the only way to really keep it alive is to pin it and make it live forever. That's kind of crappy.
It precludes a certain use case that requires live cycles across the realm boundary. I admit I don't fully understand the use cases around that, but Caridy and team tell me that use case is used by (?). So that's the primary drawback.

SYG: So that's about it. I'm sorry I don't really have more to report. We're still having internal discussions with several different teams, treating this in the usual PM problem-solving way: we identified the use case - in this case the amp-script-like use case, where you want to run some code that is kind of trusted, because this is not an actual security isolation mechanism - and we'll see what technology, perhaps the current proposal, best fits that use case. And as we come back with more actual decisions, hopefully I can give a more official Chrome position on Realms. If I get a couple of minutes I'll add more details.

@@ -257,11 +256,11 @@ SYG: One thing I forgot to mention is that from a chrome security architecture p

CP: Yeah, we have mentioned before that we were okay looking for a new name; if anyone has any idea or proposal for that, we're open.

-MM: So first of all, I wanted to agree strongly that simply saying security boundary is very far from nuanced enough. I wrote a document to clarify these matters - "security taxonomy for understanding language based security" - That ill put into the notes that's already in some threads on this proposal and others. The key thing there is confidentiality and integrity are very separate concerns and rather than simply a binary is or is not a security boundary. It is an Integrity boundary. It is not the confidentiality boundary nobody at any point ever imagined that that Realms would be a boundary for protecting against a meltdown or spectre or other side channels. Okay. Now the thing about the object graph leaking between raw realms that are are in contract; that's absolutely true. That's very very tricky to get right.
Agoric at one point when we were trying to do that in a more ad hoc way repeatedly got it wrong, So I completely agree that that's very hard to get right. The right way to address that is with a membrane between the Realms but the problem with combining the membrane with the realms proposal is that now, we don't a universal membrane abstraction. A membrane Creator is not currently a a understood as a reusable abstractions, it's understood as a reusable pattern, and that's why there's so many membrane implementations. So I think that it's important if Realms are don't have any additional mechanism that they they should have strong advice to only use it with an additional installation mechanism like membranes with Realms. The other thing is Is the cycle problem is already solved by membranes. The reason why we one of the reasons why we separated weak maps from weak refs when we first proposed weak Maps is because weak Maps as the mechanism for crossing membranes still allows cycles that cross membrane boundaries to still be collected. In fact, they can cross multiple membrane boundaries and still be collected because of the way weak Maps work. +MM: So first of all, I wanted to agree strongly that simply saying security boundary is very far from nuanced enough. I wrote a document to clarify these matters - "security taxonomy for understanding language based security" - That ill put into the notes that's already in some threads on this proposal and others. The key thing there is confidentiality and integrity are very separate concerns and rather than simply a binary is or is not a security boundary. It is an Integrity boundary. It is not the confidentiality boundary nobody at any point ever imagined that that Realms would be a boundary for protecting against a meltdown or spectre or other side channels. Okay. Now the thing about the object graph leaking between raw realms that are are in contract; that's absolutely true. That's very very tricky to get right. 
Agoric, at one point when we were trying to do that in a more ad hoc way, repeatedly got it wrong, so I completely agree that it's very hard to get right. The right way to address it is with a membrane between the realms, but the problem with combining the membrane with the Realms proposal is that we don't have a universal membrane abstraction. A membrane creator is not currently understood as a reusable abstraction; it's understood as a reusable pattern, and that's why there are so many membrane implementations. So I think it's important, if Realms don't have any additional mechanism, that there be strong advice to only use them with an additional isolation mechanism like membranes. The other thing is that the cycle problem is already solved by membranes. One of the reasons we separated WeakMaps from WeakRefs when we first proposed WeakMaps is that WeakMaps, as the mechanism for crossing membranes, still allow cycles that cross membrane boundaries to be collected. In fact, they can cross multiple membrane boundaries and still be collected, because of the way WeakMaps work.

-SYG: So that's it. So weak maps. Let me see if I understand correctly. Maybe I misunderstood your point. Neither weak maps nor weak refs can solve the cycle problem problem. If there is a boundary where you cannot have strong references across the room balcony, which current --

+SYG: On weak maps - let me see if I understand correctly; maybe I misunderstood your point. Neither WeakMaps nor WeakRefs can solve the cycle problem. If there is a boundary where you cannot have strong references across the realm boundary, which the current --

-MM: That's incorrect. I bet it's directly graphic is correct for weak references. That is not correct for WeakMaps.
+MM: That's incorrect - what you describe is correct for weak references; it is not correct for WeakMaps.
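MM's WeakMap point can be sketched in a few lines (a toy membrane with a hypothetical `makeMembrane` helper that only wraps `get`; production membranes wrap every trap on both sides). Because the target-to-proxy table is a WeakMap, the membrane itself pins nothing, so a cycle that crosses it stays collectable once neither side is otherwise reachable:

```javascript
// Toy one-way membrane: wraps objects coming out of `get`.
// `proxies` is a WeakMap, so the membrane holds no strong references
// of its own - this is the property MM is describing.
function makeMembrane(target) {
  const proxies = new WeakMap(); // target -> proxy, weakly keyed

  function wrap(value) {
    if (value === null ||
        (typeof value !== "object" && typeof value !== "function")) {
      return value; // primitives cross the membrane unchanged
    }
    let proxy = proxies.get(value);
    if (proxy === undefined) {
      proxy = new Proxy(value, {
        get(t, key, receiver) {
          return wrap(Reflect.get(t, key, receiver)); // re-wrap on exit
        },
      });
      proxies.set(value, proxy);
    }
    return proxy; // same target always yields the same proxy
  }

  return wrap(target);
}

const inner = { answer: 42, self: null };
inner.self = inner; // a cycle, observed through the membrane below
const outer = makeMembrane(inner);
console.log(outer.answer);         // 42
console.log(outer.self === outer); // true: identity preserved via the WeakMap
```

When both `inner` and `outer` become unreachable, the WeakMap entry is eligible for collection along with them, which is why a WeakMap-based membrane does not suffer the pin-it-forever problem of the hard-boundary design.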
SYG: The problem is the converse: you cannot tell the GC tracer about a reference that is not there, in order to keep the cycle alive.

@@ -277,7 +276,7 @@ MM: Okay, I understand. There's nothing shor. I can say in response, but there's

CP: Mark, just to add to that, we need to work on this, but I do believe there are options on the table that might allow us to do membranes on top of this hard boundary between realms - let's take this offline.

-JWK: I stand for Mark, I think the membrane built-in mechanism is very important for developers to use it with less effort. Although Shu says this is a footgun that developers might think is a security boundary, we can make the hard boundary as a default to make it less error-prone. If we are giong have to have some hard boundary mechanism, it should be the default but not an enforcement. By exposing some options we can still have some direct access to enable some more advanced usages.

+JWK: I stand with Mark; I think a built-in membrane mechanism is very important, so developers can use it with less effort. Although Shu says this is a footgun because developers might think it is a security boundary, we can make the hard boundary the default to make it less error-prone. If we have to have some hard boundary mechanism, it should be the default but not an enforcement: by exposing some options we can still allow direct access, to enable more advanced usages.

LEO: Yeah, just to expand on two things Shu and Mark have mentioned. I agree the membrane is not directly the immediate use case, but rather the path towards functional use cases. It is important, as Mark has mentioned, that there is a pattern and not a single abstraction.
I think one of the ideas here is that if we can work on top of that to create some abstraction that can be used in general, we should be discussing that as a step forward, like homework. In the same way, one of the use cases this would solve - and I don't think I've made it clear before, and I also plan to expand on it - is the one Salesforce uses, and I believe we'd see this use case for other, mostly enterprise, users coming into the web: (?) an app marketplace. The same way browsers have extensions, we have an app marketplace where clients can use a mixture of their compartmentalized components in their application, and the company itself is the system. This is a pattern that Salesforce uses today, and one that other companies need as they grow, where you have an at-scale set of clients, etc. We should expand on these use cases more. Even when we are not talking about security, integrity remains a really fundamental aspect of what we hope to achieve with Realms. Once again, as Caridy has mentioned, we need to work through how to tackle this. I believe there is a chance that we can get the best of both worlds here.

@@ -303,7 +302,7 @@ SYG: Well, I think this kind of goes back to Kevin's point, which is if you're h

BFS: I'm not arguing, I'm inquiring: I don't understand what they are trying to get at motivation-wise here, when we saw a presentation taking kind of the opposite approach an hour ago.

-SYG: I'll let KOT chime in here. I think think the key importance between the trusted types program and the concerns about Realms is that trusted types is explicitly about filtering out stuff for code that you already trust to run, whereas realms could be easily Miss construed to be designed for code that you don't trust that you run inside a realm.
Whereas as it is from the security folks point of view it in fact does not guarantee that right because of such as well

+SYG: I'll let KOT chime in here. I think the key difference between the trusted types program and the concerns about Realms is that trusted types is explicitly about filtering input for code that you already trust to run, whereas Realms could easily be misconstrued as being designed for code that you don't trust, which you run inside a realm - whereas, as it is, from the security folks' point of view it in fact does not guarantee that.

BFS: The code example in the D3 CSV example did have the potential to be exploited, because you're trusting something that's enforced(?) by a potentially mutable API. We could get into more details here. I just don't understand the adoption path they're aiming towards with these, and I don't understand the mandate of a security boundary except as kind of a mandate - there's no expectation for people to achieve it.

@@ -314,22 +313,23 @@ SYG: No contention is a good way to put it there's I don't think there's some fi

BFS: So one thing that may be a use case as well, which I haven't been hearing about, is hot module reloading, or customization of modules in general. The way people do it now, they do it in the same process, because they have no real way to do it across an encapsulation boundary. It would be good, while you're talking, to get some feedback on how they expect people to encapsulate that boundary for loading, rather than having every version of every app in a hot reload in the same place.

DE: I think this cycle problem is very inherent. It comes up whenever you're bridging multiple different places where code is being executed. For example, when WebAssembly and JavaScript interact, they have different object graphs that could point to each other. Sometimes you can use WeakRefs.
We decided to add WeakRefs to meet some use cases, but they don't handle the cycle problem. So we're working on the Wasm GC proposal to allow a shared object graph between Webassembly and JavaScript with cycle collection. It is similarly important that we make multiple places for JavaScript code to run so that they'd be able to have rich references to each other in some way or other. Maybe it's possible to be this over a hard boundary, but there needs to be a way to handle this cycle problem because it continues to reoccur in different places. + ### Conclusion/Resolution -Discussion continues internally at chrome and elsewhere. +Discussion continues internally at chrome and elsewhere. ## Intl Locale Info for stage 2 + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/tc39/proposal-intl-locale-info) - [slides](https://docs.google.com/presentation/d/1ct7h9pLHmXCwojGlReNjAT9RgysqLk_3lyUcllnOQYs/edit#slide=id.p) - -FYT: Okay, so, my name is Frank Tang work for Google on v8 internationalization team. So today I have three proposals to talk about this meeting. So two of them will be back to back right now and the other one for our stage one proposals. They Advanced Mobile tomorrow for about time zone. So the one I first talked about squat until Locale in for in this particular, You would like to advance to stage two one sec Frank. I had neglected to make sure that we have no takers. So the motivation of this proposal is to try to expose our local information For example the “week” data. What does that mean? In different locales systems in particular when you render a calendar to the first day of the week to be different some for example in UK. I think the start was Sunday and us You'll start Monday or vice versa one way or the other I guess. In the U.S. I think usually people consider Saturday is a started a of weekend and those Sunday and of we can but in a lot of other part of the world, I think Israel and a lot of Muslim country. 
They start the weekend on Friday and end on Saturday, so those kinds of information and also with the particular year, which we can consider the first week the end. how many days minimum stay in that first week. Which hour cycle is used in the Locale and what kind of measurement system are used in that Locale Locale so this do You have been discussed and tc39 meeting September last year and then stage two one and I'm come to ask for the stage two proposal.

+FYT: Okay, so, my name is Frank Tang; I work for Google on the V8 internationalization team. Today I have three proposals to talk about at this meeting. Two of them will be back to back right now, and the other one, a stage one proposal about time zones, will be presented tomorrow. The one I talk about first is called Intl Locale Info; this particular one we would like to advance to stage two. (One sec Frank - I had neglected to make sure that we have note takers.) So the motivation of this proposal is to try to expose locale information, for example the "week" data. What does that mean? Different locale systems differ, in particular when you render a calendar, in what the first day of the week is: for example, in the UK I think the week starts on Sunday and in the US it starts on Monday, or vice versa, one way or the other. In the U.S. I think people usually consider Saturday the start of the weekend and Sunday the end of the weekend, but in a lot of other parts of the world, I think Israel and a lot of Muslim countries, the weekend starts on Friday and ends on Saturday. So: those kinds of information; also, within a particular year, which week we can consider the first week, and how many days minimum are in that first week; which hour cycle is used in the locale; and what kind of measurement system is used in that locale. This was discussed at the TC39 meeting in September last year and advanced to stage one, and now I come to ask for stage two.
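The kind of data being discussed can be sketched with Intl.Locale (a hedged sketch: `baseName` and `maximize()` are already standard ECMA-402, while the `weekInfo` and `textInfo` getters are the proposal-stage shape and are feature-detected here since engines may not ship them):

```javascript
const locale = new Intl.Locale("ar-EG");

// Standard Intl.Locale surface, available today:
console.log(locale.baseName);          // "ar-EG"
console.log(locale.maximize().script); // "Arab"

// Proposal-stage getters, guarded since engines may not expose them:
if ("textInfo" in locale) {
  console.log(locale.textInfo.direction); // expected "rtl" for Arabic
}
if ("weekInfo" in locale) {
  console.log(locale.weekInfo.firstDay);  // the locale's first day of the week
}
```

A calendar widget could read `weekInfo` to decide which column to render first, which is exactly the use case FYT describes.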
FYT: Here is some prior art, listed here. It very briefly shows that there are JavaScript libraries already doing this kind of thing, and Java and C libraries that already do this. In particular, Mozilla's internal UI library, which is not exposed to web developers but is mainly for Mozilla's internal UI, has had this for a while, and there's a need to expose it to web developers. And we see that in several of these JS libraries; some of them do a pretty good job, some are not that great. But it seems like there's a lot of need for this.

-FYT: the progress so far so in stage zero. We have some design choices that expose this information as a proposal to consider. I put together a profile to try to see what makes sense and currently we stuck it out a draft to use design option two, which means For option called function call as a getter while in hello Cal and just return object to expose those information. I think the other proposal we drop on the option drop is that each of them has a function called. I could be too much. So that was drop can be seen in that particular URL. I will show you here in high-level. Just roughly show you what it looks like still with the thing. It will return an object and have those value and depend on the Locale will return different values here. For example, Arabic or turn the direction or normally what Arabic Locale the text the global direction for RTL. And similarly for d for week you may have some different value. (re: slides - So apologize I somehow didn't capture that. I think as I mentioned this getter should be attached to Intl.Locale instead of Intl itself.) So this is a proposed change in the Function. so have spec'd here. Also for the unit in form, which mean that CLDR in the Unicode standard we have three different kind of system for the unit info Matrix In this area we could have some discussion of if it is useful or not.
I think we may see you have some discussion whether this is used for not whether we can do to change it. There's some discussion in this area but we think this probably should be able to discuss this in stage 2. And also the week info I think is basically the change: that we're going to return the object to expose this value. So for example a calendar widget they can use this to render the calendar calendar or calendar application can use it. Yeah, the `.defaults` part there is not really clear spec out yet. So we still try to figure out what kind of information should be exposed here and whether it is why way to expose because that could be a burden to the implementation and we have to access different kinds of objects to get this information. We're still trying to figure it out. But the idea is that we have some way through exposure (?).

+FYT: On the progress so far: in stage zero we had some design choices for how to expose this information for the proposal to consider. I put together a comparison to try to see what makes sense, and currently we have a draft using design option two, which means exposing each piece of information as a getter on Intl.Locale that just returns an object. The other option, which we dropped, was that each of them would be a function call; that could be too much. The dropped option can be seen at that particular URL. I will show you here, at a high level, roughly what it looks like. It will return an object with those values, and depending on the locale it will return different values here: for example, for an Arabic locale it returns the text direction RTL, and similarly for the week data you may get different values. (Re: slides - apologies, I somehow didn't capture that; as I mentioned, this getter should be attached to Intl.Locale instead of Intl itself.) So this is the proposed change, as spec'd here.
Also, for the unit info: in CLDR, in the Unicode standard, we have three different kinds of measurement systems for the unit info. In this area we could have some discussion of whether it is useful or not, and whether we want to change it. There's some discussion in this area, but we think we should be able to discuss this in stage 2. And the week info, I think, is basically the change that we're going to return an object exposing these values; so for example a calendar widget can use this to render the calendar, or a calendar application can use it. Yeah, the `.defaults` part there is not really clearly spec'd out yet. We're still trying to figure out what kind of information should be exposed here and what the right way to expose it is, because it could be a burden on the implementation to have to access different kinds of objects to get this information. We're still trying to figure it out, but the idea is that we have some way to expose it (?).

FYT: So, what happened at the January 14th meeting: we already agreed to bring this for stage two advancement. Note that, as I mentioned, there are still some areas we're not quite sure about. For stage two the entrance material is an initial spec, and the idea is that the committee expects the feature to be developed and eventually included; it doesn't mean that all the details are narrowed down beyond the high-level scope. That is my understanding of what stage 2 means, and I believe the criteria have been met, because we have the initial spec; some details will probably still be modified. Of course, here is what we showed for stage 1, which already passed. So - any questions or discussion about this?

@@ -337,9 +337,9 @@ MF: So I want to preface this comment with, I'm certainly not an expert on this

FYT: Yeah, regarding the unit info area, I think you're absolutely right.
We have some discussions with CLDR folks. Yeah, we have some issues in this area. So it is possible during stage 2 we may want to reconsider whether we have to reduce the scope of all this. You are absolutely right. This is an area where I think other people have expressed some concern. It could be subject to change during stage 2, from my understanding.

-MF: Yeah. I guess it's something that we could address in stage 2, but it also seems like what you've shown here is going in a direction that I don't see being too useful. I don't know whether maybe you'd want to split this part out and kind of try to address it more deeply. What I guess I'm trying to understand is what the goal is. Are you trying to provide an API that has that kind of depth? 
+MF: Yeah. I guess it's something that we could address in stage 2, but it also seems like what you've shown here is going in a direction that I don't see being too useful. I don't know whether maybe you'd want to split this part out and kind of try to address it more deeply. What I guess I'm trying to understand is what the goal is. Are you trying to provide an API that has that kind of depth?

-FYT: I think you didn't get my point. I'm fine to change this so that this part could be removed. That's what I'm saying. 
+FYT: I think you didn't get my point. I'm fine to change this so that this part could be removed. That's what I'm saying.

SFC: So Frank’s proposal is based on what's currently standardized in Unicode Technical Standard 35, UTS 35, and UTS 35 only specifies these three sort-of coarse groups. Now, there's been a lot of work on this subject. There's already a stage 1 proposal called Smart Units and Unit Preferences. So I think that one direction we could go here is to say that, okay, we're going to go ahead and add the UTS 35 style, the coarse three-category measurement systems, or we can continue doing this work over in the unit preferences proposal that's already at stage 1.
Because it is definitely a large and challenging space. I have teammates who have been working in this space over much of 2020, and it's definitely a challenging problem without any simple solutions. With a simple solution there's a risk of being too simple and not being correct, and being too complicated could be difficult to use correctly. I think that's sort of the nature of this unitInfo getter. I think that we could drop this from this proposal and continue working on it later. But I also think that it has some value, since it conforms with the existing Unicode standard.

@@ -349,11 +349,11 @@ FYT: Yeah, I agree with you. That's why I saying it could be dropped during stag

MF: Is that something that we should consider before stage 2, though? If it seems like we're generally unhappy with that API other than for the consistency of providing the results according to UTS 35, or whatever, then we just proceed without it by default? Like that seems like the right thing to do.

-FYT: Are you proposing conditional advancement to stage 2 by removing that? Is that what you're proposing? 
+FYT: Are you proposing conditional advancement to stage 2 by removing that? Is that what you're proposing?

MF: What I would do if I was championing this proposal is I would ask for the proposal to move forward to stage 2 without the measurement API, `unitInfo`.

-BT: "ZB agrees" is on the queue, for what it's worth. 
+BT: "ZB agrees" is on the queue, for what it's worth.

SFC: We discussed this in the 402 meeting in December, and I also generally agree with the sentiment that unitInfo should be part of the unit preferences proposal. I have a slight preference for not including it.

@@ -365,22 +365,24 @@ BT: I'm not hearing any objections.

FYT: Anyone can second that?

-SFC: I second. 
+SFC: I second.

FYT: Michael how about you? How do you feel?

MF: I'm happy with that.

+
### Conclusion/Resolution

-Stage 2 without the unit info part. 
+Stage 2 without the unit info part.
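As context for the resolution above, the getter-returning-object shape (design option two) that FYT described can be sketched roughly as follows. This is an illustrative stand-in only: the class, field names, and values below are hypothetical, not the proposal's spec text, and the `unitInfo` getter is omitted per the resolution.

```javascript
// Hypothetical sketch of the API shape under discussion: getters on
// Intl.Locale that each return a plain object. This stand-in class only
// simulates the shape for a couple of hard-coded locales; real data would
// come from CLDR.
class LocaleInfoSketch {
  constructor(tag) { this.tag = tag; }

  // textInfo: layout information, e.g. text direction ("rtl" for Arabic).
  get textInfo() {
    const rtl = ["ar", "he", "fa", "ur"].includes(this.tag.split("-")[0]);
    return { direction: rtl ? "rtl" : "ltr" };
  }

  // weekInfo: data a calendar widget could use to render a week.
  // Illustrative values only (1 = Monday ... 7 = Sunday, ISO-8601 numbering).
  get weekInfo() {
    if (this.tag.startsWith("ar")) {
      return { firstDay: 7, weekend: [5, 6], minimalDays: 1 };
    }
    return { firstDay: 1, weekend: [6, 7], minimalDays: 4 };
  }
}

const ar = new LocaleInfoSketch("ar-EG");
console.log(ar.textInfo.direction); // "rtl"
console.log(ar.weekInfo.weekend);   // [5, 6]
```

A calendar application would read `weekInfo` once per locale and lay out its grid accordingly; the proposal's point is that this data is otherwise only reachable through workarounds.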
## `Intl.DisplayNames` for stage 2

+
Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/intl-displaynames-v2 )
- [slides](https://docs.google.com/presentation/d/11Ch4Y9yYzMJjznX478Y0QbbCGiOAXbOzLjpYnMH9eck/edit#slide=id.p)

-FYT: Yep. So this is DisplayNames V2, for stage 2. A little background: Intl.DisplayNames was proposed and moved to stage 4 in September 2020, and this is version 2, which is an enhancement of that. Basically what happened is that at the time (?) there were some issues that we could not agree on, and some difficulties, so we just dropped those from the first version to talk about later. So we put together this version 2 proposal to capture some of the more important issues left over from version 1, and at that same September meeting we moved it to stage 1 already. And so this one is proposed to move to stage 2, but with one thing: there's a huge scope reduction between stage 1 and right now.
+FYT: Yep. So this is DisplayNames V2, for stage 2. A little background: Intl.DisplayNames was proposed and moved to stage 4 in September 2020, and this is version 2, which is an enhancement of that. Basically what happened is that at the time (?) there were some issues that we could not agree on, and some difficulties, so we just dropped those from the first version to talk about later. So we put together this version 2 proposal to capture some of the more important issues left over from version 1, and at that same September meeting we moved it to stage 1 already.
And so this one is proposed to move to stage 2, but with one thing: there's a huge scope reduction between stage 1 and right now.

FYT: So the motivation for this API is to enable developers to get human translations of language, region, script, and other display names on the fly, which is commonly needed, and also to provide a straightforward API for this functionality instead of some workaround way to get things done.

@@ -402,7 +404,7 @@ SFC: I just wanted to address, after discussing with other 402 members (ZB in pa

WH: [Referring to the slide about what to do if type is "calendar"] What is the *type* nonterminal on line 4a?

-FYT: So in UTS 35 there is a grammar describing the locale, and one of the, how to say, one of the tokens, or one of the nonterminals, is "type". [displays tr 35 http://unicode.org/reports/tr35/#Unicode_locale_identifier ] 
+FYT: So in UTS 35 there is a grammar describing the locale, and one of the, how to say, one of the tokens, or one of the nonterminals, is "type". [displays tr 35 http://unicode.org/reports/tr35/#Unicode_locale_identifier ]

WH: Okay, so it's an alphanumeric from three to eight characters.

@@ -416,7 +418,7 @@ FYT: Any other questions?

BT: The queue is empty.

-FYT: Okay, so I'd like to ask to advance to stage two. 
+FYT: Okay, so I'd like to ask to advance to stage two.

BT: We have consensus.

@@ -425,6 +427,7 @@ BT: We have consensus.

Stage 2

## Chartering Security TG

+
Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/tc39/Reflector/issues/313)

@@ -446,9 +449,9 @@ ZB: so atmosphere microphone this year or something. So at Mozilla we fairly oft

MF: It may be the case that these topics, even within the committee, overlap a lot and need good communication. I don't know if that's the case though. I think maybe we could get through the whole presentation and revisit this topic after, not just within the mission section here.
-MM: The term "security" is a broad umbrella term. I think it's a fine umbrella term for this TG. And as we go forward we will be making distinctions, because that's crucial to working things through; different mechanisms will serve one part of a distinction and not others. I think privacy very much falls under that umbrella, particularly under the overall set of confidentiality concerns, preventing leakage of information, which is very different than integrity concerns. But yeah, I mean, if my privacy is violated I consider my security violated, so I think it does fall under the umbrella term. 
+MM: The term "security" is a broad umbrella term. I think it's a fine umbrella term for this TG. And as we go forward we will be making distinctions, because that's crucial to working things through; different mechanisms will serve one part of a distinction and not others. I think privacy very much falls under that umbrella, particularly under the overall set of confidentiality concerns, preventing leakage of information, which is very different than integrity concerns. But yeah, I mean, if my privacy is violated I consider my security violated, so I think it does fall under the umbrella term.

-MF: I imagine that there are topics that we would consider privacy topics that would be appropriate for this group to address. But I also imagine that there are privacy topics that I wouldn't consider part of this group. This is a complex relationship that I just haven't fully considered, so I don't know what the right thing to do is here. 
+MF: I imagine that there are topics that we would consider privacy topics that would be appropriate for this group to address. But I also imagine that there are privacy topics that I wouldn't consider part of this group. This is a complex relationship that I just haven't fully considered, so I don't know what the right thing to do is here.
MM: I think it can be for the group to consider the privacy topics that people want to bring to the group, and try to figure out whether we want to include them under our umbrella or not. I think that the meta-discussion there is in scope.

@@ -456,7 +459,7 @@ MF: Sure, I think that's fair. And as we get to later slides, I'll be talking ab

WH: I agree with MM’s position here. If you have privacy violations which can leak sensitive data, that's a security issue. So these two are very tightly intertwined.

-ZB: I want to stress - because I think it has been (?)
that we can talk about in the different domains of expertise - but I think that one thing that I see in this proposal is that it proposes a certain cultural change in TC39 and in how a particular group of interest is meant to, point one, assess the impact of a proposal and, point five, maintain best practices for proposals. Independent of whether it's security or not, this is a new model of interacting with the dynamic of our proposal advancement and development of the language, and I think that maybe this Security Group is going to be the first one to do this, but the impact of that work is going to directly benefit privacy considerations, which are going to be going through exactly the same model of interactions with TC39. On that level I think it is very much aligned, and on the level that WH was listing, and Mark was listing, they do interoperate and impact one another. I do think that there is a strong reason to consider some privacy considerations within this group. I would be very happy to see that represented. I understand that this is not an extension of scope that Michael was hoping for.

MF: I came into this very open to changes to this proposed mission. This is something I came up with; I ran it through some people on the reflector. I'm happy to have it evolve as we have this discussion, and evolve as the group meets and kind of understands its purpose better. I think that the only thing I strongly feel is that as the group continues to operate we have a well-defined mission, even if it may change over time, but something we can point to to say this is something we should be doing or working toward, or should not.
What I'm proposing is that we have these three roles: a chair - specifically a chair group, so that we don't block the progress of the group on somebody being too busy to do their shared duties; the chair would be responsible for prioritizing the TG agenda, communication, scheduling meetings, and refining our scope over time. The second role I propose is that we have a speaker. This role would be for doing the main communication between this new TG and TG1, creating presentations etc., so that it clearly communicates our recommendations and our understandings to TG1. And a secretary, so that we record and document everything that the TG outputs. So those are the three roles that I propose this group have.

-MF: Slide seven. This is again, like on slide five, an example of the process we could have. We would have to have the chairs set the actual process, but here's something we could do. We could have monthly meetings, and we could gate that on whether there are enough topics on the agenda. We need to figure out the duration. We need to figure out where we publish our notes - whether they're published to tc39/notes or not. We need some mechanism for communication outside meetings; GitHub discussions might be one way. We should be doing regular status updates for TG1 meetings - remember, TG1 meetings are the ones you're in right now. I'm proposing yearly selection of those leadership positions from slide 6, coinciding with the TG1 election. I think this makes sense, as the TG1 editor group also was proposing that their term coincide with the chair term in TG1, and of course deference to TG1 on all matters. So TG3, the security TG, would only be responsible for making recommendations, never for producing standards of their own or making unilateral decisions. 
+MF: Slide seven. This is again, like on slide five, an example of the process we could have. We would have to have the chairs set the actual process, but here's something we could do.
We could have monthly meetings, and we could gate that on whether there are enough topics on the agenda. We need to figure out the duration. We need to figure out where we publish our notes - whether they're published to tc39/notes or not. We need some mechanism for communication outside meetings; GitHub discussions might be one way. We should be doing regular status updates for TG1 meetings - remember, TG1 meetings are the ones you're in right now. I'm proposing yearly selection of those leadership positions from slide 6, coinciding with the TG1 election. I think this makes sense, as the TG1 editor group also was proposing that their term coincide with the chair term in TG1, and of course deference to TG1 on all matters. So TG3, the security TG, would only be responsible for making recommendations, never for producing standards of their own or making unilateral decisions.

-MF: And then on slide eight I have a list of people who have, on the reflector thread, expressed an interest in participating in this group. I think this is a fairly strong list. I don't want to speak for any of them; they haven't all explicitly agreed to any of the content that's in these slides, just expressed a willingness to participate in the general concept of a security-focused TG. 
+MF: And then on slide eight I have a list of people who have, on the reflector thread, expressed an interest in participating in this group. I think this is a fairly strong list. I don't want to speak for any of them; they haven't all explicitly agreed to any of the content that's in these slides, just expressed a willingness to participate in the general concept of a security-focused TG.

MF: So finally, slide 9. These are the concrete things that I'm asking for us to agree to today. Number one, consensus from TC39 to create this TG with the scope proposed on slide 3 and the roles proposed on slide 6. Number two, the chairs, following this approval, prescribing a process for selecting TG leadership.
From there, the rest of those undecided things can follow. So that's it. That's all I have.

@@ -482,29 +485,29 @@ MF: That's a fair point. The mission for number five maintaining best practice r

SYG: Yes, process questions - first a comment. I think, given the initial size of the folks called out on the slide there, three formal leadership positions seem like more bureaucracy than needed for the initial size. Wondering if we should scale with growth instead of having a chair plus secretary plus whatever the third one was.

-MF: Someone could have more than one role, I guess, if they chose; there's nothing preventing that from happening if we don't have enough people to fill all those roles separately. 
+MF: Someone could have more than one role, I guess, if they chose; there's nothing preventing that from happening if we don't have enough people to fill all those roles separately.

SYG: Yes, that's of course possible, but I don't see the need for the upfront structure, I guess. But that's not really a major concern.

-MF: So each of the responsibilities would have to be filled either way, right? Like even if we don't have somebody assigned as speaker, somebody would have to create presentations to TG1 and deliver them, whether or not we call them something. 
+MF: So each of the responsibilities would have to be filled either way, right? Like even if we don't have somebody assigned as speaker, somebody would have to create presentations to TG1 and deliver them, whether or not we call them something.

SYG: Yes, I'm worried mainly about lengthening elections, but I guess once a year is not too bad. I don't see the need for three separate roles right now though.

-SYG: Did you go into what the actual process for arriving at the recommendations is? Is it also consensus? The outputs of this TG to TG1 - what are they? It seems like mostly they would be recommendations, perhaps impact assessments on proposals, or these one-time documents like the security model.
How do we arrive at that? What is the process for working within the TG? Is it also consensus? 
+SYG: Did you go into what the actual process for arriving at the recommendations is? Is it also consensus? The outputs of this TG to TG1 - what are they? It seems like mostly they would be recommendations, perhaps impact assessments on proposals, or these one-time documents like the security model. How do we arrive at that? What is the process for working within the TG? Is it also consensus?

MF: That's a good question. I had, I guess, implicitly assumed consensus up to this point.

SYG: I ask because, as you alluded to, the topic is of course very contentious. Different parties care about very different definitions of security, and I am concerned about our ability to have a unified recommendation as output. There is something to be said for - we do not have a clear and shared understanding of security in TG1. And if we pluck a subset of folks into a separate group and hash it out, are we going to come up with a different result?

-MF: Well, I can't guarantee that we would, but it seems like we should take the optimistic approach at first, of assuming that we can, and if that doesn't work we should move on from there. 
+MF: Well, I can't guarantee that we would, but it seems like we should take the optimistic approach at first, of assuming that we can, and if that doesn't work we should move on from there.

SYG: Sure, and part of the hypothesis here is that we haven't given security sufficient time in plenary itself to truly hash this stuff out. Maybe we can be more productive if we block out separate time, but I'm kind of on the fence on that.

MF: Yeah, that is pretty much the idea.
When the topic of security comes up, it's always in relation to some specific proposal, and it's based on the understanding of whomever is speaking at the time, and we never actually address that difference of understandings directly in TG1 - nor do I think it's appropriate for us to do that, given the size of the topic and how contentious it is.

-SYG: Yeah, that's fine. 
+SYG: Yeah, that's fine.

-PHE: I'm referring, or commenting, specifically on the agenda slide. I understand the slide is helpful in giving some examples of what the group might take a look at. I'm not at all comfortable with the places where it uses words like "current" and "common" and "today". I think - security is not a popularity contest, and security is not exclusively about what's being done right now. There is a lot of work being done in JavaScript that is on the edges. The work that we're doing in TC 53 is certainly like that, and yet has very real security considerations around the language. There's a great deal of research and academia around security and JavaScript that's relevant that wouldn't qualify as "current" or "common". And so I don't particularly care for that aspect of the high-level agenda, in that it kind of implies that the focus of this group is the web today, versus the full scope of how JavaScript is used, and taking full advantage of all the knowledge and experience that's there. 
+PHE: I'm referring, or commenting, specifically on the agenda slide. I understand the slide is helpful in giving some examples of what the group might take a look at. I'm not at all comfortable with the places where it uses words like "current" and "common" and "today". I think - security is not a popularity contest, and security is not exclusively about what's being done right now. There is a lot of work being done in JavaScript that is on the edges. The work that we're doing in TC 53 is certainly like that, and yet has very real security considerations around the language.
There's a great deal of research and academia around security and JavaScript that's relevant that wouldn't qualify as "current" or "common". And so I don't particularly care for that aspect of the high-level agenda, in that it kind of implies that the focus of this group is the web today, versus the full scope of how JavaScript is used, and taking full advantage of all the knowledge and experience that's there.

MF: Yes, I hear those concerns. We had a bit of this conversation on the reflector. Slide five here is showing how, if I was chairing this group and I was selecting agenda items to prioritize, I would prefer to prioritize work. I would prefer to have our earlier work - the work we work on over the first year or so - address things that are common and are popular. This isn't to say that we couldn't address these smaller topics or use cases or language-theoretical security, just that I think the highest impact we could have starting out is addressing the most used and most common issues.

@@ -514,9 +517,9 @@ SYG: I would like to remind folks that we are a standards body and the point of

PHE: Sorry if I wasn't clear - I'm not suggesting we should become an industry research group. I'm reminding this group that, from time to time, it hears from people in academia who have relevant input to our work, and there should be nothing in the chartering of this TG which suggests that we would do otherwise.

-SYG: Then I'm confused.
I do not understand the first bullet point to mean that we would not consider academic papers or something like that, but that they not be the motivating thing that we do in the group - just like the motivating thing we do in TG1 is not to read papers and do things resulting from that, but instead to listen to the problems that our participants have and solve those current problems.

-DE: I was a little concerned about the framing of the chartering excluding things like the origin model, but it sounded like the goal here was to say we're not redefining the origin model or deciding whether the origin model is good or not - just considering it as we analyze, just seeing that it is the thing that's out there in the world, not for us to decide on, as we're considering how to analyze the security of JavaScript features. So to me the scoping sounds good. I think it's important that we expect to continue to have disagreements within the group. I don't think we should have an exercise to determine the JavaScript security model and put all the analysis for proposals on hold until we have agreement about that, because I'm not sure we'll ever have agreement about the JavaScript security model. I mean, it would be great if we could, but we just have these standing disagreements. 
+DE: I was a little concerned about the framing of the chartering excluding things like the origin model, but it sounded like the goal here was to say we're not redefining the origin model or deciding whether the origin model is good or not - just considering it as we analyze, just seeing that it is the thing that's out there in the world, not for us to decide on, as we're considering how to analyze the security of JavaScript features. So to me the scoping sounds good. I think it's important that we expect to continue to have disagreements within the group.
I don't think we should have an exercise to determine the JavaScript security model and put all the analysis for proposals on hold until we have agreement about that, because I'm not sure we'll ever have agreement about the JavaScript security model. I mean, it would be great if we could, but we just have these standing disagreements.

DE: So, about the amount of TG administration, just from my past work on internationalisation and outreach groups: it's possible to bring more people into running these groups, but it's pretty hard to recruit a lot of people to do that work. For some groups they've been taken over by somebody else, or somebody's co-leading them; in others not. There aren't groups that I started that have like five different people all jumping to be in some management subcommittee. So let's just be realistic about that.

@@ -560,17 +563,17 @@ BT:Yes.

MM: Whatever they come up with is a suggestion to bring back to the plenary, correct?

-MF: I mean, I think the chairs are well within their rights to just choose this kind of thing. But if they want to run it by plenary, that's fine, too. 
+MF: I mean, I think the chairs are well within their rights to just choose this kind of thing. But if they want to run it by plenary, that's fine, too.

AKI: Don't worry, Mark, we don't get to do anything unilaterally.

MM: Selecting the TG leadership could become - I can see scenarios where it's contentious and political and overrides the preferences of some of the stakeholders. I think that's unlikely, but I don't want to just give them a blank check to decide what the process of selecting the TG's leadership is.

-BT: So I think, just as a practical matter, we'll come back at the next meeting with what we think the process should be, and I also don't think it'll be particularly surprising. So, if that's okay with you Michael, then I think the next meeting would be great.
+BT: So I think, just as a practical matter, we'll come back at the next meeting with what we think the process should be, and I also don't think it'll be particularly surprising. So, if that's okay with you Michael, then I think the next meeting would be great.

-MF: Sounds good. 
+MF: Sounds good.

-MM: Okay. 
+MM: Okay.

MF: Thank you everyone, and thank you to whoever it was that was running my slides for me - I couldn't do without you.

@@ -584,15 +587,14 @@ BT: We unfortunately need to move on from this topic. Let me suggest that Istvan

### Conclusion/Resolution

-
## Do Expressions

+
Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/bakkot/do-expressions-v2)
- [slides](https://docs.google.com/presentation/d/1mPqtucCZvvkcXhm9yhgVMO07UaEG1eMNHHtEDpodA1g/)
-[spec text](https://bakkot.github.io/do-expressions-v2)
-
KG: I brought this up last year, but to recap: I'm picking this proposal up from Dave Herman, who hasn't had time to do this sort of work recently. I am presenting a slightly different variant than he would, so please don't attribute my opinions to him. I presented this a few meetings ago, and nothing has substantially changed since then except that there is now spec text, which was an ask from delegates. By the way, I have a new URL where I have put the spec text, just because neither Dave nor I could get me admin access to the actual repository. So please be sure to look there rather than at the actual repository for the next couple of days. Hopefully I'll get that sorted soon.

KG: So the point of do expressions is that they are an expression which you can put statements into - for example, if you need to scope a variable. We'll see a few more examples later. This is the approximate syntax: there's a `do` followed by a block in expression position, and the value of the do expression is the completion value of the list of statements. Now, as a reminder, completion values are already a thing in the language. They are observable using eval.
They are just not frequently observed, because who uses eval? They are in some cases quite unintuitive. Usually they are just the last expression - or the last expression in statement position, at any rate - that you evaluated. But there are weird corner cases, and we'll talk a bit more about that later.

@@ -605,7 +607,7 @@ KG: Now some things that I think are bad. The completion values for loops - peop

KG: So a few more cases. There is this question of what you do about var declarations: do they hoist out of the block? I have been convinced that the right behavior is that they should hoist out of the block, and that is the behavior that I am proposing, with the exception that if you put a var declaration in a do expression in a parameter position - either as a default or in a computed property for destructuring - this would be an early error. You can't add new variables to the parameter scope; it's just too confusing. Also, one last edge case: you would not be able to do the sort of nonsense B.3.3 functions-in-blocks hoisting. This just would not hoist. This is just a function that is scoped to the block that contains it. There's no magical hoisting of functions - they're hoisted to the top of that block, of course, but not to the top of anything else.

-KG: So yeah, I'm just going to go through the last couple of edge cases really quick here. I'm proposing that the completion value for an empty do expression is undefined. I am proposing that break and return and continue across the boundary of the do expression would be disallowed. I know some people have asked for this. I have also gotten a lot of pushback for allowing this - a lot of people saying that this absolutely must not be allowed. Under my proposal it's disallowed: you cannot have a break or a continue that crosses the boundary of the do. I'm not going to go into this example, but you can read the slides if you care.
I'm not gonna go into this example either, but you can again go into this if you care. The thing I wanted to highlight here was the engines are not completely in agreement on completion values which implies that people aren't relying on them particularly much. I'm also, at a future point, going to propose async do expressions, but that is not currently part of what I am asking for. I'm just asking for synchronous do expressions. You can review the spec text. It's relatively complete. There's a couple of to-dos but I am only asking for stage 2 at this time. It's up on GitHub. Yeah, that was everything I wanted to cover. Do we have a queue? +KG: So yeah, I'm just going to go through the last couple of edge cases really quick here. I'm proposing that the completion value for an empty do expression is undefined. I am proposing that break and return and continue across the boundary of the do expression would be disallowed. I know some people have asked for this. I have also gotten a lot of push back for allowing this. A lot of people saying that this absolutely must not be allowed. Under my proposal it's disallowed, you cannot have a break or a continue that crosses the boundary of the do. I'm not going to go into this example, but you can read the slides if you care. I'm not gonna go into this example either, but you can again go into this if you care. The thing I wanted to highlight here was the engines are not completely in agreement on completion values which implies that people aren't relying on them particularly much. I'm also, at a future point, going to propose async do expressions, but that is not currently part of what I am asking for. I'm just asking for synchronous do expressions. You can review the spec text. It's relatively complete. There's a couple of to-dos but I am only asking for stage 2 at this time. It's up on GitHub. Yeah, that was everything I wanted to cover. Do we have a queue? BT: Yes, we do. 
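KG's remark that completion values are already observable via `eval` is easy to check in any current engine; a quick sketch in plain present-day JavaScript (no do expressions involved):

```javascript
// eval returns the completion value of the code it evaluates, which is
// roughly "the value of the last expression statement that ran".
const taken = eval("if (true) { 'consequent'; }");
console.log(taken); // 'consequent'

// One of the unintuitive corner cases KG alludes to: an `if` with no else
// branch completes with undefined when the condition is false, overwriting
// the value of the preceding statement.
const notTaken = eval("2; if (false) { 1; }");
console.log(notTaken); // undefined
```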
@@ -619,7 +621,7 @@ SYG: I think my topic is up next. It was about the var stuff, where you have an KG: Honestly, I had forgotten that we do that. Yeah, I banned them because they were gross, but you're right that there is no technical need and we could allow them. I don't have that strong of feelings. -SYG: Okay. Yeah, not a stage 2 concern. +SYG: Okay. Yeah, not a stage 2 concern. MM: So I like the idea that we're starting off saying anything that looks hazardous or that we can't agree on or that people might be confused by, we start off disallowing. Most of those things we can always incrementally decide to allow them if we start off disallowed. Let me add a suggestion to that pile, which is that some problem cases go away if we say that this construct is for strict code only. For example your nested function. Anything you do in sloppy mode is going to cause confusion anyway, so I suggest that we start off with that being one of the restrictions and that we consider this a strict only construct. @@ -641,7 +643,7 @@ YSV: We reviewed this as a team, and I want to read some of the comments that we KG: It's a great deal like eval, in terms of - it creates its own scope for variables to be declared in, you get the completion values the same way you do from eval. It just restricts what you could write, but in principle, I believe that just switching this out with a strict direct eval would be pretty much identical semantics. -YSV: A few checks would be for forbidding things such as loops. I think that this is something we can nail down and determine just how significant this implementation complexity will be once we get into stage two. +YSV: A few checks would be for forbidding things such as loops. I think that this is something we can nail down and determine just how significant this implementation complexity will be once we get into stage two. KG: Yes. Thank you. And you know, I'm not dead set on having these restrictions. 
I just think that they will - well, I'll talk about that a little more later.

@@ -663,7 +665,7 @@ KG: Okay, so I could be persuaded to allow loops here if a lot of other people s

MM: I strongly disagree with WH. I think that banning loops is really essential.

-WH: Why?
+WH: Why?

MM: Because of the point KG already made, which is that people will form conflicting intuitions of what the obvious answer is, and therefore it will cause unnecessary surprises. If you ban them, then people will put an expression statement at the end to have the value be exactly what they intend. And the key thing about that is, code is read more often than it is written, and the expression statement at the end is a little bit more trouble for the author, but makes it very clear to all readers exactly what's being returned.

@@ -719,7 +721,6 @@ KG: So I think it is a sticking point for now. I think it is reasonably likely t

TB: Right, okay.
-
KG: So I see from MM a clarifying question about yield implying return. I agree that it is technically true that in a generator you can already put a return in arbitrary expression positions, by having a yield expression and then having the generator prototype's `return` method be called on the instance of the generator to inject a return completion into the middle of the generator. I don't think this is a fact that JavaScript devs are likely to be familiar with, and it is also restricted to generators rather than arbitrary functions. So I am very hesitant to generalize from this example. I have a point about this in my README, in fact. If you want to say more things, MM, go ahead.

KG: So I hope that we can clear the queue, but if someone has to leave and they really want to be here for this discussion, please speak up; but otherwise, I think this should hopefully just be a couple more minutes. The remaining things on the queue seem like they're just agreeing with the decisions in the proposal. So I'm hoping that we can just go straight to asking for stage 2.
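To summarize the shape being asked for at stage 2, a sketch in the proposed syntax (do expressions are not part of the language, so none of this runs in today's engines; `response` and `f` are hypothetical):

```js
// The value of a do expression is the completion value of its statement list.
const status = do {
  if (response.ok) { 'success'; } else { 'failure'; }
};

// Per KG's proposal: an empty do expression completes with undefined, var
// hoists out of the block (but is an early error in parameter position),
// and break/continue/return across the do boundary are disallowed.
const empty = do {};    // undefined

const x = do {
  let tmp = f();        // `let` stays scoped to the block
  tmp * tmp;            // completion value of the whole expression
};
```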
@@ -747,5 +748,3 @@ BT: Okay. All right. So it seems like it won't get stage 2 today, but we'll see

Proposal does not advance. Reason: Conflicting requirements about return and loops need to be resolved.
-
-
diff --git a/meetings/2021-01/jan-27.md b/meetings/2021-01/jan-27.md
index cf6e0b8d..d8a3fe53 100644
--- a/meetings/2021-01/jan-27.md
+++ b/meetings/2021-01/jan-27.md
@@ -1,8 +1,8 @@
# 27 January, 2021 Meeting Notes
------
+-----

-**Remote attendees:**
+**Remote attendees:**
| Name | Abbreviation | Organization |
| -------------------- | -------------- | ------------------ |
| Ross Kirsling | RKG | Sony |
@@ -43,15 +43,15 @@
| Yulia Startsev | YSV | Mozilla |
| Chengzhong Wu | CZW | Alibaba |
----
+-----

## Temporal
+
Presenter: Ujjwal Sharma (USA)

- [proposal](https://github.com/tc39/proposal-temporal)
- [slides](https://docs.google.com/presentation/d/1HMt7ytn3SOzYk2TUSdXHejkIDpL8Cy5KvZ9Yrz0JdBU)
-
USA: Hello everyone. I'm Ujjwal, and on behalf of the Temporal champions group, allow me to give you the Temporal update. Temporal is a proposal that is trying to add a modern, ergonomic date-time API to JavaScript. We've been working on a load of exciting functionality, all the way from a reusable API basically to stuff like that. There's been work from people at Igalia, Bloomberg, Google, and a host of invited experts and more. And then you'd probably ask me what we've been doing for the last two months. Well, not much actually. As we discussed in the previous meeting, the proposal has basically been done. We've changed a few things here and there. So, let me get into the details: we iterated on the month representation in the data model. This is mostly based on Manish [Goregaokar]'s analysis of non-Gregorian calendars, as well as the implementation work that's been going on in CLDR. If you're using the Gregorian calendar, you probably won't have to do anything.
But if you're using any of the calendars that have a rather creative leap month, like the Hindu calendar (?) - We've been doing a bunch of minor changes around type coercion and validation to make the API more stable, and a bunch of editorial changes. Of course, the documentation has been improving, and I hope it's been helpful in your review so far. We've also started working on HTML integration. There is a pull request that is linked to in the slide. You can find these files on GitHub. One of the big things that has been on our mind, and my mind in particular, has been standardizing this format, as we discussed in the last meeting. We've worked hard on adding support for non-Gregorian calendars, and we want to have a way to represent that in the string representation across the wire. So, of course, we had to come up with a persistent format for ZonedDateTime. But we also didn't want to invent a new non-standard format. As you know, tools like Java and Linux already use a non-standard format that appends a new time zone suffix to the existing standard, and we didn't want to build onto that and make the space even more problematic. So the missing piece here was of course the representation of the time zone and the calendar in this format, and also a few other artifacts that were just outdated in our design. We have a draft out right now that extends RFC 3339, which is being discussed by the IETF CalConnect mailing list and the Calendar working group, and the idea is to bring it to the IETF in early March and get it in the standard as soon as possible so that it can be normatively referenced from the spec.

USA: The current proposal status is that the proposal is frozen and camera-ready. The group has unanimously decided that we have basically covered each corner case that we wanted to, and that there will be no remaining open questions on our end. So from here on, any changes would be in response to your reviews. Please review this proposal.
We wanted to go to Stage 3 this meeting, but the amount of review that we had so far wasn't enough to make us confident. We will be holding champions' meetings, but of course, since there's no further bikeshedding on our end, these meetings will now be repurposed to host you all, answer your questions, and help with the review, so please join these meetings if you're interested. We plan to propose for Stage 3 in March, just to clear any ambiguity. What do I mean by 'frozen'? There will be more examples in the cookbook. Bugs will be fixed. The polyfill will be improved, the documentation also. We will take this time to expand the test262 tests we have right now and port some of the existing tests to test262. We will prepare to move the polyfill into a new project where it can be made more production-ready. And of course, we'll address comments and delegates' and editors' reviews. What we will not do: we'll no longer discuss any new ideas or any new additions. We will not make any major API changes anymore - any API changes, really - unless it's in response to a delegate review that finds a serious problem with the API. We will not make any normative changes in the spec text, except, again, in response to the reviews.

@@ -66,19 +66,16 @@ USA: Yes. They are on Thursdays every week.

YSV: Thank you. It looks like we don't have any comments or questions. Thank you so much for the presentation, and if people are interested in getting a bit more information about Temporal, or if people are maybe implementing it, please go to the Temporal meeting.
-
### Conclusion/Resolution

Proposal is camera-ready, will go for Stage 3 in March.
-
-
-
## Async do expressions
+
Presenter: Kevin Gibbons (KG)

-- [proposal]()
-- [slides]()
+- proposal
+- slides

KG: This is a follow-up to my earlier presentation on do expressions, but it is not the same proposal.
So there's a repository for this, which is not yet on the tc39 org, but it has very initial spec text and a readme and so on that you can read. So the problem statement, since this is going for stage 1, is that introducing an async context requires defining and invoking a new function - specifically a new async function, either an "async function" function or an async arrow function - and that's conceptually and syntactically quite heavy for what is a relatively common operation. I would like to make it easier. So my proposal for how to do that is an async do expression, which is a variant of the do expressions that we discussed earlier.

@@ -90,13 +87,13 @@ MM: So I think I understand the answer, but I want to make sure: if you want to

KG: I would say that there's a couple of other important differences. One is that you're not always in a function context. I think the ability to introduce an async context at the top level of the script is quite valuable for code that I write. And the other is this example with Promise.all where, you're right, I could just write these as do expressions that contain an await. But as you said, that would pause the execution of the outer function, so these two things in this example on screen would not be happening in parallel, and for something like Promise.all the intent is generally to, just as you say, fork execution.

-MM: Okay, so none of that was an objection. I just wanted to sure I understood the motivation. Thanks.
+MM: Okay, so none of that was an objection. I just wanted to be sure I understood the motivation. Thanks.

KG: Yes, that is the motivation.

WH: I guess mine is a similar question. If you're inside an async function, what's the difference between `do` and `async do`?

-KG: inside of an async function? I don't think there would be any differences - actually, no, that's not true.
The difference is that an async do the return value of the async do would be a promise rather than a regular value so you have to await to get the value out. I don't think it would be particularly useful to use an async do inside of an async function except in contexts like this where you want to do two things in parallel. The primary thing is that the inside of the async do does not pause the execution of the outer function.
+KG: Inside of an async function? I don't think there would be any differences - actually, no, that's not true. The difference is that with an async do, the return value would be a promise rather than a regular value, so you have to await it to get the value out. I don't think it would be particularly useful to use an async do inside of an async function, except in contexts like this where you want to do two things in parallel. The primary thing is that the inside of the async do does not pause the execution of the outer function.

MM: Okay. I'm confused by the qualifier in that answer. It seems to me that it's just as useful in an async function, in that cases come up where you want the outer function to proceed with the promise and you want to fork the flow of control.

@@ -126,11 +123,11 @@ KG: It would still result in a promise. It would be as if you had done promise d

MM: No, it should also reify thrown exceptions into rejected promises because that's what -

-KG: you're quite right. It would be as if you had a do expression with a try catch that had the promise constructor that resolved in the try branch and the reject in the catch branch. You're right.
+KG: You're quite right. It would be as if you had a do expression with a try/catch that resolved the promise in the try branch and rejected it in the catch branch. You're right.

MM: Good.

-DE: Yeah, so I'm skeptical of focusing too much on this performance aspect that SYG and YSV raised.
I think if we're thinking about programmers' mental model or folk wisdom, maybe it is more accurately put that there's the meme that promises are heavy and closures are heavy. These ideas both are sort of out there with people micro optimizing for them a little too much maybe. My suspicions are to agree with KG that in most realistic cases you're going to be using this for an actual reason, and the closure overhead won't be dominating. I think it also would not be so hard to explain that the `async do` is like a little `async function`. I think this is a great proposal. It's very useful to encourage the kinds of async programming patterns will be more optimal for doing things in parallel. So I'm in favor of it.
+DE: Yeah, so I'm skeptical of focusing too much on this performance aspect that SYG and YSV raised. I think if we're thinking about programmers' mental model or folk wisdom, maybe it is more accurately put that there's the meme that promises are heavy and closures are heavy. These ideas are both sort of out there, with people micro-optimizing for them a little too much, maybe. My suspicion is to agree with KG that in most realistic cases you're going to be using this for an actual reason, and the closure overhead won't be dominating. I think it also would not be so hard to explain that the `async do` is like a little `async function`. I think this is a great proposal. It's very useful to encourage the kinds of async programming patterns that will be more optimal for doing things in parallel. So I'm in favor of it.

SYG: I agree with DE and KG that I'm not too worried about the actual performance issues. I was more worried about the memory implications in terms of debugging leaks.

@@ -150,9 +147,9 @@ DE: if anything would be permitted in sync do expressions, I mean if we wanted t

KG: My intention is to allow await and yield to be inherited from the outer context in regular do expressions.
I think that's a big part of the proposal, is that it's just like any other expression and has the same capabilities as any other expression. Anyway, I don't want -

-DE: That seems perfectly consistent to me. Sorry for adding confusion.
+DE: That seems perfectly consistent to me. Sorry for adding confusion.

-TAB: I mean given that the await keyword already has a distinct meaning, but obviously required meaning in async to I also think that's fine to have yield as well. I'm not confused about yield, return would confuse me.
+TAB: I mean, given that the await keyword already has a distinct (but obviously required) meaning in async do, I also think it's fine to have yield as well. I'm not confused about yield; return would confuse me.

WH: By this argument you should ban `yield` from sync do expressions as well. There’s also similar confusion about what `this` might refer to.

@@ -164,20 +161,24 @@ KG: Okay, understood.

YSV: Please, Kevin, go ahead and ask for stage 1.

-KG: I'd like to ask for stage 1 for this general problem of introducing an async context in a syntactically and conceptually lighter way, with the specific proposed solution of async do in mind but open to exploring other solutions if async do proves not to be viable.
+KG: I'd like to ask for stage 1 for this general problem of introducing an async context in a syntactically and conceptually lighter way, with the specific proposed solution of async do in mind but open to exploring other solutions if async do proves not to be viable.

YSV: Any objections?

WH: Sounds good!

YSV: Sounds good to me too. It looks like you have stage 1, congratulations.
+
### Conclusion/Resolution
+
- Stage 1
+
## Class Brand Checks
+
Presenter: John Hax (HAX)

-- [proposal]()
-- [slides]()
+- proposal
+- slides

JHX: So this is the class brand check proposal. This proposal actually comes from the discussion of another proposal: ergonomic brand checks for private fields. Specifically, issue number 13.
I think dcleao raised this issue, and I think he has a very strong opinion here. This also inspired me to re-examine the actual use case from the beginning.

@@ -235,7 +236,7 @@ WH: I agree with JHD.

SYG: Agree with JHD that this doesn’t obviate `#x in obj`.

-CZW: I'm wondering about the arguments that the private fields are different from duck typing. Private fields are unique to the class So how could private Fields be duck typing typing in the case?
+CZW: I'm wondering about the argument that private fields are different from duck typing. Private fields are unique to the class, so how could private fields be duck typing in this case?

JHD: That's a fair question. I think that the phrase “duck typing” is probably not the most accurate term here. I'm more thinking that the reason that I check a public property on a thing before I access it is because I want to know that it's there before I access it, and that this is the same motivation: the reason that I would want to do that on a private field. The meaning of duck typing is, “does it quack like a duck, therefore it's a duck”, right? I'm not trying to do that with private fields. I'm just trying to be explicit in my code where I reference the thing I've already checked is there.

@@ -292,17 +293,19 @@ YSV: Stage 1? [yes]

### Conclusion/Resolution
+
- Stage 1, with the explicit understanding that it will not be a replacement for Ergonomic Brand Checks
+
## Ergonomic Brand Checks

-Presenter: Jordan Harband (JHD)
-- [proposal]()
-- [slides]()
+Presenter: Jordan Harband (JHD)
+- proposal
+- slides

JHD: The proposal is the same as it was back in June when I first asked for stage 3. There have been a series of objections that have been explored between meetings, in 1 or 2 different incubator calls, and on GitHub. I believe all of the objections have been addressed, and that the last point was: would the previous presentation of the class brand checks proposal be a replacement?
I continue to believe it would not be a replacement; that it would be a great addition; they would complement each other. So essentially that's where I'm at. The proposal is still fully reviewed, because it hasn't changed since all the editors and reviewers last reviewed it; there continues to be a need for it; and I would like to ask for stage 3.

-MM: I support stage 3.
+MM: I support stage 3.

WH: I support stage 3.

@@ -338,7 +341,7 @@ JWK: I'm curious. Does it generally mean if a class is partially initialized, fo

BFS: There's nothing abnormal about it.

-CZW: I'm still concerned in the partial initialization case that how could probably in the partial instance? Private in can only detect the partial instance but not - there is no way to recover from it. So it has to be a fatal error in the case.
+CZW: I'm still concerned about the partial initialization case. Private `in` can only detect the partial instance, but - there is no way to recover from it. So it has to be a fatal error in that case.

BFS: Sure, so, let's go back to the example with addEventListener. One thing you can do, if you're concerned that your class may be partially initialized for some reason inside of your side effects, is perform a check to see how far initialization occurred and remove things such as the own event listener that exists on it. So we can't make partial initialization impossible, nor can we make it recoverable - the key is we want to make it detectable. So that's what this is allowing us to do without causing errors by trying to access things that don't exist.

@@ -350,7 +353,7 @@ BFS: We don't actually have those semantics ironed out so I'm going to say no.

SYG: I think one of my topics got deleted; I had two. So I'll go through the first one, which was: I think on technical merits I am strongly in favor of designing building block features, which I see as a sign of composability, to express higher level intentions.
There may be opinions that it's not tailor-fit for a higher-level intention, but I want to strongly disagree that that is a negative sign for a feature. I don't think that it's a negative sign for a feature that it is low level. So that's the first point, and I think Bradley covered very well the difference in use case and how both are useful. And my second point is that I thought several delegates had just agreed to stage 1 kind of contingent on the class brand checks proposal not being a replacement for this one. And now it seems like there are blocking concerns on this one because the other one exists; something seems off to me here with the process.

-BFS: Yes, I'd agree. It feels like you get more capabilities from this proposal than has instance. So I would prefer it if it comes down to one or the other.
+BFS: Yes, I'd agree. It feels like you get more capabilities from this proposal than hasInstance. So I would prefer this one if it comes down to one or the other.

JHX: Just one thought - I want to say it's becoming a political issue. Can we focus on the technical?

@@ -434,19 +437,20 @@ YSV: Final Call.

[silence]

-YSV: Okay. Now I will say that we have stage 3. Thank you everybody. Thank you everyone for your patience with that.
+YSV: Okay. Now I will say that we have stage 3. Thank you everybody, and thank you everyone for your patience with that.

### Conclusion/Resolution
+
- Stage 3

## Extend TimeZoneName Option Proposal for stage 1
+
Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/FrankYFTang/proposal-intl-extend-timezonename/)
- [slides](https://docs.google.com/presentation/d/1CABEQP_U-vCUxGKXbJmaZKvJZHEdFZZtAHGAOnRbrCY/edit?usp=sharing)
-
-FYT: Okay. Hi everyone. My name is Frank Town Walk by Google on the be a internationalisation team and today we'll talk talk about a proposal extend the Ecma for to to this plane and sorry.
Until I can afford to save time format so sorry. Could someone mute? Yeah. Thank you the motivation of proposed. So is that we tried to extend the option in the Intl data format for better better support of time option. Sorry. There's someone still have a lot of noise that was typing to give me a little down. Could you you meet please? Thank you. So currently and until data format format. We have different style for time zone name: long or short. This proposal Basically just adding four other new options - short GMT, long GMT, short wall, and long wall. But what does that mean? so if I run a very simple script that the show is so just for looped into this six different option and either the Intl data format, or you can actually called the dates to Locale time string with that. You will see currently the English Locale the short will show PST the long will show Pacific Time send client, but they are time people. You may want to see a GMT offset or something we call wall time. the PT is an abbreviation for specific time (?). So real use case on the web right now, I think this is probably server-side rendering for example, this example show you the NPR news. They were using ET instead of EST, right or EDT because they simply just want to say this is eastern time or MT, mountain time. Sometimes whenever for example the right hand side we have this financial result release. They just want to say eastern time. This is what we call a long wall. So instead of the Eastern Standard Time it was Eastern. This is another example of what will happen if this display in Chinese, this this is a traditional Chinese. So the current the first two is whatever currently already offer in a coma for two and the lower floor was showing you the what my look like in the traditional Chinese Locale. So for example GMT in some other locale maybe will be localized or have a wrapping pattern around that, but it will show the reference related to GMT. +FYT: Okay. Hi everyone. 
My name is Frank Yung-Fong Tang; I work for Google on the V8 internationalization team, and today I'll talk about a proposal to extend Ecma 402's display of time zone names in Intl.DateTimeFormat. Sorry - could someone mute? Yeah, thank you. The motivation of the proposal is that we are trying to extend the options in Intl.DateTimeFormat for better support of time zone display. Sorry, there's someone who still has a lot of typing noise; could you mute, please? Thank you. So currently in Intl.DateTimeFormat we have two styles for the time zone name: long or short. This proposal basically just adds four other new options - short GMT, long GMT, short wall, and long wall. But what does that mean? If I run a very simple script that loops through these six different options, with either Intl.DateTimeFormat or by calling Date.prototype.toLocaleTimeString with them, you will see that currently in the English locale the short style will show "PST" and the long style will show "Pacific Standard Time" (?), but at other times people may want to see a GMT offset or something we call wall time; "PT" is an abbreviation for Pacific Time (?). As for real use cases on the web right now - I think this is probably server-side rendering - this example shows NPR news. They were using "ET" instead of "EST" or "EDT", because they simply just want to say this is eastern time, or "MT", mountain time. Sometimes, for example on the right-hand side, we have this financial results release; they just want to say "eastern time". This is what we call long wall: instead of "Eastern Standard Time" it is just "Eastern". This is another example of what will happen if this is displayed in Chinese - this is traditional Chinese. The first two rows are whatever Ecma 402 currently already offers, and the lower four are showing you what it might look like in the traditional Chinese locale.
So for example "GMT" in some other locales may be localized or have a wrapping pattern around it, but it will show the offset relative to GMT.

FYT: So remember, this is stage 0 - we're asking to advance to stage 1, not stage 2. But during the Ecma 402 working group discussion, there were actually originally a couple of other additional options proposed, but Mozilla had some concerns about payload size: if we add options, we may need more data to be included in the browser. So we did some study, and after that we actually removed some of the original possible values that the CLDR data provides, because we think that could be a little bit too much. But for what we currently have proposed - four options - the short GMT and long GMT for each locale.

@@ -463,12 +467,15 @@ SFC: I support stage 1.

RPR: Thank you, Shane. Okay, so we've had one message of support. So we just do a final check - any objections to stage 1? [pause] No objections. Congratulations, Frank, you have stage 1. Thank you.

### Conclusion/Resolution
+
- Stage 1
+
## Brand checking
+
Presenter: Daniel Ehrenberg (DE)

- [proposal](https://es.discourse.group/t/strong-brand-checking-in-javascript/557)
-- [slides](https://docs.google.com/presentation/d/1-zhONcg-vHS2klj9O9r7JWzeip-OqSIXnzDnXB7iGXE/edit)
+- [slides](https://docs.google.com/presentation/d/1-zhONcg-vHS2klj9O9r7JWzeip-OqSIXnzDnXB7iGXE/edit)

DE: Okay. So brand checking in JavaScript - this is a short presentation for an informal discussion. I don't have a big concrete proposal to make. So what is brand checking? We were using this term in earlier presentations this meeting. Brand checking is just a piece of TC39 jargon for checking whether an object has an internal slot, which is really the same thing as a built-in private field. So internal slots are used to store the state of JavaScript objects that are built into the language or platform. So when do brand checks occur?
They mostly occur right before using that internal slot. So before reading or writing one of these pieces of internal state, if you have an arbitrary object, you need to check that the slot exists and throw the appropriate exception if it doesn't. So why do brand checks exist? Most of the time, brand checks are used to check that this internal state can be used safely. So this is in contrast to a fully object-oriented design. These are the equivalent of private fields, not public fields, so that means that the class - or the language standard, in this case - is able to maintain invariants about them, making them safer to use. So there are many, many different functions and methods in the JavaScript standard that do these brand checks. It's not at all rare: lots of functions that are on prototypes, like Date.prototype.getFullYear() or Map.prototype.get(), all at their first step check that they're working on what they think they are - that this value has a particular brand or a particular internal slot. It's also used on arguments to functions. For example, the TypedArray constructors: if you call one on a TypedArray or an ArrayBuffer, then it will have a certain behavior. And that's done by checking the internal slots of the argument. If you do JSON.stringify on a Number wrapper or a String wrapper, those will be unwrapped due to this particular slot. Promise.resolve does so as a kind of optimization: if you pass in a real promise, then it won't create an extra layer of indirection. Array.isArray is a brand check, but a special kind of brand check that I'll talk about later. And one example from the web platform is postMessage: when you postMessage or do anything that uses the HTML serialization algorithm, such as writing into IndexedDB, there are certain classes that are built into the platform that are brand-checked for and serialized in a predictable way, such as Maps.
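The distinction DE draws between brand checks and prototype-based checks can be illustrated with present-day code; `isRealMap` is a hypothetical helper using the ad hoc throw-based technique:

```javascript
// An object can pass instanceof without having the internal slot
// ("built-in private field") that a real Map carries.
const fake = Object.create(Map.prototype);
console.log(fake instanceof Map); // true, yet there is no [[MapData]] slot

// Today the brand can only be probed indirectly, e.g. by calling a method
// that performs the brand check internally and seeing whether it throws.
function isRealMap(value) {
  try {
    Map.prototype.has.call(value, undefined);
    return true;
  } catch {
    return false;
  }
}

console.log(isRealMap(new Map())); // true
console.log(isRealMap(fake));      // false
```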
@@ -512,7 +519,7 @@ MM: I was against it, but I didn't block consensus because overall JHD’s accou

JHD: One thing that I regret is that I didn't come up until much later with an alternative suggestion of making `Symbol.toStringTag` properties be brand-checking accessors, which would have resolved those concerns, but now we’re in a different place.

-DE: I have my opinion, I guess I made it clear earlier in this presentation, but I don't know. I mean that's that's water under the bridge now. I guess people probably depend on the extensibility of `Symbol.toStringTag` at this point. I'm not happy about that. But yeah, so I think because brand checking is useful and because we've been consistently providing it for new classes that were added. And I think we should just make a direct API for it. and is the idea for this proposals long been called *Type*.is*Type*. and I have the *type* in italics to make it clear that that's not part of the name. So for example Map.isMap would be a static method that takes an object and tells you whether it has a map data internal slot. Actually this obviously isn't the final spec text because it doesn't handle the non object case. But the idea would be that rather than having ad hoc ways to test the brand, we have a built-in way that is easy to use.
+DE: I have my opinion - I guess I made it clear earlier in this presentation - but that's water under the bridge now. I guess people probably depend on the extensibility of `Symbol.toStringTag` at this point; I'm not happy about that. But because brand checking is useful, and because we've been consistently providing it for the new classes we've added, I think we should just make a direct API for it. The idea for this proposal has long been called _Type_.is_Type_ (the _Type_ is in italics to make it clear that it's not part of the name). So for example Map.isMap would be a static method that takes an object and tells you whether it has a [[MapData]] internal slot. This obviously isn't the final spec text, because it doesn't handle the non-object case, but the idea would be that rather than having ad hoc ways to test the brand, we have a built-in way that is easy to use.

DE: That's it. Discussion? Should we make a stage one proposal? Does anyone want to champion this?

@@ -526,7 +533,7 @@ DE: I want to make a note about Type.isType. At some point when James Snell brou

JHD: I agree but I also would be happy to have that discussion within stage 1 on GitHub.

-MM: I think that Dan's discussion of membranes really created a lot of confusion that I would like to clear up. When you say map dot isMap of M, and M is a proxy or a membrane and Map is your own local Map constructor - in your proposal, the membrane has no opportunity to intervene and any intervention it did would fail the job. So going back, the thing about this is not that the membrane mechanism makes a special case for `this`, it's that there's nothing it can do with regard to the other parameters that helps because map.isMap is not invoking isMap on the membrane. I'm invoking it on a proxy from the member and invoking it on your own local static.
+MM: I think that Dan's discussion of membranes really created a lot of confusion that I would like to clear up. When you say Map.isMap of M, where M is a proxy from a membrane and Map is your own local Map constructor - in your proposal, the membrane has no opportunity to intervene, and any intervention it did would fail the check. So going back, the point is not that the membrane mechanism makes a special case for `this`; it's that there's nothing the membrane can do with regard to the other parameters that helps, because isMap is not being invoked on the membrane. I'm invoking it on a proxy from the membrane, using your own local static method.

DE: Sorry for my error here.
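[Editorial note: a hypothetical sketch of the proposed shape. `Map.isMap` is not a standard API, and probing `Map.prototype.has` is just one userland approximation of a [[MapData]] brand check:]

```javascript
// Hypothetical polyfill-style sketch of a Map.isMap static method (NOT a real
// API). It approximates the [[MapData]] brand check by calling a Map.prototype
// method, which throws for receivers without that internal slot.
Object.defineProperty(Map, "isMap", {
  configurable: true, // left patchable, e.g. so a membrane system could replace it
  writable: true,
  value(value) {
    try {
      Map.prototype.has.call(value, undefined);
      return true;
    } catch {
      return false;
    }
  },
});

console.log(Map.isMap(new Map())); // true
console.log(Map.isMap({})); // false
// Unlike Array.isArray, there is no proxy piercing:
console.log(Map.isMap(new Proxy(new Map(), {}))); // false
```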
I think this could be addressed by the membrane system overwriting your local Map.isMap to unwrap the membrane. We do a lot to maintain this patchability, and support for membranes specifically.

@@ -544,7 +551,7 @@ DE: Mark and I will follow up in the SES call about this.

SYG: This is more of a clarifying question. The `Type.isType` strawperson - currently you are saying it does not behave like `Array.isArray` with proxy forwarding, right?

-DE: Exactly. 
+DE: Exactly.

BFS: This seems to have some kind of public/private relation with `class.hasInstance`. Maybe these should be combined in some way - those are my initial thoughts. If we don't think they can be combined, this does seem like a useful feature that we could try to move forward with. I could help if needed.

@@ -579,12 +586,15 @@ DE: Okay, Thanks. Yeah, I'll be happy to talk to you more so we could figure out

DE: I want to repeat the call for champions or collaborators. I think JHD expressed interest offline, but anybody else who wants to work together on this, it would be great.

### Conclusion/Resolution
+
- Did not seek advancement; JHD/BFS and others will bring a proposal in the future seeking stage 1.
+
## Relative indexing method
+
Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-relative-indexing-method)
-- [slides](https://docs.google.com/presentation/d/1UQGlq8t1zfAFa6TPvPpO9j6Pyk4EOv62MFQoC2NshKk/edit?usp=sharing)
+- [slides](https://docs.google.com/presentation/d/1UQGlq8t1zfAFa6TPvPpO9j6Pyk4EOv62MFQoC2NshKk/edit?usp=sharing)

SYG: This is not going for stage 4, despite the agenda item title, because when I added it I was not yet aware of the web compat issue.

@@ -602,7 +612,7 @@ SYG: That is correct. And that's my understanding. I'm fairly confident that I d

JHK: Yeah, I want to mention that the Sugar library has an "at" method on the array prototype, from the first version up to 1.3.0.9, so they will have an issue.
If any website uses Sugar's `at` method with the features it provides, like loop mode or getting multiple items, it will break if we land this as is. Kevin has a reply.

- Yeah, sorry. It's not enough for them to just have the method to cause breakage because if they're installing the method the way one would normally install a method by doing array.prototype.a equals function that code will continue to work fine for no break. If so, what are they doing in the Chrome Canary which already have the at the code well have different Behavior.
+Yeah, sorry. It's not enough for them to just have the method to cause breakage, because if they're installing the method the way one would normally install it - by assigning `Array.prototype.at = function ...` - that code will continue to work fine, with no breakage. So what are they doing such that in Chrome Canary, which already has `at`, the code has different behavior?

KG: So do you know how they're setting the method such that it's breaking?

@@ -612,16 +622,16 @@ SYG: Thanks for speaking about the sugar JS thing. I don't know that Library. I

SYG: I will report back next meeting to see if there's any updates on the fixes and what we should do if there's more compat issues.

-
### Conclusion/Resolution
+
- Not advancing, waiting on outreach to bricklink.

-## EraDisplay for Stage 1
-Presenter: Shane Carr (SFC)
-- [proposal]()
-- [slides]()
+## EraDisplay for Stage 1
+Presenter: Shane Carr (SFC)
+- proposal
+- slides

SFC: So I'll be giving this presentation about eraDisplay for stage 1. I want to give special thanks to Louis-Aime for helping me with this presentation. He's an invited expert who's been contributing to our Ecma 402 discussions. Much of the content is from him. So thank you very much for that.

@@ -657,31 +667,32 @@ RPR: Any objections?

RPR: There are none. Congratulations. You have stage 1. Thank you.
- ### Conclusion/Resolution + - Stage 1 + ## Alleviating the cost of spec complexity -Presenter: SFC and ZB -- [proposal]() -- [slides](https://docs.google.com/presentation/d/142N-BWVV4zWkNogciRMsJk3LAs_EZjnKJDH6CIjD6fg/edit#slide=id.p) +Presenter: SFC and ZB +- proposal +- [slides](https://docs.google.com/presentation/d/142N-BWVV4zWkNogciRMsJk3LAs_EZjnKJDH6CIjD6fg/edit#slide=id.p) -SFC: So alleviating the cost of growth. This is an open-ended discussion topic, but the structure of this presentation is we're going to first lay out a problem that we're trying to solve in ecma 402 in TC39 task group two and ZB will lay out the problem and then we're going to go over our proposed solution to that problem and discuss how it might relate to other standards bodies including this standards body, how it might affect task group one of TC39. But the first part of this presentation is going to be focused on ecma 402 and then we can discuss the implications of this elsewhere. So that’s my little introduction and now I'll turn it over to ZB to go over the problem statement. +SFC: So alleviating the cost of growth. This is an open-ended discussion topic, but the structure of this presentation is we're going to first lay out a problem that we're trying to solve in ecma 402 in TC39 task group two and ZB will lay out the problem and then we're going to go over our proposed solution to that problem and discuss how it might relate to other standards bodies including this standards body, how it might affect task group one of TC39. But the first part of this presentation is going to be focused on ecma 402 and then we can discuss the implications of this elsewhere. So that’s my little introduction and now I'll turn it over to ZB to go over the problem statement. ZB: Thank you, Shane. 
Okay, so just as a reminder and to get you all in our universe: Ecma 402 is a specific subgroup of TC39 with the specific goal of lowering the cost of making JavaScript-based apps work worldwide - so lowering the cost of internationalization. Another interesting aspect of our group is that we are trying to build APIs in a way that empowers non-internationalization experts to write well-internationalizable code without much hassle - without having to learn to be internationalization experts. Those two goals are the core of what we're doing, and the strategy we use most of the time is to provide lower-level building blocks for userland components rather than building end-to-end solutions, like how some libraries may approach their API design. So those are the three objectives for 402 proposals.

ZB: So, what's the problem? (next slide please) The problem is that we feel there is an increased tug of war right now between community-driven feature proposals that expand the scope, and the long-term costs of growth of the standard. We see it more and more. One of the things that happened is that over the last year ECMA-402 was very successful.
So we provided all the foundational building blocks, which means that a lot of users who previously would just use client-side libraries for all their internationalization needs now rely on ECMA-402 for the foundational pieces, and then come to us and say, you know, if you only add this one or two things then I will basically have all my needs fulfilled - and there's a long tail of those one or two additional things, of course.

ZB: We spent some time trying to analyze the cost of growth - what is the reason why we wouldn't want to just add all the possible features that anyone ever requests? This concern, I think, is shared with the whole of TC39. We identified: API surface - anything we add to ECMAScript has to be maintained forever; and API quality - as we make mistakes, they accumulate over time and lower the quality of the specification. One particular variant of this API quality deterioration is that we can make an optimal decision at time T1, but it will come into conflict with the optimal decision at time T2, leading to an inconsistency in the spec. A good example is trying to standardize something around eras or calendars: two years from now, when Temporal is stabilized, maybe there will be a better API design available, so if we don't extend the API now, we will be in a better position to extend it well later. But of course that's unpredictable - it may never happen - so it's really hard to evaluate. Then there is a cost of deployment: as we increase the spec size, we are increasing the amount of data and algorithms that have to be carried by all implementers. And potentially we also raise the barrier to entry: the larger the API surface, the lower the likelihood of someone coming up with a new implementation, because developing a new implementation costs more, not just maintenance.
ZB: So the payload thing is an interesting consideration for ECMA 402 in particular, because compared to most TC39 specification API proposals, most of our proposals come with a payload - one of the values we bring is lowering the payload for a website. The amount of data is the number of data tables that we're going to ship (do we have a table for number formatting, for date formatting, for plurals, or something else?), multiplied by the number of locales. So, facing a trade-off between JS engine size and ECMA 402 compatibility, implementers may cut the number of locales, which would hurt internationalization, or selectively pick pieces of ECMA 402 and say “we are not going to implement some timezone formatting” or some relative time format, because we want to keep our implementation small - and that leads to fragmentation of the ecosystem. Both approaches may also increase fingerprintability, because they make it easier to detect that you're on mobile Chrome versus desktop Chrome if a different number of locales is shipped on mobile versus desktop.

-ZB: We also categorized like two types of growth that we see one is a classic API extension, which is usually just a new Option argument toggle something like error display that changes presented.
This is sure between TG1 and TG2 because those extensions carry the cost and risk of a new API, but usually if there is motivation to add it, we are in an unambiguous, fairly good, position to make a decision whether we want to add this feature. What is more interesting are the new APIs that usually bring weight and this is fairly unique to ECMA-402, that the API increases the size of the implementation that every implementer has to to carry. -ZB: So this is this is basically the problem scope as we see it right now, and we started looking into how we can solve it in within ECMA-402 so that our decisions are not, you know, don't differ depending on who is vocal at a given meeting and who is there the person making the final decision we were trying to make it a little bit more objective. So here's the framework that we thought about it. Shane, the microphone is yours. +ZB: So this is this is basically the problem scope as we see it right now, and we started looking into how we can solve it in within ECMA-402 so that our decisions are not, you know, don't differ depending on who is vocal at a given meeting and who is there the person making the final decision we were trying to make it a little bit more objective. So here's the framework that we thought about it. Shane, the microphone is yours. -SFC: Okay, thank you. So I'll talk about what I mean when I say high barrier to stage two and three with new entrance criteria. I'll go over the additional criteria that we have started to apply to ECMA 402 proposals within the Ecma 402 task groups. So this applies only to TC39 task group two, but the hope is that we can apply these when we have new proposals come through. Through a test group two we can shape the discussion around these requirements. So the First new requirement is for prior art. In Ecma 402 we see our job as bringing features i18n experts have already solved to JavaScript developers, not to invent new solutions to those problems. 
We often referenced CLDR and ICU and unicode as prior art the data and algorithms specified sealed your and unicode are of Variable quality and in order to be adopted by ECMA 402 the prior art must be considered best i18n practice by consensus of the ECMA 402 standards committee. So what this means is I think it is pretty straightforward. I think that this is a concern that is probably more ECMA 402 specific. There may be certain aspects of this that are also relevant to TC39 but in terms of ECMA-402 we see ourselves less as inventing new solutions and more as curating them for users on the web.
+SFC: Okay, thank you. So I'll talk about what I mean when I say a high barrier to stage two and three with new entrance criteria. I'll go over the additional criteria that we have started to apply to ECMA 402 proposals within the Ecma 402 task group. This applies only to TC39 task group two, but the hope is that when we have new proposals come through task group two, we can shape the discussion around these requirements. The first new requirement is for prior art. In Ecma 402 we see our job as bringing features i18n experts have already solved to JavaScript developers, not to invent new solutions to those problems. We often reference CLDR, ICU, and Unicode as prior art. The data and algorithms specified in CLDR and Unicode are of variable quality, and in order to be adopted by ECMA 402, the prior art must be considered best i18n practice by consensus of the ECMA 402 standards committee. What this means is, I think, pretty straightforward. This is a concern that is probably more ECMA 402 specific; there may be certain aspects of it that are also relevant to TC39, but in terms of ECMA-402 we see ourselves less as inventing new solutions and more as curating them for users on the web.

SFC: The second entrance criterion is that the functionality is difficult to implement in user land.
And what the criterion says is that features in Intl must bring something to the table that a third-party library wouldn't be able to provide with the same level of efficiency and performance; the champion can cite a heavy locale data dependency and complex algorithms to satisfy this criterion. What this means is that we don't see our job as providing a very large surface of APIs that do things clients can already do. This goes back to providing building blocks, which ZB was talking about earlier: we want to empower third-party libraries and applications to use ECMA-402 to perform the internationalization of their components, but we don't want to be heavily opinionated about how they do that.

@@ -703,13 +714,13 @@ SFC: Yes, and that's what the second paragraph here is. So this is the second re

WH: Yeah. I just want to make sure that the presumption is clear. You don't presume something to be best practice just because it is in ICU/CLDR/Unicode.

-ZB: Yes, and definitely in particular with ICU, which is a really really heavily battle-tested ibrary, we sometimes take lesson from ICU is to not do this the way ICU did. But I take it as an action item on us to clarify a little bit further the answer that I gave to your question, and it should be in our contribution guide. 
+ZB: Yes - and in particular with ICU, which is a really heavily battle-tested library, the lesson we sometimes take from ICU is to not do it the way ICU did. But I take it as an action item on us to clarify a little bit further the answer that I gave to your question, and it should go in our contribution guide.

WH: Yes. I think we agree that it's just that, but I could see other people reading the text and interpreting it in other ways.

ZB: Right, and I want to point out that, as far as I understand, no matter how hard we try this is going to be a bit vague and a little bit up to human interpretation at the end of the road.
There always is a possibility for two people to disagree on whether something is sufficiently justified to become part of the standard.

-WH: Yeah, I know. 
+WH: Yeah, I know.

FYT: Yeah, I would like to mention that 402 is not only client side - JavaScript is server side too.

@@ -721,7 +732,7 @@ YSV: Hi, so actually this is a great segue into what I wanted to talk about. I t

SFC: Yeah, thanks for bringing that up, Yulia - I agree. I think this broad appeal slide is the criterion that's most applicable in general. When you say that there are proposals that are required because they have a high impact on a very small number of users and are critical for invariants, that's sort of what's reflected in the second paragraph of this criterion, which in our case says “critical for a multilingual web”. In TC39's case, if they wanted to adopt a similar set of criteria, it would be - you can figure out how you want to phrase it - "critical to enforce invariants on the web platform", or something like that, right? So that's the spirit here. I also want to talk about broad appeal, because I think this is one area where our goals in ECMA 402 differ a bit from the goals in task group one. For example, some of the new features proposed for the array prototype, or other convenience functions, and even to a lesser extent Temporal, do not necessarily satisfy this criterion. For Temporal, I guess it's more like number two, difficult to implement in user land - Temporal I think would qualify by these criteria, because Temporal has a large enough surface that it is difficult to implement in user land. We also have moment.js, for example, but it has a large payload; people don't like including it because it increases their application size.
I think that's a really good justification that Temporal would qualify under these requirements, but I do feel that we sometimes discuss proposals in task group one that don't necessarily satisfy these two bullet points: “difficult to implement in user land” and “broad appeal.” One thing I want to clarify with this group is whether it thinks these are good requirements - maybe they are not good criteria. I think they are, and ZB thinks they are, but I think it's important for us to be clear that we are applying these to proposals within our task group.

-SFC: Do you have anything to add to that ZB? 
+SFC: Do you have anything to add to that ZB?

ZB: Nope, I am looking forward to hearing positions from TC39 on what we're trying to do, and how this sounds to the parent group.

@@ -731,19 +742,19 @@ SFC: I'd say the first. We're trying to codify and actually write down some of t

SYG: Okay, that sounds good to me. I agree with what YSV was saying before: some of these are less relevant for TG1. If nothing else, some of the metrics are very difficult to apply to general base language design - there's no locale data to include, and the binary/code size increases from new features are an unavoidable tax; there is really no way to get around them. Stuff like broad appeal - those are criteria that we already apply in judging proposals in TG1, though more ad hoc. What I would like to see here is - I don't know - more thoughts on the actual metrics that you want to propose, because other than that it's just “we discuss some more and then you say how you feel”. How else would you apply this?

-SFC: Yeah, I think that this is a step in the right direction.
It's hard to get a very exact quantitative measurement on broad appeal, because if you do come up with some concrete set of metrics to measure broad appeal, there's going to be some proposals where the spirit of them should actually satisfy, but they don't. For example If you look at number of npm module downloads or something, you're going to get some features that maybe have a lot of downloads, but don't have the broad appeal the way that we mean, in the spirit of that requirement, and you're going to have some features that maybe don't have the npm downloads, but do have broad appeal in the spirit we mean. So I see this as as evolving and we may iterate, I expect that we will iterate on these bullet points to quantify them in every way we can, but I think that saying that it has to be completely quantifiable is just going to be really hard to enforce and might be counter to the spirit of what we're trying to achieve.
+SFC: Yeah, I think that this is a step in the right direction. It's hard to get a very exact quantitative measurement of broad appeal, because if you come up with some concrete set of metrics, there are going to be proposals that satisfy the spirit of the requirement but fail the metrics. For example, if you look at the number of npm module downloads or something, you're going to get some features that have a lot of downloads but don't have broad appeal in the spirit we mean, and some features that don't have the npm downloads but do. So I see this as evolving: I expect that we will iterate on these bullet points to quantify them in every way we can, but saying that they have to be completely quantifiable is just going to be really hard to enforce and might be counter to the spirit of what we're trying to achieve.

SYG: Oh, yes.
I was saying the opposite of “it should be quantified”. I was worried that it would be hard-quantified and become a hard entrance criterion, for the same reason you said: some of these proxy metrics could in fact be misleading, and could be used to tell a different story than what the proposal is actually about. At the beginning of this presentation you said something about coming up with metrics as well, and I was wondering - it seems like the most practical way to apply this is as a checklist of topics that we make sure to discuss in the context of the proposal in question before it goes for stage advancement, rather than “please produce some numbers and see if you pass the bar.”

-ZB: I share this sentiment; I don't have a good answer to how to remove the personal motivation out of it. In my ideal world I would like us to establish a framework almost like a scientists conducting an experiment, you know be unbiased, assume you're wrong, everything is unproven until proven. And in our case it will be like unless you cannot be successful without extending the API, then you have to extend the API. I would like all the champions to take a side of let's try not to, and the goal of exploration is to find a way to that avoids extending an API, but it seems to me like the culture of the community right now is the opposite, and I don't know how to to approach that.
+ZB: I share this sentiment; I don't have a good answer for how to remove the personal motivation out of it. In my ideal world, I would like us to establish a framework almost like a scientist conducting an experiment: be unbiased, assume you're wrong, everything is unproven until proven. In our case that would mean you extend the API only if you cannot be successful without extending it. I would like all the champions to take the side of “let's try not to”, where the goal of exploration is to find a way that avoids extending an API - but it seems to me that the culture of the community right now is the opposite, and I don't know how to approach that.

-SYG: I have a response to that I think but it could result in quite a lengthy discussion. I want to do a check on time if I'm safe to raise it or if I should just go on. 
+SYG: I have a response to that, I think, but it could result in quite a lengthy discussion. I want to do a check on time - am I safe to raise it, or should I just go on?

RPR: We do have time. We've got another nine minutes.

SYG: So I want to preface this by timeboxing what I'm about to bring up for discussion to no more than 10 or 15 minutes. One of the interesting things in the web standards space, if you look at how TC39 operates versus how other web standards bodies operate (some people of course have different opinions on how well we work versus how well other bodies work), is that something very unique to TC39 is that we do all our design up front. We get together in a room. We don't really prototype, unless the champion is motivated enough to do some very early stage prototyping ahead of time during stage 1 or stage 2. We don't really produce any artifacts ahead of stage 3. There are some notable exceptions, like the playground stuff with records and tuples - that seems pretty awesome - but that is certainly the exception, not the rule. And I think one fallout of how we work - designing everything up front, going to stage three, and then everybody implementing it - is that it's really difficult to take the scientific approach ZB wants for any number of these metrics, because our time horizon is like two years: we ship it, and then we have to instrument the browser or whatever, see the uptake, see how features are used and abused, and so on.
Other web standards bodies sometimes do things a little bit differently: they don't design everything up front. They incubate and rapidly iterate in the up-front stages by doing these origin trial things, for example what Chrome does. We might have different opinions about that - maybe it is not a good way to do things - but it is a way to get more feedback from the stakeholders and partners that might care about a feature, rather than just debating things in the room, citing “here are some use cases” and “here's who might use it”, implementing it six months later, and seeing what happens. We could significantly change how we work. I'm not suggesting we do that, just presenting a data point. Any thoughts on that?

- YSV: I have thoughts on that. I agree with how you characterized how we do our design that we do a lot of the work up front. I think I disagree with the idea that we can't get a better sense of how a feature might do ahead of time without shipping, and I think that's really what the research group is about. So Maybe that's something that can fit in with this, actually utilizing the research group as a part of the suggestions made here. We do have a scientist working with us who's helping us design surveys. I think there is room for that.
+YSV: I have thoughts on that. I agree with how you characterized our design process: we do a lot of the work up front. But I disagree with the idea that we can't get a better sense of how a feature might do ahead of time without shipping, and I think that's really what the research group is about. So maybe that's something that can fit in with this: actually utilizing the research group as part of the suggestions made here. We do have a scientist working with us who's helping us design surveys. I think there is room for that.
SYG: Yes, I think there is room for improvement without the full “Let's get an implementation and put in the metrics” kind of burden, and certainly I wouldn't want to impose that extra burden for stage two or three. There's a serious reason we work this way, which is that implementing all the new features takes staffing and resources from the beginning. @@ -753,23 +764,24 @@ SFC: Yeah, thank you for raising that Yulia. I'll definitely make sure that all DE: The presentation that Shane and ZB gave was great, and I'm happy to be talking explicitly about these things. At the risk of kind of continuing with it, I want to defend our current development process a little bit. I don't think it's accurate to say that we don't tend to implement during development. For a lot of features that we design in TC39, a polyfill or transpiler implementation is the best, most accessible way to prototype and get real developer feedback, for something like optional chaining and its Babel implementation. You can get all the feedback you need from that, and the native browser implementation doesn't make a big change. With this rapid iteration and prototyping, with origin trials, there remains a risk, especially if it happens in an opaque way with the feedback all controlled by a single vendor and without any standards committee work before shipping, that the feedback of just certain groups, which you could call partners, is emphasized over the feedback of the other stakeholders. I'm happy that we have high-level discussion here and that we include different people even if they're not enfranchised according to kind of opaque processes of just one company, and I think that's how we should continue doing things. Gathering feedback is difficult any way you cut it. Let's do more prototyping of things as well! -ZB: So I want to add to this, the thing that Yulia said.
I often find myself on the side right now on ecma 402 where I am more pushing against additions of new APIs and extension of APIs. It's not a role I ever want to serve. It doesn't make me happy, but two criterias that I'm always trying to evaluate against is, how core is it to the domain of internationalization? How core is this proposal versus how edge case it is, like how like if this is something that you know one problem I love apps is going to use then maybe doesn't belong in the core spec and not every browser and engine should carry it. And it is a hard evaluation to make, I don't think we can apply really hard numbers to that. But this is something that that I'm doing and especially the if the API is bringing data, then I'm trying to imagine like where the data should be located in a perfect world from the logistical perspective, should it be part of the app that you're loading over the wire, or should it be part of the environment and every browser every user should have it on their computer all the time? So this is one consideration that I think is Ecma 402 specific. The other that I am doing and Shane mentioned that it's also maybe not so user specific is this concept of, is it possible to implement it in the userland. Will this Library be somehow inferior? Because it's a library rather than part of the standard API. I'm trying to apply that, and I can give you an example (just food for thought, because I think we're running out of time) but something I think a lot about in the context of the second criteria that I mentioned is there is a request right now to add language negotiation. And it's a hairy domain there are some open interesting questions about whether there is one algorithm for language negotiation, or there are multiple depending on the need, but the question also is can you have a user library in npm that is providing language negotiation. Is it hard? No it's not, it's a fairly small algorithm. It's a couple loops. 
So should it be in the standard library now? The appeal can be broad, you can claim that language negotiation is something that is often done wrong. So if we expose as we kind of help people use the right algorithm rather than being experts. But if we pointed out a good library that does it we would achieve the same goal. So does it belong in the standard or should we push it away because it can be implemented in the standard Library without any data? And we can expose the data that is needed and like low level building blocks and I can see the response being, “well, language negotiation is one of the core internationalisation operations, so a good standard Library should have it.” I don't know how to resolve that. But I think that this is a good example of considerations that I'm trying to use as a litmus test against the criteria that we are setting like what are the criteria are actually helping us make a decision here or is it still like, you know, opinion versus opinion? +ZB: So I want to add to this, the thing that Yulia said. I often find myself on the side right now on Ecma 402 where I am more pushing against additions of new APIs and extensions of APIs. It's not a role I ever want to serve. It doesn't make me happy, but two criteria that I'm always trying to evaluate against are: how core is it to the domain of internationalization? How core is this proposal versus how edge-case is it? Like, if this is something that, you know, one problem I love apps is going to use, then maybe it doesn't belong in the core spec, and not every browser and engine should carry it. And it is a hard evaluation to make; I don't think we can apply really hard numbers to that.
But this is something that I'm doing, and especially if the API is bringing data, then I'm trying to imagine where the data should be located in a perfect world from the logistical perspective: should it be part of the app that you're loading over the wire, or should it be part of the environment, so that every browser and every user has it on their computer all the time? So this is one consideration that I think is Ecma 402 specific. The other, which Shane mentioned, and which is also maybe not so user specific, is this concept of: is it possible to implement it in userland? Will this library be somehow inferior because it's a library rather than part of the standard API? I'm trying to apply that, and I can give you an example (just food for thought, because I think we're running out of time), but something I think a lot about in the context of the second criterion that I mentioned is that there is a request right now to add language negotiation. And it's a hairy domain; there are some interesting open questions about whether there is one algorithm for language negotiation or there are multiple depending on the need, but the question also is: can you have a user library on npm that provides language negotiation? Is it hard? No, it's not; it's a fairly small algorithm. It's a couple of loops. So should it be in the standard library now? The appeal can be broad; you can claim that language negotiation is something that is often done wrong, so if we expose it we kind of help people use the right algorithm rather than having to be experts. But if we pointed out a good library that does it, we would achieve the same goal. So does it belong in the standard, or should we push it away because it can be implemented outside the standard library without any data?
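ZB's "couple of loops" characterization can be made concrete. The sketch below is a hypothetical lookup-style negotiation, loosely in the spirit of RFC 4647 Lookup; the function name and signature are illustrative only and come from no proposal:

```javascript
// Hypothetical sketch of basic language negotiation (illustrative only):
// try each requested tag, then progressively truncated prefixes of it.
function negotiateLanguage(requested, available, fallback) {
  for (const tag of requested) {
    const parts = tag.split('-');
    while (parts.length > 0) {
      const candidate = parts.join('-'); // e.g. "en-US", then "en"
      if (available.includes(candidate)) return candidate;
      parts.pop();
    }
  }
  return fallback;
}

negotiateLanguage(['fr-CH', 'en-US'], ['en', 'de'], 'de');
// 'en' (no exact match, but "en-US" truncates to "en")
```

Real negotiation has many more wrinkles (extension subtags, quality weights, distance-based matching), which is part of ZB's point about whether it belongs in the standard library.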
And we can expose the data that is needed, and the low-level building blocks. And I can see the response being, “well, language negotiation is one of the core internationalisation operations, so a good standard library should have it.” I don't know how to resolve that. But I think that this is a good example of the considerations that I'm trying to use as a litmus test against the criteria that we are setting: are the criteria actually helping us make a decision here, or is it still, you know, opinion versus opinion? -RPR: Is there any closing statement or summary of where you're at? +RPR: Is there any closing statement or summary of where you're at? SFC: Yeah, I think we had an effective discussion on these bullet points. I appreciate that. Thank you for bringing up other standards bodies. Thank you for the feedback from all the delegates. Now I'll leave it up to this body, the chairs, the editors, or whoever wants to, if there's anyone from this body who wants to take up the mantle on adding these requirements to Task Group 1 stage advancement criteria. I think this is more of a public service announcement that we're doing this in TG2, and I'm happy to continue this discussion offline if there's interest in codifying this elsewhere. So thank you everyone very much. - RPR: Okay, thank you, Shane and ZB. Okay, good. So we'll move to DE, who has a small announcement. Just following on from the JSON modules yesterday, I think. + ### Conclusion/Resolution + - Topic remains open for discussion. ## JSON Modules Revisit + Presenter: Dan Ehrenberg - [proposal](https://github.com/tc39/proposal-json-modules) - DE: I'm not sure if we had editor reviews or appointed reviewers for JSON modules. I think we should consider it conditionally advanced to stage three pending those reviews, and I want to call for reviewers. So, any reviewers? Non-editor reviewers? Or do we want to say the editor review is sufficient?
I think we can come to consensus on anything, but it would be kind of unusual to not have delegate reviews. JHD: I can do delegate review, as by the time it's going for stage 4 I'll no longer be an editor. @@ -786,22 +798,20 @@ KG: I'm sorry. We didn't have an issue where we track that, but I certainly revi BSH: I was one of the main dissenters, which is why I think I should do it at some point. Yeah, so I think it's still good, but we can review it; last time I didn't do it officially. I think for small proposals one reviewer is enough; I think we've done that in the past. I have some memory of doing it in the past. So I think so. So do we have consensus on considering this conditionally advanced to stage three pending Bradley's review? -YSV: I could also volunteer as a second reviewer. I've been looking at this. You can put me down. +YSV: I could also volunteer as a second reviewer. I've been looking at this. You can put me down. AKI: You had consensus without having this conversation. Is that correct? In the earlier conversation we concluded consensus. DE: Yeah, apparently all of us forgot about delegate reviews. Thanks for being flexible about considering this conditional advancement anyway. -RPR: Okay, good. So we have consensus and thank you to Brad and Yulia for volunteering to be the delegate reviewers for JSON modules. - +RPR: Okay, good. So we have consensus, and thank you to Brad and Yulia for volunteering to be the delegate reviewers for JSON modules. ### Conclusion/Resolution -- JSON modules is conditionally stage 3 pending reviews from BFS and YSV. +- JSON modules is conditionally stage 3 pending reviews from BFS and YSV. ## PSA about Blink's new "developer signals" requirement - SYG: more than 3 minutes as part of the blink shipping process one of the recent changes.
It is that with every new feature that ships, in the intent to ship (for those that may not know: when we ship a new feature, we send out an intent to ship, and this is done for both JS features and web features, and other browser vendors do this as well for their engines), one of the new requirements on the Blink side is that each intent, including for JS features, be accompanied by some evidence of a developer signal. This could be a practitioner saying “I like this,” this could be a practitioner saying “I don't like this,” or “I wouldn't use this,” and so on. Selfishly, it would be easier for me if we remembered to discuss these signals during the proposals themselves as an entrance criterion to stage three. I'm not proposing a change. I think we already talk about how developers would feel about these things, but this is a PSA that it would be good for us as a group to remember to touch on the developer-signal part of a proposal at some point before stage 3. YSV: Just to clarify: is the proposal that, for example, we all send intents to prototype when something reaches stage 3 and let developers comment? Because effectively that is what we're agreeing to when something reaches stage 3. And not, for example, the Mozilla standards position, i.e. what Mozilla thinks explicitly?
That would be you doing my work for me and I would deeply deeply appreciate that. @@ -828,8 +838,8 @@ YSV: like–please. JHD: The point of getting stage 3 is to get to stage 4, and in the language; we can't get to stage 4 unless it’s implemented; we can't implement it unless we meet whatever engines’ criteria happen to be. So if there's new criteria, I don't think we even need to update the process document - if we're interested in actually getting it shipped then it seems like that's a criteria we should be bending over backwards to try to satisfy, and adding the stuff we've already thought about, hopefully, which is “developer interest” to the notes - it seems like a simple solution. -SYG: I don't want to suggest that this is a hard line, that if there's no positive developer sentiment we will not ship something. It is another input to the shipping process. These will always be traded off in a holistic way for the entire proposal. All right, that was it for me. +SYG: I don't want to suggest that this is a hard line, that if there's no positive developer sentiment we will not ship something. It is another input to the shipping process. These will always be traded off in a holistic way for the entire proposal. All right, that was it for me. ### Conclusion/Resolution -- No official conclusion. +- No official conclusion. 
diff --git a/meetings/2021-01/jan-28.md b/meetings/2021-01/jan-28.md index 14181317..74fbec93 100644 --- a/meetings/2021-01/jan-28.md +++ b/meetings/2021-01/jan-28.md @@ -1,7 +1,8 @@ # 28 January, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Mathias Bynens | MB | Google | @@ -43,9 +44,10 @@ | Tantek Çelik | TEK | Mozilla | | Chengzhong Wu | CZW | Alibaba | ---- +----- ## RegExp set notation for stage 1 + Presenter: Mathias Bynens (MB) - [proposal](https://github.com/mathiasbynens/proposal-regexp-set-notation) @@ -63,7 +65,7 @@ MB: The second example we're using intersection do match spans of word or identi MB: the final example on this slide, we're matching non-script specific combining marks. So it's easy to match all combining marks, it's just a single property escape, but if you want those that are not specific to any scripts, with this proposal that would become an easy task, once again using intersection. So again, for more examples, feel free to look at the slides from last time or the repository which contains all of the examples as well. -MB: And there is one new bit of information and I wanted to share because if I read the room correctly there was a lot of positive sentiment when we presented this, but there were also two concerns that I want to address. The first one was about backwards compatibility and I wanted to make it very clear that it's explicitly a goal of this proposal to not break backwards compatibility. So concretely we don't want to change behavior of any regular expression pattern that's currently not an exception. And we think we can actually get there by doing the following two things. So first we're going to limit this functionality to just regular expressions with the Unicode flag, the `u` flag, enabled. 
And second we can also limit this new syntax to a new escape sequence such as for example, \UnicodeSet{...} and then the new syntax would only work the way we just suggested within those braces again. This is just an example we could use another identifier instead of `UnicodeSet` here, but the main idea is that this addresses all the backwards compatibility concerns because \U throws an exception in Unicode regular Expressions. it does not throw an exception in non Unicode mode, which is why we cannot support non-Unicode mode, but I think that's fine because we just require both the flag and we have this new syntax to kind of gate keep the new functionality which works out nicely because it also addresses another concern that was raised last time which is we were pondering the idea of maybe introducing a new regular expression flag for this functionality to avoid the backwards compatibility issue. With this approach we don't really need a new flag. We can just hide it behind this specific syntax, so that the new syntax only takes effect in that specific case and nowhere else. +MB: And there is one new bit of information and I wanted to share because if I read the room correctly there was a lot of positive sentiment when we presented this, but there were also two concerns that I want to address. The first one was about backwards compatibility and I wanted to make it very clear that it's explicitly a goal of this proposal to not break backwards compatibility. So concretely we don't want to change behavior of any regular expression pattern that's currently not an exception. And we think we can actually get there by doing the following two things. So first we're going to limit this functionality to just regular expressions with the Unicode flag, the `u` flag, enabled. And second we can also limit this new syntax to a new escape sequence such as for example, \UnicodeSet{...} and then the new syntax would only work the way we just suggested within those braces again. 
This is just an example; we could use another identifier instead of `UnicodeSet` here, but the main idea is that this addresses all the backwards compatibility concerns, because \U throws an exception in Unicode regular expressions. It does not throw an exception in non-Unicode mode, which is why we cannot support non-Unicode mode, but I think that's fine, because we just require both the flag and this new syntax to gatekeep the new functionality. This works out nicely because it also addresses another concern that was raised last time: we were pondering the idea of maybe introducing a new regular expression flag for this functionality to avoid the backwards compatibility issue. With this approach we don't really need a new flag. We can just hide it behind this specific syntax, so that the new syntax only takes effect in that specific case and nowhere else. MB: So that's what happened since the last presentation, and everything else is still the same. I also want to reiterate that there is a lot of precedent for this kind of functionality in other languages and regular expression flavors. We have this table in the repository and readme. Yeah, I couldn't even fit the whole thing on this slide. So if you're interested in more you can check it out there. Note that the last column there, symmetric difference, is not something we're pursuing for this proposal. We just included it in the table for completeness's sake. And so that's a high-level overview of what this proposal is about. I believe we meet all the stage one entrance criteria. All the information is in the repository itself — that's the source of truth. There's also of course this slide deck, which summarizes it, and the last slide deck, which contains the same information. With that we would like to ask for stage 1 so that we can move the repository to the TC39 organization on GitHub. And that's it. Any concerns with stage one?
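As a rough illustration of what the proposed set operations would simplify: intersection and subtraction of Unicode property escapes can already be approximated today with lookaheads in `u`-flag regexes, just far less readably than the proposed syntax (the `\UnicodeSet{...}` spelling above is still hypothetical and runs nowhere yet):

```javascript
// Intersection via lookahead: letters that are also in the Greek script.
const greekLetter = /(?=\p{Script=Greek})\p{L}/u;
greekLetter.test('π'); // true
greekLetter.test('q'); // false

// Subtraction via negative lookahead: decimal digits that are not ASCII.
const nonAsciiDigit = /(?!\p{ASCII})\p{Nd}/u;
nonAsciiDigit.test('٣'); // true (ARABIC-INDIC DIGIT THREE)
nonAsciiDigit.test('3'); // false
```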
@@ -97,7 +99,7 @@ MB: One part of the presentation I want to reiterate that maybe you missed is, w WH: I'm definitely not opposed to stage 1, but I don't like the concept that we can't express concerns about syntax in stage 1. -MB: Well, I think it certainly can be done and it's good feedback (both yours and Michael’s). I just think it should not necessarily block stage one, but it sounds like we agree on that. +MB: Well, I think it certainly can be done, and it's good feedback (both yours and Michael’s). I just think it should not necessarily block stage one, but it sounds like we agree on that. WH: Yes. @@ -105,16 +107,16 @@ BT: All right, the queue is now empty. Are you ready to decide on stage one? Any MB: All right. Thank you. - ### Conclusion/Resolution -Stage 1 +Stage 1 ## Revisiting RegExp escape + Presenter: Jordan Harband (JHD) -- [proposal]() -- [slides]() +- proposal +- slides JHD: I don't have a presentation here, and this isn't a concrete proposal. About 5 or 6 years ago, a proposal for `RegExp.escape` was brought to the committee. Essentially it's a function that takes a string and escapes common regex characters, and it's heavily used in userland. There are a number of modules that do this and they have hundreds of millions of downloads, or tens of millions of downloads or something like that, some absurdly high number. The response from the committee was that there was concern that if you did not know the context in which the input string was intended to be used, meaning inside a character class or, you know, a pattern and things like that, you could not accurately escape things. So the committee said: instead of this, let's look into a template tag function that takes an entire string that represents the entirety of a regex and escapes the interpolated parts; the tag function can know the context and do the correct interpolation or error checking or whatever. However, in the intervening time no proposal has been brought forth.
An npm package was created, but user feedback on that was that it's very confusing; nobody actually wanted that as a solution, and everybody continues to use the exact same solution that was originally presented, via the npm packages that represent it, which fully solve their problem with none of the concerns that the committee brought forth. So as a result, userland developers have paid a cost for 5 years now, because the committee thought that we had a better solution that they don't in fact think is better. Interest has been renewed in the proposal, but we are now facing the possibility that node and/or browsers will ship something like this if we fail to do so. Not a template tag function, but basically the exact same thing presented in the proposal. It was also pointed out to me that there's a function called CSS.escape, which is basically an identical concept with different semantics for CSS, obviously, and this also works really well for folks that need that functionality. So I wanted to have the conversation and see if there is in fact committee interest in revisiting this proposal, at which point I suppose we could give it stage 1 today and I could bring it back in the future, but I wasn't explicitly asking for that; I can bring it back in a future meeting and seek stage 1 at that time. But a number of the folks who had concerns the first time around are still in this room, and so I was hoping that any concerns about this proposal could be brought forth. @@ -132,9 +134,9 @@ DE: What injection vulnerability? MM: So the core idea behind injection vulnerabilities typically is a quoting confusion where, for example, somebody is accepting data and splicing the data into a surrounding context that's supposed to represent, typically, a program.
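A minimal sketch of the kind of `escape` function JHD describes userland packages providing, together with the context pitfall MM goes on to raise. The function name and the exact character set here are assumptions for illustration, not the proposal's specification:

```javascript
// Hypothetical userland-style escape: backslash-escape regex metacharacters.
function regExpEscape(s) {
  return s.replace(/[\\^$.*+?()[\]{}|]/g, '\\$&');
}

// Used as the whole pattern, the result matches literally:
new RegExp(regExpEscape('1+1=2')).test('1+1=2'); // true

// The context pitfall: the same output, spliced into a character class,
// is reinterpreted. '-' needs no escape at the top level...
const input = 'a-z';
const cls = new RegExp('[' + regExpEscape(input) + ']'); // becomes /[a-z]/
cls.test('m'); // true: a range, not the literal characters "a", "-", "z"
```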
The term injection vulnerability is almost exclusively used when the surrounding language is a Turing-universal programming language, but not necessarily. The intent of the splicing was that the data be considered, within the larger program, to be data, but the escaping was screwed up, such that the provider of the untrusted data, let's say, realizes that the quoting was screwed up in some manner such that they can inject logic into the language that's being spliced into. That's exactly the case with the regex literal: if you use the escaped regex badly, splicing it into a larger string to make a regex out of, then you can create an opportunity for the attacker providing the data to cause the regex itself to match something other than what you intended it to match. -??: So Dan is also asking like what is the specific injection vulnerability in this case, but I think you answered that as well. +??: So Dan is also asking what the specific injection vulnerability is in this case, but I think you answered that as well. -KG: I still do not understand Mark's point. So this specific proposal is to escape these sets of characters that have a special meaning in regular expressions, and there's perhaps some debate to be had about exactly which characters. I understand the injection vulnerability, and in fact, that is why I'm in favor of this proposal because this proposal makes it easier to sanitize user data for use in a regular expression and Mark seems to be opposed on the basis of this vulnerability and I don't understand what vulnerability he is pointing at. +KG: I still do not understand Mark's point. So this specific proposal is to escape these sets of characters that have a special meaning in regular expressions, and there's perhaps some debate to be had about exactly which characters.
I understand the injection vulnerability, and in fact, that is why I'm in favor of this proposal because this proposal makes it easier to sanitize user data for use in a regular expression and Mark seems to be opposed on the basis of this vulnerability and I don't understand what vulnerability he is pointing at. MM: If you take the resulting escaped string and you splice it into a larger string supposed to represent that supposed to represent a regex in the wrong place, in a context other than the one it was escaped for, then it will be interpreted by the regex in other than the way you intended. @@ -166,7 +168,7 @@ JHD: The yeah, the repo, I linked in the IRC channel is the same one that was pr BSH: OK thanks. -JRL: My topic specifically the complexity of the `RegExp.escape` function versus the `tag`. The tag is a considerably more complex function in order for users to understand what it's doing. They have to be familiar with complex syntax of tag functions. They have to be aware of the difference between the static content inside of the tag, and the dynamic data we're passing to the tag. So the the complexity for users to actually understand what is happening with `RegExp.tag` is considerably higher than the string input and string output of `RegExp.escape` and I think the reason so many people are reaching for `escape` is because it's simple to understand and it's simple to use,. So I don't think `tag` is the perfect answer here just because of the complexity for use versus `escape`. I think we can make `escape` safe no matter where it's used inside of a regex, the same idea that Michael just expressed. So it's just whether or not we can find that out and make it easy enough for users to actually use. +JRL: My topic specifically the complexity of the `RegExp.escape` function versus the `tag`. The tag is a considerably more complex function in order for users to understand what it's doing. They have to be familiar with complex syntax of tag functions. 
They have to be aware of the difference between the static content inside of the tag and the dynamic data we're passing to the tag. So the complexity for users to actually understand what is happening with `RegExp.tag` is considerably higher than the string-in, string-out of `RegExp.escape`, and I think the reason so many people are reaching for `escape` is because it's simple to understand and it's simple to use. So I don't think `tag` is the perfect answer here, just because of its complexity of use versus `escape`. I think we can make `escape` safe no matter where it's used inside of a regex, the same idea that Michael just expressed. So it's just whether or not we can figure that out and make it easy enough for users to actually use. JHX: I'm not sure why the tag solution is considered too complex. In my opinion the tag solution just gives you the regex. It's much easier to use for average programmers, and I agree with Mark that if you use escape, it's easier for average programmers to make mistakes and have security issues. @@ -174,17 +176,17 @@ JHD: So the original author of the proposal talked to a lot of users about the t MM: So I first want to agree that this is something we should look into. I think we're all agreed that, phrased as an area of investigation and showing that we're actively interested in solving the problem, I'm happy for it to go to stage one on that basis. I think that Mike Samuel's repository is at least as good a starting repository to repurpose for this purpose as the one that you mentioned. I would say it's better. And if you take a look at the first page of Mike Samuel's readme, it makes it very, very clear that writing the equivalent of RegExp.escape using the tag function is trivially straightforward. So it might be that the other tag function that JHD is referring to, which I'm not familiar with, was confusing for weird reasons, but Mike Samuel's is not.
I completely reiterate what Hax just said - the tag is straightforward to use. It's complex to build, and that's why it's important that it's built by experts. But once it's built, it's really trivially straightforward to use. -JHD: So we can talk about the repo stuff later. I mean, that one is a published among the like the package so it's not really a proposal, But certainly that is a very useful reference implementation and investigative tool that we should process. +JHD: So we can talk about the repo stuff later. I mean, that one is published as a package, so it's not really a proposal. But certainly that is a very useful reference implementation and investigative tool that we should use. -MM: I will want anything that gets labeled as the thing that went to stage one to not be biased in favor of regex escape over the tag. +MM: I will want anything that gets labeled as the thing that went to stage one to not be biased in favor of regex escape over the tag. JHD: Yeah, let's say that we decide today that this concept should be stage 1; before transferring the repo all the way to TC39, I would update the contents to frame it correctly as a problem investigation with possible solutions, of which the tag function is one. -MM: okay good. good. Thank you. +MM: Okay, good. Thank you. DE: Yeah, I'm happy this is coming back to committee because it seems like a really important problem. My intuition having heard about it before was that the template tag might be higher level and easier to use, but if from talking to developers we find that the escape method is easier to use, and we figure out a form of escaping that is not subject to injection attacks - and I'm still so curious to learn what injection attacks people are concerned about - then that seems like it could be a good option. This seems like it could be a subjective trade-off that we can use all kinds of evidence to get at. Thanks for pushing this forward.
-BFS:Let me see if I correctly understand what Mark Miller was suggesting about the injection attack. Let me see if I can express it as an example. If you have this Escape method and you pass a string to it what you get out of it if you take what you get out of it and immediately make it a regular expression with that then that's guaranteed to be safe because you've escaped everything. The problem is what if you take that and then you concatenate it with a bunch of other strings and pass that into - or you put it inside of a tag to a template literal and expand it in there and it fits there. And I think what Mark was trying to suggest is what if you expanded in the context where you have an open square bracket, and then you expand this context of this variable contents of this variable that's been escaped and then a closed square bracket. That clearly isn't doing what it was supposed to do, because in the larger regular expression you've stuck it into it's now trying to interpret this string is a set of a set of characters. I think that what Mark was saying is it would be good if we could somehow push users down the directions of you're less likely to make that mistake. Is that fair to say that's what you were going for Mark?
+BFS: Let me see if I correctly understand what Mark Miller was suggesting about the injection attack, and whether I can express it as an example. If you have this escape method and you pass a string to it, and you take what you get out and immediately make a regular expression from it, then that's guaranteed to be safe, because you've escaped everything. The problem is: what if you take that and then concatenate it with a bunch of other strings and pass that in - or you put it inside a tagged template literal and expand it in there? I think what Mark was trying to suggest is: what if you expand it in a context where you have an open square bracket, then the contents of this variable that's been escaped, and then a closed square bracket? That clearly isn't doing what it was supposed to do, because the larger regular expression you've stuck it into is now trying to interpret this string as a set of characters. I think what Mark was saying is that it would be good if we could somehow push users in directions where they're less likely to make that mistake. Is that fair to say - is that what you were going for, Mark?

MM: Yeah, let me confirm: that's exactly the kind of attack I'm talking about. What Kevin's asking for is a very concrete example where we show how the data gets misinterpreted when spliced into the larger string, which I don't think will be hard to construct, but I'm not prepared to construct it on the fly; what you're describing is exactly the form I'm concerned about. And I want to point out, this is why we invented tagged template literals. The primary use case was to introduce what used to be called quasi-parsers: the literal part could do context-sensitive escaping for the data in the substitution holes, so that we could be, across languages, safe against injection attacks. For the first language in which we're trying to introduce an escaping solution to avoid the general-purpose framework that we created for exactly this purpose seems ludicrous.

@@ -198,21 +200,22 @@ BT: Okay, so JHD is asking for a stage one approval for investigating this probl

MS: No objections, but JHD, are you willing to be the champion?

-JHD: Yes, I will be championing and anyone else who's willing to help is more than welcome to do so.
+JHD: Yes, I will be championing, and anyone else who's willing to help is more than welcome to do so.

BT: All right.
Sounds like we have stage one for this investigation. Thank you everybody.

### Conclusion/Resolution

-Stage 1
+Stage 1

## Index From End Syntax
+
Presenter: John Hax (JHX)

-- [proposal]()
-- [slides]()
+- proposal
+- slides

-JHX: Okay, this proposal is proposed to add a syntax carats of I it's a character one is which just means the last element of an array. The syntax is borrowed from C sharp 8, but keep the minimal. C#, ^1 will return a.indexof object,bBut this proposal try to keep it minimal. So it's only valid syntax in the square bracket. So it's roughly same as length minus i. The precise to manage his the length minus the number. You can use 1n in the bracket..
+JHX: Okay, this proposal proposes to add a syntax, caret-i (`^i`) - so `a[^1]` just means the last element of an array. The syntax is borrowed from C# 8, but kept minimal. In C#, `^1` returns an Index object, but this proposal tries to keep it minimal, so it's only valid syntax inside the square brackets. `a[^i]` is roughly the same as `a[a.length - i]`; the precise semantics is the length minus the number. You can also use `1n` in the brackets.

JHX: The motivation - I actually have two motivations: indexing, and reviving slice notation. This paragraph is copied from the original section of the proposal, asking for the ability to write `a[-1]` instead of `a[a.length - 1]`, because people from Ruby or Python really like syntax like that. But we can't have it in JavaScript, because negative indices already have semantics here: `a[-i]` accesses the string property "-i".
And on web compatibility: the syntax does not have the compatibility issues that the `at` method proposal might have. I put a report out yesterday; here's the code, you can try it yourself. The first one is about Sugar.js: Sugar has an `at` method, and it supports multiple elements - you can get an array back - and from the first version up to 1.3.9 they all have the problem. That means the same code, if you run it in Chrome Canary, which has the `at` method, will give you different results. Core-js is similar to Sugar in that it has the older `String.prototype.at` proposal, and in Chrome Canary it gives you two question marks.

-JHX: Oh. I think we maybe we're lucky to not get impact reports about these two cases, but there is no risk here. And so we already know there are at risk [for the at method], and especially the string case is subtle because the content of the string can come from the server or from the user inputs so maybe only some of your users will see the broken pages, maybe only the international users. Will have stuff that.
+JHX: I think maybe we're lucky not to have gotten breakage reports about these two cases, but there is no such risk here [for the syntax]. We already know there are risks [for the at method], and the string case especially is subtle, because the content of the string can come from the server or from user input, so maybe only some of your users will see the broken pages - maybe only the international users will hit that.

JHX: So this is the summary of the first part. I think the syntax has better ergonomics, it's much more general, and it has a much simpler argument range. I think this is very important, because if the value is calculated, a complex argument range makes it very easy to make a mistake when you cross into negative values; and the syntax does not have the negative zero edge case.
Basically, it follows the well-known `a[i]` form. And if the motivation is to solve just this problem - letting people index from the end directly - I think the syntax solution would be the best fit.

-JHX: The next part is about slice notation. This is current proposal which just the same as slice method but has syntax here. In the last meeting this proposal did not get stage one, there are many concerns here like whether is was to add a simple syntax here. Well, but think that the most important block is 2 is raised by Shu that they are inconsistent here, that the negative one actually mean differencing in these contexts. I agree this is a problem because actually it means the I here if he's an actively intervenes differencing. So if we replace the negative index with the character I the problem is gone. And it's actually a better solution, a better version of their current slice method because the negative 0 case in much worse in the slice method.
+JHX: The next part is about slice notation. The current proposal is just the same as the slice method, but with syntax. In the last meeting that proposal did not get stage one; there were many concerns, like whether it was worth adding syntax for this. But I think the most important blocker is the one raised by Shu: that they are inconsistent here - that `-1` actually means different things in these contexts. I agree this is a problem, because `-i` means different things depending on context. So if we replace the negative index with the caret form `^i`, the problem is gone. And it's actually a better solution - a better version of the current slice method - because the negative zero case is much worse in the slice method.

JHX: This is a simple example. We expected an empty array here, because it's "last n items" with n equal to zero, but actually you get [1, 2, 3]. It doesn't mean you'd literally write code like this; in most cases people just write code where they want the last "n" items.
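The negative-zero edge case can be shown concretely, together with a hypothetical helper mirroring the proposed length-minus-i semantics (`indexFromEnd` is illustrative only, not a proposed API):

```javascript
const a = [1, 2, 3];

// "Last n items" via negative indices breaks down at n = 0,
// because -0 === 0 and slice(0) copies the whole array.
const lastN = (arr, n) => arr.slice(-n);
console.log(lastN(a, 2)); // [2, 3]
console.log(lastN(a, 0)); // [1, 2, 3] - expected [], the -0 edge case

// A hypothetical helper with the proposed "length minus i" semantics:
// a[^1] would be the last element, with no negative-zero ambiguity.
const indexFromEnd = (arr, i) => arr[arr.length - i];
console.log(indexFromEnd(a, 1)); // 3
```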
@@ -238,7 +241,7 @@ JHX: I'll finish quickly. So yes, there are many discussions here and you can re

BT: Okay, so we have a couple of clarifying questions. Let's address the first two. Can you just clarify why you think this is mutually exclusive with, in some sense, the stage 3 `.at()` proposal? There will be no discussion of this - it's just to clarify why you made that statement.

-JHX: I'm not sure what the question is.
+JHX: I'm not sure what the question is.

BT: I think there is some concern that this is being presented as an alternative to the "at" proposal.

JHX: I think the problem here is that the original at proposal before w

BT: So, okay - I think we can say that this is not a proposal that includes not pursuing the stage three proposal; hopefully that satisfies those clarifying questions.

-LEO: I'm sorry just as a quick suggestion Hax. I really like this proposal, but I think it's really a bad take if you tried to compare it exclusively with other proposals here. I think it's a very interesting problem that we can discuss in this queue, but like trying to compare with slice notation and this is a bad take for this proposal itself. Like you already have a very nice argument for this proposal alone, and I think I'm supportive of it.
+LEO: I'm sorry - just a quick suggestion, Hax. I really like this proposal, but I think it's a bad take to compare it exclusively with other proposals here. It's a very interesting problem that we can discuss in this queue, but trying to compare it with slice notation is a bad take for this proposal itself. You already have a very nice argument for this proposal alone, and I think I'm supportive of it.

-BT: Okay, so we have Gus first.
+BT: Okay, so we have Gus first.
GCL: Yeah, I just wanted to say I think the problem space being presented here is definitely worth pursuing, but I think I and a lot of other people are nervous about the depth into which the specific solution was presented. Which is to say: in terms of stage 1 this seems reasonable, but the specific syntax and semantics would not be agreed to at this point. That's all.

@@ -262,7 +265,7 @@ LEO: I just said that I really like this proposal by itself. We can probably dis

JHD: Yeah, so I got a partial answer in IRC, but I want to hear your take on it, Hax. Given that `.at` exists and this proposal is only operating in a world in which that already exists, what exact problem is this solving? I'd love to hear that elaborated on.

-JHX: Yeah, it's a similar problem but in a better form because it did not have the negative zero edge case, and it do not use the negative index - actually do not use the concept of negative index because for example if you use the index syntax tax and and put in two and positive two apps, it doesn't return what you think. I would argue that negative zero, normally people will expect it. It should be length and it should return undefined, Not the first element.
+JHX: Yeah, it's a similar problem but in a better form, because it does not have the negative zero edge case, and it does not use negative indexing - it doesn't use the concept of a negative index at all. For example, if you pass the output of `indexOf` straight in, it doesn't return what you think. I would argue that for negative zero, people will normally expect it to mean the length, and it should return undefined, not the first element.

JHD: Okay, so just to be clear: you're saying that normal people will have an expectation at all about what `-0` would do, and you're also saying that you want to be able to use the output of `indexOf` directly and have it do the expected thing?

@@ -270,7 +273,7 @@ JHX: Yeah.
I mean, I mean the negative index is works in most cases, but have a

JHD: Okay, I guess I'll add a new queue topic about `-0`.

-RBN: C#'s negative index notation is slightly different they actually introduced an index type that has special handling for indexing into an object, So that actually becomes a value on its own, and while I'm not necessarily proposing that that this proposal introduces that because there's a whole mess of other complexity that gets involved there, it is useful for things like slice notation proposal. Asa matter of fact, I proposed this actually on the slice notation proposal, there is an issue where I've discussed the C# hat syntax for negative offsets. and Waldemar, yes C# does have `^` as an operator, but it's just like in ecmascript. It's an infix operator and C# allows it as a prefix operator for introducing a range type on its own.
+RBN: C#'s negative index notation is slightly different: they actually introduced an index type that has special handling for indexing into an object, so the index actually becomes a value on its own. While I'm not necessarily proposing that this proposal introduce that - there's a whole mess of other complexity that gets involved there - it is useful for things like the slice notation proposal. As a matter of fact, I proposed this on the slice notation proposal; there is an issue where I've discussed the C# hat syntax for negative offsets. And Waldemar, yes, C# does have `^` as an operator, but, just like in ECMAScript, it's an infix operator; C# additionally allows it as a prefix operator for introducing an index value on its own.

WH: Can you use prefix `^` anywhere in C#?

@@ -278,23 +281,23 @@ RBN: Yes, anywhere you could have an expression. Prefix `^` creates an index typ

BT: We have four minutes on this topic, and we really can't extend because we're already running up against time for this meeting.
So let's try to be super concise, and if you don't have stage 1 concerns, consider maybe dropping your topic. We may have to skip some.

-BFS: So whatever we go forward with length or whatever we want to call this. We need to be sure that it properly interoperates with the rest of how the object model Works, particularly Getters and Setters are my concern, you know returning on normal values for those we can't really do what does and return undefined if The goal is to kind of make this a special kind of property access I'm totally not comfortable with it. That's all.
+BFS: So, whatever we go forward with - length-based or whatever we want to call this - we need to be sure that it properly interoperates with the rest of how the object model works. Getters and setters in particular are my concern: what those return, and whether we can really just return undefined. If the goal is to make this a special kind of property access, I'm totally not comfortable with it. That's all.

-JHX: Yeah, I think as my first little slide sad if you use the glance of the real life, they use this abstract operation. So it's it work harder getter of the Nets.
+JHX: Yeah, I think, as my first slide said, it uses the existing abstract operation for the length, so it works with a getter for the length.

SYG: The first one we already touched upon in the clarification: the presentation was set up to make it seem like this is a mutually exclusive alternative to the at proposal, and I want to reiterate that that really can't be the case. In the interest of time we don't need to go more into that. On the technical parts, my main concern for this kind of syntax - and I'll expand this item to also include what I consider a stage one concern - is that the syntax as presented right now is non-composable in the way that Waldemar highlighted.
In C# it's a unary operator that produces a new type of thing - it produces a new type of value that says "I am an index from the end". Whereas here we're introducing something that looks like a unary operator but is in fact fused with the brackets, which also suggests some kind of property key, but it is not. So there are a lot of difficulties in designing this syntax to be more composable. For stage 1: if the scope of this proposal remains just this kind of non-composable syntax for from-the-end indexing, I would have serious concerns with stage one. But given the utility of something like this for array and array-like accesses, the fact that users want slice notation as well, and this proposal calling out slice notation, I would really like to see this proposal expanded to include slice notation, so that there could be one proposal with a unified syntax for both slice and this syntactic approach to indexing from the end. And I would urge Hax to work with Sathya, the current champion of the slice notation proposal, which is not abandoned. He said "revive", but I don't think the slice proposal has been abandoned; it just hasn't been updated since it was presented.

-BT: So to go directly to the stage 1 discussion, it sounds like Shu you would be against advancing system to stage one unless scope is expanded to include slice notation, which I think practically would mean merging this proposal effectively or championing some sliced like Proposal with Sasha's that you're thinking true.
+BT: So, to go directly to the stage 1 discussion: it sounds like, Shu, you would be against advancing this to stage one unless the scope is expanded to include slice notation, which I think practically would mean effectively merging this proposal into, or co-championing, some slice-like proposal with Sathya. Is that what you're thinking?

-SYG: Yes that captures it accurately. I think if we are to meet the syntax bar of adding a new kind of indexing syntax. We should get ahead of known use cases, which slice is point packs.
+SYG: Yes, that captures it accurately. I think if we are to meet the syntax bar for adding a new kind of indexing syntax, we should get ahead of known use cases, which slice notation speaks to.

JHX: I think I plan to co-champion the slice notation proposal.

-BT: So essentially we don't need to advance this proposal separately to stage one and we can try to address this syntactic index from end problem in the context of the existing stage one proposal for slice notation.
+BT: So essentially we don't need to advance this proposal separately to stage one, and we can try to address this syntactic index-from-end problem in the context of the existing stage one proposal for slice notation.

JHX: Should I do that?

-BT: I would say, Hax, work with Sathya on that. This like a useful exploration and documentation of yet another facet of this problem space. So it seems like it would be a good document to store and that repo. but I think the takeaway from the committee is, you know, talk with Sathya and figure out you know what to do with this document where to check it in and then how to address. This is the takeaway from a committee.
+BT: I would say, Hax, work with Sathya on that. This is a useful exploration and documentation of yet another facet of this problem space, so it seems like it would be a good document to store in that repo. But I think the takeaway from the committee is: talk with Sathya, figure out what to do with this document and where to check it in, and then how to proceed. That's the takeaway from the committee.

WH: Brian, you haven't gone through all the comments. Your summary of the “takeaway from the committee” is inaccurate because not everyone has gotten a chance to speak [there are still something like five people left to go on the queue].

@@ -305,16 +308,17 @@ WH: Yes.
I'm also saying you should not present what you said as the “takeaway from the committee”.

BT: Yes, there are other interesting viewpoints that should be gathered as part of the slice proposal. Okay - I think we have to move on unless there are other really important points here. Hax, I would recommend taking a screenshot of the queue; some of it you might want to follow up on offline.

JHX: Okay, if there are any concerns, please raise an issue in the repo. Thank you.
+
### Conclusion/Resolution

-Not advancing on its own, Hax to talk to Sathya about advancing this as part of the slice syntax proposal.
+Not advancing on its own; Hax to talk to Sathya about advancing this as part of the slice syntax proposal.

## Array find from last for stage 1

-Presenter: Wenlu Wang (KWL)
-- [proposal]()
-- [slides]()
+Presenter: Wenlu Wang (KWL)
+- proposal
+- slides

KWL: Hello everyone. I'm Wenlu Wang from Microsoft; I have some experience implementing proposals in TypeScript. I will talk about the draft proposal: array find from the last. The proposal will add two methods to the prototypes of both Array and TypedArray, to allow us to find elements, or their index, in reverse order using a customized callback. This proposal tries to address two major concerns: semantics and performance. For semantics, we want it to be simple and clear. We have indexOf and lastIndexOf, but they can only compare by value. We also have find and findIndex, but they iterate from the first element to the last; we need something that iterates from the last to the first, with the ability to compare using our own customized callback. For performance: if we want to find an element from the last, we can reverse the array before finding the index, but the reverse method is not immutable, which means we have to clone the array first; that might be an overhead when the array contains a lot of elements. And it's a bit more complicated to find the index, because the array is reversed, and special attention is needed to handle the -1 not-found case; that will need some calculation as a result.

@@ -324,9 +328,9 @@ KWL: There are also some similar ways to do that. Why don't we use reverse? We h

BT: Great. We've got a couple questions on the queue.

-TAB: Well, yeah, but one is just the naming comment that just came up when referring to the direction - the string functions use "end", not last. I know that there is a precedent with like lastIndexOf using last but that's talking about like the index layer but this function the way it's used at least refers to the direction and we should be consistent.
+TAB: Well, one is just the naming comment that came up: when referring to the direction, the string functions use "end", not "last". I know there is precedent with lastIndexOf using "last", but that's talking about the index, whereas this function, the way it's used, refers to the direction, and we should be consistent.

-KG: Yeah, I disagree. I think that last is the obvious president. I don't think that the string precedents make it clear that they're talking about a direction. padEnd is not talking about a direction. It's talking about the end of the string. I think lastIndexOf is the obvious precedent. Also like five different people have come up with this proposal over the past like two decades and every time someone has come up with this proposal they have used "last". So I think that is the obvious name and we should go with that.
+KG: Yeah, I disagree. I think "last" is the obvious precedent. I don't think the string precedents make it clear that they're talking about a direction: padEnd is not talking about a direction, it's talking about the end of the string. I think lastIndexOf is the obvious precedent.
Also, like five different people have come up with this proposal over the past two decades, and every time someone has come up with this proposal they have used "last". So I think that is the obvious name and we should go with that.

SYG: I just wanted to agree with Kevin that lastIndexOf is the obvious precedent. I find the proposal reasonable and I like it; the symmetry point with indexOf was compelling. But unfortunately, with the obvious names I think we might have an uphill battle on web compatibility, so I hope you are prepared for rounds of possible renaming.

@@ -347,32 +351,33 @@ BT: Any objections to stage one? [no] Thank you and good luck with naming.

KWL: Thank you.

### Conclusion/Resolution

-Stage 1
+Stage 1

## Defer module import eval
+
Presenter: Yulia Startsev (YSV)

- [proposal](https://github.com/codehag/proposal-defer-import-eval)
-- [slides](https://docs.google.com/presentation/d/17NsxHzAC2RlP5rB3wrns9O2Z-NduSpcm2_GOVo2TnKE/edit#slide=id.p)
+- [slides](https://docs.google.com/presentation/d/17NsxHzAC2RlP5rB3wrns9O2Z-NduSpcm2_GOVo2TnKE/edit#slide=id.p)

-YSV: Hi. My name is Yulia Startsev, I work at Mozilla and I want to open this presentation with a bit of a joke. It's the most famous Canadian aphorism. An aphorism is a saying. If you're not from North America, you might not be familiar, but there's one that's really famous American one: "as American as American Pie", which is roughly “as American as can be”. The Canadian version of this saying "as Canadian is possible given the circumstances". That is the starting point of this proposal. So I'm going to change that aphorism a little bit to be "as performant as possible given the circumstances". That is sort of a one-sentence sum up of what I'm hoping to do or what I'm hoping to investigate with this.
+YSV: Hi. My name is Yulia Startsev, I work at Mozilla, and I want to open this presentation with a bit of a joke: the most famous Canadian aphorism. (An aphorism is a saying; if you're not from North America, you might not be familiar.) There's a really famous American one, "as American as apple pie", which roughly means "as American as can be". The Canadian version of this saying is "as Canadian as possible, given the circumstances". That is the starting point of this proposal. So I'm going to change that aphorism a little bit, to "as performant as possible, given the circumstances". That is sort of a one-sentence summary of what I'm hoping to do, or hoping to investigate, with this.
We are a young startup and it's all working great, but our codebase grows and performance becomes an issue. So now we take a look at this and think: oh, we're not really using someMethod very often, so let's dynamically import that. So we create ourselves a way to lazify `someMethod` - there are different ways you can do this - and now we've got what we want. I should take a step back and say we're explicitly looking at startup performance here: we want the application to start faster. We create this lazy method, we use dynamic import, but this has implications for the code that uses this method. Every function that relies on something from this lazified someMethod now becomes async and requires awaits in order to retain the same behavior - and the behavior isn't quite exactly the same as it was before, because we're now working with everything being promisified, which may or may not be an issue when it comes to data races or something. Similarly, let's say we're a new programmer coming to the project. It's been a few years since this performance work was done, and we're trying to do some other kind of fine-tuning, or trying to fix a bug, and we're investigating all the way up the code base. One thing that might be asked is: is this a meaningful await? Because when we're awaiting stuff, it's usually because there's been some kind of fork, where we have different blocks of execution happening in an async function. But the purpose of this change that we made, making everything async, was actually a performance fine-tuning. It wasn't a change to the basic semantics of the program, though it may have had such effects.

-YSV: The question that I'm posing is, is there an alternative can we do something for this case that might work better here than what we currently Have? And I think maybe there is. That's what I want to propose.
So a high level way to look at this is our goal is to improve startup performance without sacrificing readability. Now startup performance, I think is pretty clear. We're doing the same amount of work, but we're spreading it out in into different places where that work can be done without necessarily undermining the user's experience for the user's expectations. Readability is a little more subtle. I'm using readability here in a much more general sense. We're talking about readability where you read the code and understand it. Of course, this is one aspect here where we are taking code that was initially sync and we're making it async for this specific performance tuning, but also we're talking about maintainability. We're talking about the cost for doing such a refactoring and other related issues to work around applying such a performance tuning to a code base.
+YSV: The question that I'm posing is: is there an alternative - can we do something for this case that might work better than what we currently have? I think maybe there is; that's what I want to propose. So, a high-level way to look at this: our goal is to improve startup performance without sacrificing readability. Startup performance, I think, is pretty clear - we're doing the same amount of work, but spreading it out into different places where it can be done without undermining the user's experience or expectations. Readability is a little more subtle; I'm using it here in a much more general sense. One aspect is reading the code and understanding it - of course, here we are taking code that was initially sync and making it async for a specific performance tuning - but we're also talking about maintainability: the cost of doing such a refactoring, and other related issues around applying such a performance tuning to a code base.
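The lazify refactor described in the example might look something like the following sketch. `my-module` and `someMethod` are the hypothetical names from the slides, and `loadModule` stands in for `import("my-module")` so the sketch is self-contained:

```javascript
// Stand-in for `import("my-module")`; in real code this would be a
// dynamic import of the hypothetical module from the slides.
const loadModule = async () => ({ someMethod: (x) => x * 2 });

// Memoize the load so the module is fetched and evaluated at most once,
// deferred until the first call.
let modulePromise;
async function lazySomeMethod(...args) {
  modulePromise ??= loadModule();
  const { someMethod } = await modulePromise;
  return someMethod(...args);
}

// The cost being described: every caller up the stack is forced to
// become async and sprinkle awaits, even though the logic is unchanged.
async function rarelyUsedCode() {
  return await lazySomeMethod(21); // 42
}
```

This is exactly the "is this a meaningful await?" problem: the awaits here exist only for a performance tuning, not because the program logic forked.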
-YSV: Okay, so to take a look at what might be possible in terms of this work. We're talking about delaying the work of the module. And the question naturally will be, when? Whe do we start deferring the work? How do we start differing that work? The first place which would be the most obvious place to look is going to be before load. We have that functionality. We have it in the form of dynamic import now one might ask. Well. Why can't we make Dynamic Imports sync the answer there is we would break run to completion semantics. We would break a lot of stuff on the web - basically it's a terrible idea and we're never going to do that. What do we do at a parse? Well, actually the reason we don't want to do it before parse is the same reason why we don't want to do it before load because parsing builds the module graph and we need to load all of the modules in order to know what that module load graph is in fact on the system that I've been looking at. One of the problems is that we don't do the load and parse step. We don't have the full module graph and we're doing this asynchronous work and it hasn't been a very fun time. 
+YSV: Okay, so to take a look at what might be possible in terms of this work. We're talking about delaying the work of the module. And the question naturally will be: when do we start deferring the work? How do we start deferring that work? The first place, which would be the most obvious place to look, is going to be before load. We have that functionality in the form of dynamic import. Now one might ask: well, why can't we make dynamic imports sync? The answer there is that we would break run-to-completion semantics. We would break a lot of stuff on the web - basically it's a terrible idea and we're never going to do that. What about before parse? Well, actually the reason we don't want to do it before parse is the same reason why we don't want to do it before load: parsing builds the module graph, and we need to load all of the modules in order to know what that module graph is. In fact, on the system that I've been looking at, one of the problems is that we don't do the load and parse step, we don't have the full module graph, and we're doing this asynchronous work, and it hasn't been a very fun time.

YSV: So that leaves us with before evaluate and this is what we are going to focus on in the context of this proposal. So the proposed API, I tried two different forms of it. We've got one using import attributes which were discussed with the import assertions, but haven't been formally proposed yet, and the other one you'll see below with adding a new keyword. I want to ignore the new keyword for now because I think import attributes together with import assertions helps tell the story better, but I'm not actually very opinionated about what the syntax here will be.

-YSV: So the proposed semantics we're going to load and parse all the modules, the children of a deferred module are treated as part of this deferred graph. So we're going to have our regular module graph and we're introducing a new concept of a deferred graph. The interaction, I'll go into that in a little bit more depth in the next couple of slides, but if a child is eagerly loaded it's treated the same way as it is currently treated within our module loading semantics. The thing that's deferred is evaluation. We only evaluate on first use and you see an example here where a method is being evaluated at first use. 
+YSV: So the proposed semantics: we're going to load and parse all the modules, and the children of a deferred module are treated as part of this deferred graph. So we're going to have our regular module graph, and we're introducing a new concept of a deferred graph. The interaction - I'll go into that in a little bit more depth in the next couple of slides - but if a child is eagerly loaded, it's treated the same way as it is currently treated within our module loading semantics. The thing that's deferred is evaluation. We only evaluate on first use, and you see an example here where a method is being evaluated at first use.

-YSV: Okay. So let's quickly go through what this looks like in terms of the module graph. Here's our simplified module graph. We're not looking at any Cycles or anything like this and nothing is lazy. This is your regular standard module graph. So let's turn one of those edges lazy. What happens to the rest of the graph? Now, please note that the edge that is pointing to a module that has an eager edge like a regular eager edge to its child. What happens is we end up with a lazy sub graph and that lazy sub graph has these eagerly loaded modules. What will happen here is, we've got an invariant right now in how module loading works. So the invariant is that a given child of a parent will finish evaluating before the parent finishes evaluating. This is maintained with Top level await and it's something that is so far true Always. How does this interact? Well what this is introducing is its introducing a new concept. What happens here is that the lazy sub graph will parse and it will load before the parent module completes, but it will not evaluate until the parent module calls it for the first time. However, all of its children, which are eager, will follow the same rules that are set up with our current module loading scheme. so, the children of the lazy parent will - the eager Children of the The Lazy parent will still evaluate before the lazy parent finishes. So yeah, basically this slide. And if we have two interacting graphs where we have a shared module, which is used by an eager graph and also used by a lazy graph, even if the edge is lazy that eager edge will still be loaded eagerly. And finally if we have a lazy subgraph of a lazy graph, it all works recursively. The lazy subgraph will not be run until it's called by its parent. So it all - that's basically my thinking of how this will be shaped. 
+YSV: Okay. So let's quickly go through what this looks like in terms of the module graph. Here's our simplified module graph. We're not looking at any cycles or anything like this, and nothing is lazy. This is your regular standard module graph. So let's turn one of those edges lazy. What happens to the rest of the graph? Now, please note the edge that is pointing to a module that has a regular eager edge to its child. What happens is we end up with a lazy subgraph, and that lazy subgraph has these eagerly loaded modules. We've got an invariant right now in how module loading works: a given child of a parent will finish evaluating before the parent finishes evaluating. This is maintained with top-level await, and it's something that is so far always true. How does this interact? Well, what this is introducing is a new concept. The lazy subgraph will parse and it will load before the parent module completes, but it will not evaluate until the parent module calls it for the first time. However, all of its children which are eager will follow the same rules that are set up with our current module loading scheme. So the eager children of the lazy parent will still evaluate before the lazy parent finishes. And if we have two interacting graphs with a shared module, which is used by an eager graph and also used by a lazy graph, even if one edge is lazy that shared module will still be loaded eagerly. And finally, if we have a lazy subgraph of a lazy graph, it all works recursively. The lazy subgraph will not be run until it's called by its parent. That's basically my thinking of how this will be shaped.

-YSV: Okay, so there are a couple of known issues here and I want to go through them and discuss them a bit. The very first is top-level await. You'll notice that we have this data module dot JS, and we've got main dot JS. Data module.JS is doing a top-level await to fetch some data, some JSON somewhere but inside of the main JS are used data function is sync. Think this is this would have worked before if we had done the work upfront and evaluated data-module.js eagerly, but now it's lazy. So what's going to happen? Well, we can do a couple of different things. We can choose to say that you are not allowed to have async modules and can do that with an assertion. We can say during parsing that this is a synchronous edge that leads to a synchronous graph and all modules in that subgraph are synchronous. Using this method, this case of an async module executing in the context of a sync function will be made impossible. 
+YSV: Okay, so there are a couple of known issues here and I want to go through them and discuss them a bit. The very first is top-level await. You'll notice that we have this data-module.js, and we've got main.js. data-module.js is doing a top-level await to fetch some data, some JSON somewhere, but inside of main.js our useData function is sync. This would have worked before if we had done the work upfront and evaluated data-module.js eagerly, but now it's lazy. So what's going to happen? Well, we can do a couple of different things. We can choose to say that you are not allowed to have async modules, and we can do that with an assertion. We can say during parsing that this is a synchronous edge that leads to a synchronous graph, and all modules in that subgraph are synchronous. Using this method, this case of an async module executing in the context of a sync function will be made impossible.
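The evaluate-on-first-use semantics described above can be emulated in userland, which may help make the proposal concrete. This is only an illustrative sketch: `defineModule` and `lazyNamespace` are hypothetical names invented for this example, not the proposed API, and real module records are of course not plain objects behind a Proxy.

```javascript
// Userland sketch of "evaluate on first use" (hypothetical helper names,
// not the proposal's syntax). A module registers an init function at
// load/parse time; evaluation is deferred until the first property access.
const registry = new Map(); // id -> { init, namespace }

function defineModule(id, init) {
  registry.set(id, { init, namespace: null });
}

function lazyNamespace(id) {
  return new Proxy({}, {
    get(_target, key) {
      const entry = registry.get(id);
      if (entry.namespace === null) {
        // Deferred evaluation happens here, on first use.
        entry.namespace = entry.init();
      }
      return entry.namespace[key];
    },
  });
}

// "data-module" is registered (loaded/parsed) eagerly, evaluated lazily.
let evaluated = false;
defineModule("data-module", () => {
  evaluated = true; // side effects of module evaluation run here
  return { getData: () => [1, 2, 3] };
});

const data = lazyNamespace("data-module");
console.log(evaluated);      // false: registered, but not yet evaluated
console.log(data.getData()); // [ 1, 2, 3 ] - first use triggers evaluation
console.log(evaluated);      // true
```

Note that the Proxy's `get` trap is exactly the "side-effecting variable reference" that BFS and JHD object to below: reading a binding runs arbitrary evaluation.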
YSV: Alternatively we can do something else, and I've spoken with a few people already and this is where this idea comes from. Another company has, in their custom loader, implemented a solution. The way that they solve this problem is that they simply treat async modules as eager. So if you have a lazy graph that pulls in an async module lazily, it's just treated as eager. We ignore any “lazy” or lazily eager edge for an async module - it's just initialized eagerly.

@@ -386,7 +391,7 @@ YSV: On the frontend, this is referred to as code-splitting, and I want to conte

YSV: To illustrate this, I want to raise some interesting work that's been happening in react which is the suspense component. This example illustrates that these two technologies have a place side by side both in for example cli based application code and client side code. React has a concept of hooks, which works very similar to what I showed you before in vue. It also has this concept of “suspense” and “lazy”. This is an experimental piece of technology that's in React, I think since 2018, they've been working on it for a long time. While it looks a lot like what we saw before, it works fundamentally differently. One of the things that it achieves is an assurance that race conditions do not occur. It forces synchronous async.

-YSV: On the next slide, we have a simplification of the technique. If you look into the react code base they do something similar, but takes much more time to show since it is spread over the oebase. So here's the same code that was on the slide. we have this getUserName and we're doing a JSON.parse here. Now. This should raise lots of eyebrows in the committee. "fetchTextSync". What's this doing? It relies actually on this Promise down here at the end of the program and it's calling this function fetchTextSync, which is doing a couple of things. 
+YSV: On the next slide, we have a simplification of the technique. If you look into the React code base they do something similar, but it takes much more time to show since it is spread over the code base. So here's the same code that was on the slide: we have this getUserName and we're doing a JSON.parse here. Now, this should raise lots of eyebrows in the committee. "fetchTextSync" - what's this doing? It actually relies on this promise down here at the end of the program, and it's calling this function fetchTextSync, which is doing a couple of things.

YSV: We have a couple of checks. We've got a couple of maps that are keeping some data for us if a cache has a given URL we just return the cache and then it gets really fun if we have a pending URL. We Throw it. What are we throwing? So then we create a promise and when the promise resolves we delete the pending entry and we set the cached entry. We fire that off, we fire and forget it and then we've got pending.set. We set an entry in the pending map and then we'll throw the promise. Okay, this is absolutely creative. I think it's very interesting. Here we have an infinite Loop of trying to run this infinite loop. That's where your looked at earlier and we are repeatedly throwing the task which is this code we, are running it multiple times until we get the promise that we wanted. So we it's very interesting. I really like this. This is super creative. I didn't know JavaScript had co-routines! (sort of) Rather this is termed algebraic effects but that’s a detail.

@@ -402,17 +407,17 @@ YSV: Yeah, fantastic. That would be superb.

BFS: Somewhat in a similar vein to Mark. If you were to import star it doesn't seem like you actually need to perform parsing and linkage at the time. The reason why linkage is being done is because of a choice to have these effectful bindings. I think being able to completely avoid parsing is useful on its own and so if we go forward, I think you could achieve that.

-YSV: That's fantastic. Thank you. 
+YSV: That's fantastic. Thank you.
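The "throw a promise" technique YSV walks through above can be condensed into a small self-contained sketch. This is a simplification in the spirit of that slide, not React's actual implementation (which, as noted, is spread across its code base); `fakeFetch` and `runTask` are stand-ins invented here so the example runs without a network.

```javascript
// Sketch of the caches-plus-thrown-promise pattern described above.
const cache = new Map();   // url -> resolved text
const pending = new Map(); // url -> in-flight promise

// Stand-in for a real network fetch, so the sketch is self-contained.
const fakeFetch = (url) => Promise.resolve('{"user":"yulia"}');

function fetchTextSync(url) {
  if (cache.has(url)) return cache.get(url);    // resolved: just return it
  if (pending.has(url)) throw pending.get(url); // in flight: throw it again
  const promise = fakeFetch(url).then((text) => {
    pending.delete(url); // delete the pending entry...
    cache.set(url, text); // ...and set the cached entry
  });
  pending.set(url, promise); // fire and forget
  throw promise; // the runner catches this and retries once it settles
}

function getUserName() {
  return JSON.parse(fetchTextSync("/user.json")).user;
}

// The loop mentioned above: re-run the task each time a thrown promise
// settles, until the task completes without throwing one.
async function runTask(task) {
  for (;;) {
    try {
      return task();
    } catch (maybePromise) {
      if (typeof maybePromise.then !== "function") throw maybePromise;
      await maybePromise; // wait, then re-run against the warm cache
    }
  }
}

runTask(getUserName).then((name) => console.log(name)); // "yulia"
```

The retry loop is what makes the sync-looking `getUserName` safe: every re-run either hits the cache or throws the same pending promise, so there is no window for the race conditions suspense is designed to rule out.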
-GCL: This proposal, the problem space seems legitimate. But I don't think it would be appropriate to move forward with anything that discourages the use of top-level await like it's a first-class language feature and I think any solution that comes from this should be 100% compatible with that. As the proposal is written currently I don't think it would work, but I'm sure there is something clever that could happen in the future. 
+GCL: This proposal, the problem space seems legitimate. But I don't think it would be appropriate to move forward with anything that discourages the use of top-level await; it's a first-class language feature and I think any solution that comes from this should be 100% compatible with it. As the proposal is written currently I don't think it would work, but I'm sure there is something clever that could happen in the future.

-YSV: So I'm just going to go back to this slide here. Which I think would be the right solution here. So I did I did start off by saying that we could throw if there's async we could enforce the things are sync, but I think this is the right solution that we treat async modules as eager, so they'll still follow the rule, they'll still fulfill the invariant that children are executed ahead of their parents. And I think that this is actually a very elegant solution to exactly that problem. 
+YSV: So I'm just going to go back to this slide here, which I think shows the right solution. I did start off by saying that we could throw if there's async, that we could enforce that things are sync, but I think the right solution is that we treat async modules as eager. They'll still follow the rule, they'll still fulfill the invariant that children are executed ahead of their parents. And I think that this is actually a very elegant solution to exactly that problem.

GCL: Yeah. That's definitely one approach. I just wanted to make sure that this was a very explicit concern.

YSV: Okay, very good. I completely agree with that. Happy to do that.

-BFS: So the way the proposal is written it has these kind of effectful bindings, which, we do have effectful bindings on the Global actually in various environments, but it also entangles the bindings which I'm kind of uncomfortable with, so accessing one binding does populate a different binding. That's - I don't have a solution. I'm just uncomfortable, I'm not going to block or anything. 
+BFS: So the way the proposal is written it has these kind of effectful bindings - we do have effectful bindings on the global, actually, in various environments - but it also entangles the bindings, which I'm kind of uncomfortable with: accessing one binding does populate a different binding. That's - I don't have a solution. I'm just uncomfortable; I'm not going to block or anything.

YSV: I acknowledge that this is an issue that I don't have a good solution for. I mostly just have a stubborn position on that. I think that we might want to still do this in spite of it, but if we can come up with a solution here then I'm very very open to exploring that.

@@ -420,7 +425,7 @@ SYG: One thing that Chrome engineering has explored internally without changing

YSV: I'm happy to consider other locations. This is sort of a first stab about where we can where we can put the defer point. If you have suggestions - so we don't have this optimization. We do have a lazy mechanism within the browser, but it's quite old. It's attached to our old module system and it's not something that we want to replicate on the web.

-SYG: Yes. I remember that mechanism, but the lazy linking stuff is not productionized. It is an optimization people were thinking about, so I'll connect you and the folks were thinking about optimizing ESMs on the Chrome side, in particular about laziness, and see if there's anything interesting there. 
+SYG: Yes. I remember that mechanism, but the lazy linking stuff is not productionized.
It is an optimization people were thinking about, so I'll connect you with the folks who were thinking about optimizing ESMs on the Chrome side, in particular about laziness, and see if there's anything interesting there.

YSV: That would be great. That would be awesome.

@@ -430,9 +435,9 @@ YSV: as I understand the problem of doing this lazy evaluate you could do things

SYG: Cool. Okay, so that matches my understanding of the problem. That problem seems pretty fundamental to any kind of deferred evaluation, unless we do something extremely heavy weight like, you know purity annotations or something, which I don't see as realistic. So I just want to get your take on - right, so it sounds like I got my answer and your take on the this problems is, since it is fundamental we need to the those kind of Bindings that have these knock on evaluation side effects very explicit. And that's your solution. Not any. Not something like Purity annotations.

-YSV: Yes, so I actually considered Purity annotations and I discussed it with a few colleagues the problem with Purity annotations is that they'd become like they limit what the module would be able to do and that becomes very unwieldy and will encourage certain kinds of patterns that we may not actually want to see I think that so the there has been research done on this such as parallel JS, which colleague brought up to me, and I need to look into that in depth, but I don't think that that's the right solution here because it would make lazy modules fundamentally different from eager modules, and I don't think we should do that. 
+YSV: Yes, so I actually considered purity annotations and discussed them with a few colleagues. The problem with purity annotations is that they limit what the module would be able to do, and that becomes very unwieldy and will encourage certain kinds of patterns that we may not actually want to see. There has been research done on this, such as ParallelJS, which a colleague brought up to me, and I need to look into that in depth, but I don't think that it's the right solution here, because it would make lazy modules fundamentally different from eager modules, and I don't think we should do that.

-RPR: Solving this problem is critical for seeing real use of ES modules in production. As a reminder to everyone, even though ES modules are used widely as an authoring format it is still quite rare to see them used as a runtime format in production. I think that lack of this feature, or this being an unsolved problem is a reason.
In the Bloomberg Terminal we heavily rely on a similar technique for doing synchronous just-in-time loading of modules and it's because we've seen the exact problem that Yulia described at the start, where you build a large code base with lots of eager static dependencies and then you discover "Oh, no, we need to speed it up" and you need to get rid of that evaluation time. The evaluation time is the key thing to eliminate. So I support this proposal. 
+RPR: Solving this problem is critical for seeing real use of ES modules in production. As a reminder to everyone, even though ES modules are used widely as an authoring format, it is still quite rare to see them used as a runtime format in production. I think that the lack of this feature, or this being an unsolved problem, is a reason. In the Bloomberg Terminal we heavily rely on a similar technique for doing synchronous just-in-time loading of modules, because we've seen the exact problem that Yulia described at the start, where you build a large code base with lots of eager static dependencies and then you discover "Oh, no, we need to speed it up" and you need to get rid of that evaluation time. The evaluation time is the key thing to eliminate. So I support this proposal.

JHD: I support stage 1 solving the problem, but I wanted to be quite clear. I am horrified by the prospect of side-effecting variable references. So I can't imagine how I could be convinced to support stage 2 if that was part of the proposal.

@@ -442,31 +447,30 @@ MM: I want to emphasize a qualifier that's like JHD’s qualifier, which is stag

BT: All right. Thank you Mark.

-
-
### Conclusion/Resolution
-Stage 1 
+Stage 1

## Intl LocaleMatcher for Stage 1
+
Presenter: Shane Carr(SFC)
-- [proposal]()
-- [slides]()
+- proposal
+- slides

-SFC: I'll be filling in for Long here on this presentation button to Locale matcher long as a delegate who is a member of the task group 2 and is unable today because of personal reasons to give this presentation. So I'll fill in for them to present LocaleMatcher. 
+SFC: I'll be filling in for Long here on this presentation on LocaleMatcher. Long is a delegate who is a member of Task Group 2 and is unable to give this presentation today for personal reasons, so I'll fill in for them to present LocaleMatcher.

-SFC: This is a this is a very popularly requested feature in the in the insole specification as a way to resolve give given a set of languages that they that the user understands which is typically the accepts language header and a list of languages that the application supports resolving those two match to a Locale that you should actually display to the user. user. You can see some of the some of this written here and as motivation, we currently support local matching in ICU in Intl as but it's a lower level feature. That is sort of transparently part of number format and date-time format and the other formatters and the desire of Long's proposal here is to surface that so it can also be used for selecting from translation resources and Things like that. So this is a very early proposal of what this API could look like. We could for example have an array of requested locales and available locales and then it would return a new string with some options. And then there's several options for what the algorithm could be. long has cited several prior Arts here including in particular the one I'm most familiar with which is UTS35 language matching. and this is an example call site. So I do want to highlight a couple potential issues with this there. I believe ZB also on the call, he can attest to this a little bit but there's not a universally accepted algorithm for how to do language matching. There are some popular ones that are down here, but the exact algorithm I think is sort of up for debate and there's some other issues with that. 
+SFC: This is a very popularly requested feature in the Intl specification: a way to resolve, given a set of languages that the user understands (which is typically the Accept-Language header) and a list of languages that the application supports, which locale you should actually display to the user. You can see some of this written here as motivation. We currently support locale matching in ICU and in Intl, but it's a lower level feature that is sort of transparently part of NumberFormat and DateTimeFormat and the other formatters, and the desire of Long's proposal here is to surface that so it can also be used for selecting from translation resources and things like that. So this is a very early proposal of what this API could look like. We could, for example, have an array of requested locales and available locales, and then it would return a new string, with some options. And then there are several options for what the algorithm could be. Long has cited several pieces of prior art here, including in particular the one I'm most familiar with, which is UTS 35 language matching, and this is an example call site. So I do want to highlight a couple of potential issues with this. I believe ZB is also on the call and he can attest to this a little bit, but there's not a universally accepted algorithm for how to do language matching. There are some popular ones that are down here, but the exact algorithm I think is sort of up for debate, and there are some other issues with that.

SFC: I believe this proposal is good for stage one though, because these are questions that - I think the fact that we get so many requests for Locale match. I think it means we should at least investigate these problems and I think that there's a lot of room here in this proposal to investigate these problems. This proposal could morph into exposing some of the lower-level building blocks like region containment and things like And I think that's that's a great direction for this proposal to go. So the purpose of asking for stage one is, we have pretty clear motivation. We know that there's a need for this on the web platform and then the purpose of asking for stage one is to investigate this space and arrive at a solution that everyone can agree on.

-SFC: ZB have anything you want to add? Are you on the call? 
+SFC: ZB, do you have anything you want to add? Are you on the call?

ZB: Yes, I am. I don't think I have that much to add.
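As SFC notes, there is no universally accepted matching algorithm. One of the candidates, the "lookup" scheme from RFC 4647, can be sketched in a few lines; the function name and signature here are invented for illustration and are not the proposed `Intl.LocaleMatcher` API.

```javascript
// "Lookup"-style matching in the spirit of RFC 4647 (one candidate
// algorithm among several; hypothetical helper, not the proposed API).
// Each requested tag is truncated from the right ("fr-CA" -> "fr") until
// it matches an available tag; otherwise fall back to the default.
function lookupMatch(requested, available, defaultLocale) {
  const avail = new Map(available.map((t) => [t.toLowerCase(), t]));
  for (const tag of requested) {
    let candidate = tag.toLowerCase();
    while (candidate) {
      if (avail.has(candidate)) return avail.get(candidate);
      const cut = candidate.lastIndexOf("-");
      candidate = cut === -1 ? "" : candidate.slice(0, cut);
    }
  }
  return defaultLocale;
}

console.log(lookupMatch(["fr-CA", "en"], ["es", "fr", "en"], "en")); // "fr"
console.log(lookupMatch(["de-CH"], ["es", "pt"], "en"));             // "en"
```

UTS #35 "best fit" matching is considerably more involved (it weighs distances between languages, scripts, and regions), which is part of why the committee discussion below treats the algorithm choice as an open question.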
My personal hope is that a lot of that is solvable by user land libraries and the real question is going to be more tied to what we talked about yesterday, which is whether the commonality of the use case justifies addition of an API, that could be done by a library, especially in the light of having multiple different potential algorithms to solve it, but I think that it's absolutely a good space for us to explore stage one.

RPR: Okay, so we've had some support. Is there anyone wanting to go on the queue?

-SFC: Just for informational purposes are their delegates here who have worked a lot with the Accept-Language header because this proposal as well as some other related feature requests and Ecma 402 sort of involve different ways to apply or evolve this header and it would be great if we could engage in those conversations while we're in stage 1 with I think that's that's going to be one of the big focuses. Are people familiar with what the Accept-Language header is? 
+SFC: Just for informational purposes, are there delegates here who have worked a lot with the Accept-Language header? This proposal, as well as some other related feature requests in Ecma 402, sort of involves different ways to apply or evolve this header, and it would be great if we could engage in those conversations while we're in stage 1; I think that's going to be one of the big focuses. Are people familiar with what the Accept-Language header is?

RPR: How about you explain it for us?

SFC: I'll go ahead and pop up my web inspector. So if I go into the network tab

[technical difficulties]

-SFC: Okay, so I just made this request to the GitHub page and if I go into the header pain I can go ahead and scroll down to the request headers and the pretty much every request that I send in addition to carrying your cookies and Aries and all those kinds of things. accepts language header, which is right here. This tells you the languages that the page should respond to. I'm not very interesting: in my accept language header I just have “en-US,en” but there's a lot of people, especially multilingual users, who are going to have a longer, more complex list here. One of the purposes of this proposal is to Say, okay. Well given this accept language list. Let me resolve this to resources for the app. So if you have say French Spanish and English all in here in a certain order and your app supports Spanish English and Portuguese then maybe you'll select Spanish if Spanish is ranked higher than English even though you don't don't support French. And that's the kind of thing - ZB has a lot more experience in this area than I do so, so I'm not going to pretend to give an authoritative discussion on this, but think that it would be great if we can engage other people who have worked with these types of headers because I think there's a lot of interesting area to explore here. 
+SFC: Okay, so I just made this request to the GitHub page, and if I go into the header pane I can scroll down to the request headers. Pretty much every request that I send, in addition to carrying your cookies and all those kinds of things, includes the Accept-Language header, which is right here. This tells you the languages that the page should respond to. I'm not very interesting: in my Accept-Language header I just have “en-US,en”, but there are a lot of people, especially multilingual users, who are going to have a longer, more complex list here. One of the purposes of this proposal is to say: okay, given this Accept-Language list, let me resolve this to resources for the app. So if you have, say, French, Spanish, and English all in here in a certain order, and your app supports Spanish, English, and Portuguese, then maybe you'll select Spanish if Spanish is ranked higher than English, even though you don't support French. And that's the kind of thing - ZB has a lot more experience in this area than I do, so I'm not going to pretend to give an authoritative discussion on this, but I think it would be great if we can engage other people who have worked with these types of headers, because I think there's a lot of interesting area to explore here.

ZB: If we have a couple more minutes I can also try to mention some of the open questions. That are going beyond accepted languages that are related to language negotiation, but that's also listed in the issues of the proposal. So interested parties can also go there.

-RBR: We do have still another 11 minutes on the time box. Would you like to share anymore? 
+RBR: We do still have another 11 minutes on the time box. Would you like to share any more?

ZB: Yeah, so I'll try to give people some insight into what the considerations we are thinking about is in some scenarios. You want to negotiate down to one language or one Locale because you want to display your website in some language right, like you need one but in some other cases you may want to negotiate down a list of locals. that you have in some fall back because you may have some resources in the first preferred local to the user ones, but some resources only in the second or third so you're negotiating down not to a single local but to a fallback chain of locals? Another interesting consideration, is that there are some open ended questions that are not arbitrarily resolvable about whether a person prefers an imperfect match. Let's say that someone says, I speak Canadian French and Canadian English - so is South African English better or Swiss French, right. So there are some imperfect matches and how far those two are from each other. Some people will say I'm bilingual and I basically speak English and German and both languages are fine for me. Whichever you have, show it to me.
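The Accept-Language walkthrough SFC gives above, and ZB's point that negotiation may yield an ordered fallback chain rather than a single locale, can be sketched together. Both helpers here are hypothetical illustrations, not proposed API: the header grammar is simplified to comma-separated tags with optional `;q=` weights.

```javascript
// Sketch: parse an Accept-Language header value into a preference-ordered
// list of tags (hypothetical helper; simplified header grammar). A missing
// quality weight defaults to q=1; q=0 entries are excluded.
function parseAcceptLanguage(header) {
  return header
    .split(",")
    .map((part) => {
      const [tag, ...params] = part.trim().split(";");
      const qParam = params.find((p) => p.trim().startsWith("q="));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
      return { tag: tag.trim(), q };
    })
    .filter((entry) => entry.q > 0)
    .sort((a, b) => b.q - a.q) // stable sort keeps header order for ties
    .map((entry) => entry.tag);
}

console.log(parseAcceptLanguage("fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7"));
// [ 'fr-CH', 'fr', 'en', 'de' ]

// Negotiating down to a fallback *chain* (ZB's point): keep every
// supported locale, in the user's preference order, not just the best one.
function negotiateChain(requested, available) {
  const avail = new Set(available);
  return requested.filter((tag) => avail.has(tag));
}

console.log(negotiateChain(["fr-CH", "fr", "en", "de"], ["en", "fr", "pt"]));
// [ 'fr', 'en' ]
```

A real negotiator would also need the subtag-truncation and distance considerations discussed above (so that `fr-CH` can fall back to `fr`, and so that "how far apart" two locales are can be weighed); this sketch only shows the shape of list-in, chain-out negotiation.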
Some people will say I do speak English and German, but my English is perfect and my German is, you know, very bad, or the reverse, and the distance between them is really important. So when you're designing those algorithms there is there's a lot of questions around that and then like what you are going to do with the resulting list has a huge impact on how you want to design this algorithm. If the result is that you're gonna display a date, then regional differences are much more important even than linguistic differences. Because your date format is more uniform between English and French in Canada, than between Great Britain and the United States. If you're going to display texts, my language is more important and the script is more important. In some cases the distance between my first and second language is not as important. In some cases it's crucial and it's prohibitive for a person to understand some complex nuanced information about whether their payment should be cancelled or whether they should accept the transfer request in their second language. So there are a lot of new ones decisions to be made and one of my personal concerns, so - we're diving away from experts' conversations to like ZB's thoughts - is that presenting an API in Ecma 402 that makes it easy to negotiate in, one of those ways is going to kind of hide this complexity and make it very tempting for people to just plaster this solution because it provided by JavaScript even if this is not the right solution for the problem they're trying to solve. 
So, I am of an opinion that since there is a breath of potential algorithms and solution to how to sort and order and filter I prefer to - we should be more guiding people into, or ensuring that we expose building blocks for people to design their own algorithms for their own purposes, but I can strongly agree with the sentiment that having a language matcher negotiation in Ecma 402 seems like a very tempting Generic solution and I don't know exactly what the odds are beat will be. I hope that stage one is exactly what we're going to be discussing, but this is the space that we're going to be working on around this proposal. @@ -491,23 +495,23 @@ RBR: or conversely any objections to stage one? No objections means consensus. C SFC: Thank you. ### Conclusion/Resolution -Stage 1 +Stage 1 ## Inclusion working group updates -Presenter: Mark Cohen (MPC) -- [proposal]() -- [slides]() +Presenter: Mark Cohen (MPC) +- proposal +- slides MPC: This is a status update on the work of the TC39 inclusion group as of today. So first off, I'm going to give a brief overview of what the inclusion group is. So we're an ad hoc informal group of TC39 delegates. We are not a chartered working group or TG. So we don't have any official role, but we have been meeting in this sort of improvised capacity fairly regularly for the last few months. There are meetings on the TC39 events calendar which you can check out if you like. I'll have more details on that in the end and our goal is to work on proposals with the aim to proactively improve inclusion in TC39 for the sake of ensuring that all delegates are able to participate to the best of their ability and that new delegates as well can participate to the best of their ability. -MPC: So we have two status updates to provide today. The first one is on nonviolent communication training. 
This is something that was raised a long time ago within the committee and the inclusion group has kind of taken it up to get it past the Finish Line a brief note before I jump into this. This was entirely done by Dave Poole. I am presenting on his behalf, but he did all of the work with reaching out to trainers. Gathering proposals, all that sort of stuff. So big thank you to Dave. He was not able to be here to present during this time slot. And if you would like to check out the thread with the full details of all the proposals the link is in the slides here. I'll copy it to the notes afterwards. 
+MPC: So we have two status updates to provide today. The first one is on nonviolent communication training. This is something that was raised a long time ago within the committee, and the inclusion group has kind of taken it up to get it past the finish line. A brief note before I jump into this: this was entirely done by Dave Poole. I am presenting on his behalf, but he did all of the work of reaching out to trainers, gathering proposals, all that sort of stuff. So big thank you to Dave; he was not able to be here to present during this time slot. And if you would like to check out the thread with the full details of all the proposals, the link is in the slides here. I'll copy it to the notes afterwards.

-MPC: Nonviolent communication training – the work that we've done so far is first of all research ways that we can bring NVC Things into TC39 meetings. We have a pretty unique format here within this Committee in terms of how we conduct plenaries and how we communicate outside of plenaries as well. So Dave has met with five different trainers from all across the world and and talked with each of them about what TC39 is and given them the details on how we conduct our plenaries how we work as well as the content of that work, and asked for their input on how we might be able to conduct NVC trainings within the committee. 
And then after those sort of initial meetings, he asked each trainer to provide a quote and a proposal for how precisely to deliver this training to this committee for, you know, approximately 60 to 80 people seems to be our average attendance. 
+MPC: Nonviolent communication training – the work that we've done so far is, first of all, researching ways that we can bring NVC practices into TC39 meetings. We have a pretty unique format here within this committee in terms of how we conduct plenaries and how we communicate outside of plenaries as well. So Dave has met with five different trainers from all across the world and talked with each of them about what TC39 is, given them the details on how we conduct our plenaries and how we work as well as the content of that work, and asked for their input on how we might be able to conduct NVC trainings within the committee. And then after those sort of initial meetings, he asked each trainer to provide a quote and a proposal for how precisely to deliver this training to this committee - for, you know, approximately 60 to 80 people, which seems to be our average attendance.

-MPC: So the current status: we have four proposals that have been submitted and you can view the details on that GitHub thread and there's one that's still pending that we've asked to be submitted by February 12th. The proposals run a pretty wide gamut of the exact format. So each different trainer had a fairly unique proposal for the format of the training. So there are lots of different styles of workshops, different lengths, different amounts of workshops. Several of the trainers offered optional additional sessions for smaller groups, so we were mostly thinking like the chair group, the Code of Conduct group, there could also be one for just interested delegates who want to get additional training. 
So there's a lot of flexibility in terms of the format among the different trainers and it'll provide us with, you know, different experiences coming out of the workshop depending on which one we go with. There is also one trainer who is equipped to provide anti-racism training, which is something that the inclusion group intends to bring to this committee after nonviolent communication training. So we're basically intending to use this as like the first step and then proposed anti-racism training once the committee has gone through this. 
+MPC: So the current status: we have four proposals that have been submitted, and you can view the details on that GitHub thread, and there's one that's still pending that we've asked to be submitted by February 12th. The proposals run a pretty wide gamut in terms of exact format. So each trainer had a fairly unique proposal for the format of the training: there are lots of different styles of workshops, different lengths, different numbers of workshops. Several of the trainers offered optional additional sessions for smaller groups - we were mostly thinking like the chair group and the Code of Conduct group, but there could also be one for just interested delegates who want to get additional training. So there's a lot of flexibility in terms of the format among the different trainers, and it'll provide us with, you know, different experiences coming out of the workshop depending on which one we go with. There is also one trainer who is equipped to provide anti-racism training, which is something that the inclusion group intends to bring to this committee after nonviolent communication training. So we're basically intending to use this as, like, the first step and then propose anti-racism training once the committee has gone through this.

MPC: The next step is, we are going to compare the proposals from among the various trainers. 
Eventually, once we've narrowed down and decided on which one we would like to proceed with, we will secure funding, schedule it, and coordinate with the chairs to figure out the exact form, to eventually bring the training to the committee. We don't have any sort of definitive timeline, but certainly within the year; I believe Dave was saying that we should be able to deliver the training sometime in the summer, accounting for all of the various logistical pieces that need to fall into place.

@@ -515,13 +519,13 @@ MPC: So yeah that about sums up the NVC training update. If you would like to jo

RPR: There's nothing on the queue.

-MPC: Okay, lovely. So yes segue into the second topic, which is prototyping Matrix for more accessible real-time committee chat. So currently we're using IRC. There have been a number of issues raised in the past about the continued use of IRC and whether we should and how we might move off of IRC. Most notably was this very large reflector thread which I have linked here in the slides, I will also copy that over to the notes, that provides a lot of context for the kind of foundation of this prototype. 
+MPC: Okay, lovely. So yes, to segue into the second topic, which is prototyping Matrix for more accessible real-time committee chat: currently we're using IRC. There have been a number of issues raised in the past about the continued use of IRC, whether we should move off of it, and how we might. Most notable was this very large reflector thread, which I have linked here in the slides (I will also copy that over to the notes), that provides a lot of context for the foundation of this prototype.

-MPC: So the motivation here is removing barriers to participation for TC39 delegates and the so I'll go into depth on the exact barriers in just a moment, but the but the kind of origin of these barriers is that IRC in this day and age has become somewhat Arcane for people who are used to using IRC all the time. 
It's totally intuitive, it works just fine. There's you know, no problems with using it. But you know, we're at a point in the history of technology now, where not everyone grew up with IRC. And for those who didn't grow up IRC either just by virtue of being younger or perhaps not having been involved. I lived in communities that use IRC widely for people who don't have that experience. It is a really unintuitive tool to use and presents a barrier to participation. So now let's talk about exactly what the barriers are. The biggest ones that we've identified. First of all the structure of IRC. So the various different networks how to join them, how to register on them as a kind of sub point of how nickname management is done. So, you know, unless you most servers use NickServe which is what our server uses freenode, but but you know, you have to know about that if you just kind of naively sign up your user name isn't yours. anybody who wants to claim it? There's also no offline message delivery, which is or at least by default no offline message delivery, which can be really unintuitive. as you know, if you again if you didn't grow up with that experience every other messaging system that you're used to at this point. You can just send a message to somebody and if they're offline, they'll see it when they sign in similarly. There's no message history by default. So if you are offline you miss everything that was said in a channel so the kind of commonly accepted solutions to these things are either use a bouncer which is a fairly time-consuming process. So if you don't already know how to do it, if you're starting from zero, you'd have to acquire some sort of cloud compute server. Learn what a bouncer is and how to administer it, install it and then probably debug a bunch of like Network rules. You could, if you don't want to go that route you could pay for a hosted service, which does provide a relatively decent experience. 
I know a lot of people on this committee myself. IRC cloud and that provides an okay experience, but I think and this is detailed more in the reflector thread. I think we should agree that kind of expecting people to lay out an expense and get it reimbursed by their employer is not necessarily a great idea. You know, it's certainly possible for a lot of people but we should consider that, you know a not everybody has the kind of financial Freedom or trust to do that and be not everybody on this committee is sponsored by like a large corporation who will just no question reimbursed that sort of thing and then the third solution in very heavy air quotes is to just use IRC as is and accept all of these barriers or well, I guess not really the first one but except that, you know, you won't be able to deliver a message to a fellow offline delegate and you won't be able to receive any if you yourself offline. Except that you're just going to miss the entire message history of a channel if you're offline. Now, this is possible. I you know, there are many people on this committee who use IRC this way, but it's you know while that is certainly a totally viable choice that somebody can it doesn't really match the like expected defaults today for what a messaging system should do and it also presents its own barrier of you know, So if you are somebody who's new on this committee, and your maybe trying to learn more about about the committee process or you're trying to reach a delegate to ask them about a proposal of theirs or something like that. Then you have to fight the platform basically to get the knowledge that you need. so after evaluating several different chat platforms, including slack Discord and a whole slew of others. You can look on our repository. There's a big spreadsheet of different platforms. 
+MPC: So the motivation here is removing barriers to participation for TC39 delegates. I'll go into depth on the exact barriers in just a moment, but the kind of origin of these barriers is that IRC in this day and age has become somewhat arcane. For people who are used to using IRC all the time, it's totally intuitive, it works just fine, there are, you know, no problems with using it. But we're at a point in the history of technology now where not everyone grew up with IRC. And for those who didn't grow up with IRC, either just by virtue of being younger or perhaps not having been involved in communities that use IRC widely, it is a really unintuitive tool to use and presents a barrier to participation. So now let's talk about exactly what the barriers are - the biggest ones that we've identified. First of all, the structure of IRC: the various different networks, how to join them, how to register on them, and, as a kind of sub-point, how nickname management is done. You know, most networks use NickServ, which is what our network, freenode, uses, but you have to know about that; if you just kind of naively sign up, your username isn't yours - anybody who wants to can claim it. There's also no offline message delivery - or at least, by default, no offline message delivery - which can be really unintuitive, because if you didn't grow up with that experience, in every other messaging system that you're used to at this point you can just send a message to somebody, and if they're offline they'll see it when they sign in. Similarly, there's no message history by default, so if you are offline you miss everything that was said in a channel. The commonly accepted solutions to these things are: either use a bouncer, which is a fairly time-consuming process - if you don't already know how to do it, if you're starting from zero, you'd have to acquire some sort of cloud compute server, learn what a bouncer is and how to administer it, install it, and then probably debug a bunch of, like, network rules. Or, if you don't want to go that route, you could pay for a hosted service, which does provide a relatively decent experience - I know a lot of people on this committee, myself included, use IRCCloud, and that provides an okay experience - but, and this is detailed more in the reflector thread, I think we should agree that expecting people to lay out an expense and get it reimbursed by their employer is not necessarily a great idea. You know, it's certainly possible for a lot of people, but we should consider that (a) not everybody has the kind of financial freedom or trust to do that, and (b) not everybody on this committee is sponsored by, like, a large corporation who will reimburse that sort of thing, no questions asked. And then the third solution, in very heavy air quotes, is to just use IRC as is and accept all of these barriers - or, well, I guess not really the first one, but accept that, you know, you won't be able to deliver a message to a fellow delegate who is offline and you won't be able to receive any if you're offline yourself, and accept that you're just going to miss the entire message history of a channel if you're offline. Now, this is possible - there are many people on this committee who use IRC this way - and while that is certainly a totally viable choice that somebody can make, it doesn't really match the expected defaults today for what a messaging system should do. And it also presents its own barrier: if you are somebody who's new on this committee, and you're maybe trying to learn more about the committee process, or you're trying to reach a delegate to ask them about a proposal of theirs or something like that, then you have to fight the platform, basically, to get the knowledge that you need. So, after evaluating several different chat platforms, including Slack, Discord, and a whole slew of others - you can look on our repository, there's a big spreadsheet of different platforms.

-MPC: We identified Matrix as the best Target for a potential migration, you know, should we decide to move off of IRC? What we identified is that Matrix is probably the best destination. destination. So Matrix solves all of these barriers, it just gives an out-of-the-box experience that doesn't present any of these barriers that IRC does and on top of that it gives us much better moderation tools. So that's both kind of at the structural level in terms of permissions and you know, assigning different roles to users and at the individual level where it gives a there's a there's a better ability for delegates to kind of protect themselves and moderate their own experience, which was something that was identified again way back in that big reflector thread as kind of an important property for a modern chat platform. It also gives us built in logging. This is a pretty big one. The legal policies TC39 has to abide by all of our technical discussion has to be logged. Doing that on IRC is a pretty big pain. And in fact, there are several channels. That probably should be logged. 
that kind of match, you know other modern chat Platforms in onboarding experience and interface expectations Etc, but then there are also plenty of you know command line clients or other more advanced things for those who like to configure their clients lot. There's also a federation which is a really nice property. Ernie's I know there are several companies who are Ecma members and participate in this committee who use Matrix internally Mozilla Galia and I think beaucoup, but correct me if I'm wrong on that all use Matrix, and so, you know the TC39 channels can just federates seamlessly with those delegates existing accounts. So that's a nice property and lastly there is an IRC bridge for those who really like their IRC client. We have not prototyped that yet. So we definitely are looking for feedback on that, but it does exist and it is kind of used in the Wild by other Matrix users. +MPC: We identified Matrix as the best Target for a potential migration, you know, should we decide to move off of IRC? What we identified is that Matrix is probably the best destination. destination. So Matrix solves all of these barriers, it just gives an out-of-the-box experience that doesn't present any of these barriers that IRC does and on top of that it gives us much better moderation tools. So that's both kind of at the structural level in terms of permissions and you know, assigning different roles to users and at the individual level where it gives a there's a there's a better ability for delegates to kind of protect themselves and moderate their own experience, which was something that was identified again way back in that big reflector thread as kind of an important property for a modern chat platform. It also gives us built in logging. This is a pretty big one. The legal policies TC39 has to abide by all of our technical discussion has to be logged. Doing that on IRC is a pretty big pain. And in fact, there are several channels. That probably should be logged. 
that aren't or that you know, it's very difficult to access the logs or perhaps their gaps in the logs Etc. The overall point. Is that doing logging on IRC is always kind of a third-party taped together ad hoc solution whereas Matrix if you create a public room on a Home Server that supports it which The Matrix dot-org Home Server does then you can just click a link and see the entire history of the room and it gives us It gives us that completely for free like IRC. It's an open source protocol. There are many open-source clients including many high-quality graphical ones. that kind of match, you know other modern chat Platforms in onboarding experience and interface expectations Etc, but then there are also plenty of you know command line clients or other more advanced things for those who like to configure their clients lot. There's also a federation which is a really nice property. Ernie's I know there are several companies who are Ecma members and participate in this committee who use Matrix internally Mozilla Galia and I think beaucoup, but correct me if I'm wrong on that all use Matrix, and so, you know the TC39 channels can just federates seamlessly with those delegates existing accounts. So that's a nice property and lastly there is an IRC bridge for those who really like their IRC client. We have not prototyped that yet. So we definitely are looking for feedback on that, but it does exist and it is kind of used in the Wild by other Matrix users. -MPC: So what we've got going on right now is basically a pilot program. We've set up an analogous set of channels to what already exists on IRC. So we have kind of the general TC39 channel, the TC39 delegates Channel and Temporal Dead Zone, and then we also have our inclusion group Channel. 
I have also set up a channel specifically for feedback and kind of notes on this pilot or prototype program and anybody is welcome to just create a new channel if they you know, if you want to kind of bring a topic over from IRC and see how it works on Matrix. You can just create a new channel through pretty much every Matrix clients. 
+MPC: So what we've got going on right now is basically a pilot program. We've set up an analogous set of channels to what already exists on IRC: we have the general TC39 channel, the TC39 delegates channel, and Temporal Dead Zone, and then we also have our inclusion group channel. I have also set up a channel specifically for feedback and notes on this pilot or prototype program, and anybody is welcome to just create a new channel - if you want to bring a topic over from IRC and see how it works on Matrix, you can create a new channel through pretty much every Matrix client.

MPC: Okay. Well, this is the last slide: how can you participate? If you want to join us, there's the Matrix room right there; I'll copy that link over. We also have an inactive channel on freenode. We have a GitHub repository, TC39 / inclusion working group, and we have calls every other Friday which are on the TC39 events calendar. Please reach out to me if you would like a direct invite to those calls sent to your inbox so you can add it to your personal or work calendar, and I believe that'll do it. Apologies for running over time here.

@@ -530,66 +534,65 @@ RPR: That was an excellent summary. Thank you Mark.

MPC: Glad to do it.

## Incubation Chartering

-Presenter: Shu Yu-Guo (SYG) 
-- [proposal]()
-- [slides]()
+Presenter: Shu Yu-Guo (SYG)
+- proposal
+- slides

-SYG: So first of all, there are two overflow items from last year, those two are the error cost proposal at stage 2 and the module block proposal, which is now at stage 2. It was at stage one and now stage two. 
+SYG: So first of all, there are two overflow items from last year: the error cause proposal at stage 2 and the module block proposal, which is now at stage 2 - it was at stage one and is now at stage two.

-SYG: I would like to call out and ask for the defer module import evaluation to be included in the next Charter. I would love to discuss it personally, and I think it's a good fit. 
+SYG: I would like to call out and ask for the deferred module import evaluation proposal to be included in the next charter. I would love to discuss it personally, and I think it's a good fit.

-YSV: Yeah, I would be happy to do that, I'm very excited about an incubator all for this. 
+YSV: Yeah, I would be happy to do that; I'm very excited about an incubator call for this.

-SYG: That sounds good to me. So I won't have that and the only other one I had in mind was async do Expressions given that it is earlier than the do. Kevin, do you think there are feedback items that you would like to ask at the incubator call? And would you benefit from an incubator call them? 
+SYG: That sounds good to me. So I won't have that, and the only other one I had in mind was async do expressions, given that it is earlier than do expressions. Kevin, do you think there are feedback items that you would like to raise at the incubator call? And would you benefit from an incubator call then?

-KG: I actually think the design space for async do is fairly constrained, at least assuming that it carries over the decisions from do expressions. 
+KG: I actually think the design space for async do is fairly constrained, at least assuming that it carries over the decisions from do expressions.

-SYG: So, okay, then I am happy to omit that from the next Charter. So currently we have three carry overs which are error cause, module blocks, and then the Deferred module import eval proposal from Yulia. We probably have a time for one or two more depending on how many we can run but three is certainly fine. If no other volunteers speak up. 
+SYG: So, okay, then I am happy to omit that from the next charter. So currently we have three carry-overs, which are error cause, module blocks, and then the deferred module import eval proposal from Yulia. We probably have time for one or two more depending on how many we can run, but three is certainly fine if no other volunteers speak up.

-DE: Yeah, there's time then well if we don't get through the topic today, then I really want to draw a conclusion on this protocol design issue that is next on the agenda. agenda. So that's sort of conditional on it being left unsolved in the brand kicking topic might be interesting for a group, but I would want to leave that to the volunteer Champions to decide the brand check 
+DE: Yeah, there's time then - well, if we don't get through the topic today, then I really want to draw a conclusion on this protocol design issue that is next on the agenda. So that's sort of conditional on it being left unresolved. The brand checking topic might be interesting for a group, but I would want to leave that to the volunteer champions to decide.

SYG: That sounds good. We currently have three confirmed and one possible. Are there any Champions or any delegates who would like to volunteer to have a topic be discussed? [no]

SYG: So we will be going with three, with a possibility of a fourth. Look out for the email for the new charter. As a quick recap for folks who are not familiar: these calls are an hour long, I try to schedule them with a doodle about a week ahead of time, they happen every other week, and minutes are published just like meeting notes are published. Look for an issue on the reflector for the upcoming topic, meeting details, times, and so on. All right. Thank you very much.

-RBR: Thank you, Shu. -
## Protocols in JavaScript

-Presenter: Dan Ehrenberg (DE) 
-- [slides](https://docs.google.com/presentation/d/1G8g0MSpMeJJeRNbiC89y2q-nxJ8371JczMaxN8ksjPk/edit)
+Presenter: Dan Ehrenberg (DE)
+- [slides](https://docs.google.com/presentation/d/1G8g0MSpMeJJeRNbiC89y2q-nxJ8371JczMaxN8ksjPk/edit)

-DE: Okay. Thanks. So I wanted to talk about protocols in JavaScript. And the reason I want to talk about protocols is because it came up in some design discussions about Temporal. Temporal lets you design custom calendars and time zones. These are based on methods that get called for this custom behavior. It's important because calendars and time zones are culturally defined and can change over time. JavaScript engines especially with internationalisation included will have pretty good information, but the application may have more appropriate information. So in designing these custom time zones and calendars I've been working with the Temporal champions group to align to my understanding of TC39 convention on how protocols are used. In issues, JHD has raised a different idea of how this should work and I wanted to discuss this more broadly with the committee because I'd like - the proposal should be hopefully concluded on. 
+DE: Okay. Thanks. So I wanted to talk about protocols in JavaScript. And the reason I want to talk about protocols is because it came up in some design discussions about Temporal. Temporal lets you design custom calendars and time zones. These are based on methods that get called for this custom behavior. It's important because calendars and time zones are culturally defined and can change over time. JavaScript engines, especially with internationalisation included, will have pretty good information, but the application may have more appropriate information. So in designing these custom time zones and calendars I've been working with the Temporal champions group to align with my understanding of TC39 conventions on how protocols are used. 
In issues, JHD has raised a different idea of how this should work, and I wanted to discuss this more broadly with the committee, because I'd like the proposal to hopefully be concluded on.

-DE: So two big questions to answer are, should calendars and time zones be required to be subclasses of the built-in Calendar and TimeZone or should it just be anything that conforms to the protocol. 
The other question is should these methods that custom calendars and time zones can override be named by symbols or strings. So this second one, we already discussed in committee, but there were some claims that the first answer might affect it. You can see the Temporal documentation, with custom calendars the recommended way to subclass Calendar, but really you just need to fill in a few methods. And time zones are similar. There is a recommendation that you set up a class that extends Temporal.TimeZone and then you can override certain methods, but you could also just implement those methods and use the protocol. Just for context, as we discussed earlier, the Temporal proposal is frozen. So the draft is complete. The polyfill is out there and released and it's all ready for review. Please try it out and file bugs. The champion group is not working on new changes, just responding to the review from the committee. So this issue that I want to discuss is very much not core to the Temporal proposal. It's not something that the Temporal champion group has expressed interest in blocking the proposal over. This is something that we should just understand what our conventions are and follow them. So I'm personally especially interested in maintaining consistent conventions. And so any changes that we make from here would be very small and localized. DE: What is a protocol? I think of a protocol as a set of methods with a contract of how to call them. So we have many different protocols in JavaScript. The iteration and iterable protocol is a big one. But some smaller ones are things like in the Set constructor, the add() method of the newly created object is called for each element of the iterable that's passed as an argument. 
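DE mentions the Set constructor as one of the smaller protocols: when given an iterable, the constructor looks up the add() method on the object under construction and calls it once per element, so a subclass's override observes construction. A small illustrative sketch (the class name is made up for the example):

```javascript
// The Set constructor follows a small protocol: it reads add() off the newly
// created object and calls it for each element of the iterable argument, so
// an overridden add() sees every initial element.
class CountingSet extends Set {
  add(value) {
    CountingSet.addCalls++; // count every add(), including during construction
    return super.add(value);
  }
}
CountingSet.addCalls = 0;

const s = new CountingSet(["a", "b", "c"]);
// s.size is 3, and CountingSet.addCalls is also 3: the three initial
// elements were all fed through the overridden add().
```

Note there is no brand check anywhere in this: any object whose add() is callable would satisfy the constructor's contract.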
And similarly, Calendar and TimeZone are protocols, they also have concrete classes just like iterator.prototype, like Set or RegExp have concrete classes, but they also work as protocols with these methods that you can subclass that existing thing and override those methods and the methods will be called in a certain way with certain expectations of their return values. -DE: There was the question raised of subclassing or brand checking. So when you make a custom calendar or time zone should you be required to subclass the existing Temporal.Calendar or Temporal.TimeZone? Well if you think about any of these proposals, any of these protocols, none of them have such a subclass check. No protocol does brand checks. Protocols are always about the name of the method, how it's called and how it handles return values. So for an example, we have iterator.prototype, and you should extend iterator.prototype if you make your own iterator, this is what iterator helpers will be based on, but there's no iterator superclass that sets the brand and there are no checks for this brand. Temporal documentation outlines that it's probably easiest to subclass existing calendars, but that's not how the protocol itself works. +DE: There was the question raised of subclassing or brand checking. So when you make a custom calendar or time zone should you be required to subclass the existing Temporal.Calendar or Temporal.TimeZone? Well if you think about any of these proposals, any of these protocols, none of them have such a subclass check. No protocol does brand checks. Protocols are always about the name of the method, how it's called and how it handles return values. So for an example, we have iterator.prototype, and you should extend iterator.prototype if you make your own iterator, this is what iterator helpers will be based on, but there's no iterator superclass that sets the brand and there are no checks for this brand. 
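The "no brand check" point about iteration can be seen with a plain object; a sketch (the shape is all that matters, there is no Iterator superclass involved):

```javascript
// Any object with a Symbol.iterator method participates in for-of and
// spread; the protocol checks method shape only, never a brand.
const countdown = {
  from: 3,
  [Symbol.iterator]() {
    let n = this.from;
    return {
      next: () =>
        n > 0 ? { value: n--, done: false } : { value: undefined, done: true },
    };
  },
};

console.log([...countdown]); // → [3, 2, 1]
```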
Temporal documentation outlines that it's probably easiest to subclass existing calendars, but that's not how the protocol itself works.

DE: So personally, I think we should stick with this pattern for the calendar and time zone protocols, and that's what the Temporal proposal does. Okay. So here's a funny example with RegExp where you can extend RegExp, or you can make some crazy thing yourself that meets the RegExp protocol and call the RegExp methods on it. So, you know, we've been discussing how actually this whole RegExp subclassing thing is kind of weird. At one of my first TC39 meetings I argued that we should remove this RegExp subclassing, and recently SYG and YSV have been pushing for this in a more concrete way. And I think that's a great effort. So it's important not to overuse these protocols. Some subclassing is one thing, getting the new.target plumbing, but another thing is calling all of these user-defined functions in a way that might not be useful. So Calendar and TimeZone are different from RegExp in that for RegExp we don't have any meaningful use cases for subclassing it, but for Calendar and TimeZone it's really important — we've identified concrete calendars where there's real usage of things that aren't yet encoded in all of the standards, and it's important to be able to support these different cultures in code.

DE: The solution, if you have a protocol that's not useful, is to remove the calls to the methods; adding brand checks but still calling the methods would not create a simplification for any of the protocols that are removed by the subclassing-built-ins removal proposal. So I think that might be a source of confusion.

-DE: On kind a lighter topic, whether symbols or strings are used. You can see that in the existing protocols. There's just a variety. Sometimes symbols are used, sometimes strings are used, you know, like the iterator next() method. I think it's fine for symbols or strings to both be used in protocols.
I don't think it has to be one or the other.
+DE: On kind of a lighter topic: whether symbols or strings are used. You can see that in the existing protocols there's just a variety. Sometimes symbols are used, sometimes strings are used, you know, like the iterator next() method. I think it's fine for symbols or strings to both be used in protocols. I don't think it has to be one or the other.

-DE: So protocols and explicit language features. There are different language features that try to encode protocols. One is TypeScript interfaces, which you could say. It's not a TC39 language feature. It's not one that's has runtime semantics, but it's still a construct that people align their protocols around and then we could also have first-class protocols. So you could take as a design requirement using the language to be expressible according to these other concepts of protocols, but they don't provide strong guidance on symbols versus strings or fore brand checking. They don't tend to come with these brands. I wanted to leave these questions to answer when we've gone through the queue, so I'm happy to go through the clarifying questions.
+DE: So, protocols and explicit language features. There are different language features that try to encode protocols. One is TypeScript interfaces, which, you could say, is not a TC39 language feature and has no runtime semantics, but it's still a construct that people align their protocols around; and then we could also have first-class protocols. So you could take it as a design requirement that uses of the language be expressible according to these other concepts of protocols, but they don't provide strong guidance on symbols versus strings or for brand checking. They don't tend to come with these brands. I wanted to leave these questions to answer when we've gone through the queue, so I'm happy to go through the clarifying questions.

-JHD: You said no protocol does a brand check.
This is false; Promise.then() does have the special behavior if it has the brand?
+JHD: You said no protocol does a brand check. This is false; Promise.then() does have the special behavior if it has the brand?

DE: In what sense is Promise a protocol? Oh, Promise.prototype.then()? So this seems like a confusion of two things: one thing is the protocol, which is like the pattern of having thenables. And then another thing is like an instance of the protocol, like concrete promises. So it's normal for an instance of the protocol to do brand checks against itself.

JHD: A protocol is typically one method: a thenable is something with a then, an array-like is something that has a length, something that can be stringified is something with a toString, etc. And then some of the things that accept a protocol (meaning something that has one method), like promises, coerce it to a branded object as part of the process. A multi-method protocol is basically— there's only a couple of patterns. The big one is RegExp, which definitely does a brand check at times, but also calls different methods; and the other one that was pointed out is Set and Map. The Set constructor calls add(), which is a protocol, but Set.prototype.size, for example, checks the brand and throws if it lacks the brand, so we have a mix of those things, and I agree that a mix of those things is problematic. That is why the Set method didn't go forward, but I think that the committee does not actually have a clear consensus on what subclassability means. Meaning, it's not simply that everything observably calls methods, so you just override the individual methods you want and you can use all the others. It's not simply that everything uses the brand and all the behavior is controlled through branding or through passing arguments to the original constructor.
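For reference, the thenable coercion JHD describes works on shape alone, while the coerced result carries the Promise brand; a minimal sketch:

```javascript
// Any object with a then() method is a thenable; Promise.resolve()
// assimilates it with no brand check on the input.
const thenable = {
  then(resolve) {
    resolve(42);
  },
};

const p = Promise.resolve(thenable);
console.log(p instanceof Promise); // → true: the result is a branded Promise
p.then((v) => console.log(v)); // prints 42 asynchronously
```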
It's some hybrid in the middle that continues to cause problems and I see the pattern that is set up with the Calendar and TimeZone as furthering that confusion and that's where I'm coming from here. -DE: So I think the guiding principle really comes back to this protocol versus instance of a protocol thing or instance implementation of a protocol. The examples you gave, when they use internal slots, they're doing so to do a— they're not just checking the slot and then going on, they're using the— they're doing something based on knowing that it is an instance of that, whereas for Calendar and TimeZone, it would be purely cosmetic. +DE: So I think the guiding principle really comes back to this protocol versus instance of a protocol thing or instance implementation of a protocol. The examples you gave, when they use internal slots, they're doing so to do a— they're not just checking the slot and then going on, they're using the— they're doing something based on knowing that it is an instance of that, whereas for Calendar and TimeZone, it would be purely cosmetic. KG: I wanted to push back a little on the definition of protocol that you offered. I would say that protocols are only when you are talking about objects that might do something other than this. So like iterable is a protocol because there are lots of different kinds of things that can be iterable, but for other things for things like this, it's not — the thing that you would be passing through these methods from my understanding isn't ever intended to serve any other function. it's not like you were having some generic collection that in addition has these methods for some reason. So I would not consider the calendar and time zone things that you're discussing to be protocols in the same way that iterable is a protocol or like disposable is a protocol, and for that reason my preference is that Calendar and TimeZone act like grab bags of arguments. Because that's effectively what they're doing. 
They're providing specific arguments to a function, and the most convenient way to do that is to have string-named arguments, and I think that's perfectly reasonable to do for the specific use case of providing specific values to some function when it's not a generic protocol that you might attach to an arbitrary object.

-DE: It's really interesting that you raised that because this whole grab bag of arguments is exactly what the Temporal champions were leaning towards before I went in and said hey, we should use a class for this and the reason is because exactly like you say, it's completely intuitive. That's what you do in JavaScript. You would have an option for an object with a bunch of functions in it. But at that point it makes sense to have a shared prototype. So this was part of the design feedback that I gave to Temporal.
+DE: It's really interesting that you raised that, because this whole grab bag of arguments is exactly what the Temporal champions were leaning towards before I went in and said, hey, we should use a class for this. And the reason is because, exactly like you say, it's completely intuitive. That's what you do in JavaScript: you would have an options object with a bunch of functions in it. But at that point it makes sense to have a shared prototype. So this was part of the design feedback that I gave to Temporal.

KG: I forget whether Temporal has the brand checks or not. I am opposed to brand checks.

@@ -597,15 +600,15 @@ DE: It doesn't have brand checks.

KG: That sounds great to me. Having classes that provide a convenient way of creating these objects when you want default functionality, but allowing users to create objects of their own that conform to that shape and just serve as a bag of arguments, seems great. I'm in favor of that design. I just don't want to call it a protocol and I don't want it to inform how we think about protocols like disposable.
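As an aside on the disposable comparison: a symbol-keyed protocol method cannot collide with string-named properties an object uses for its own purposes. A sketch, using a local symbol as a stand-in (not the real `Symbol.dispose`):

```javascript
// A symbol-keyed protocol method coexists with an ordinary string-named
// method of the same conceptual name; there is no namespace collision.
const dispose = Symbol("dispose"); // stand-in for a well-known symbol

const resource = {
  dispose() {
    return "app-level method"; // unrelated string-named method
  },
  [dispose]() {
    return "protocol-level method";
  },
};

console.log(resource.dispose()); // → "app-level method"
console.log(resource[dispose]()); // → "protocol-level method"
```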
-DE: I agree that disposable should be a symbol and - I think I overstated my case in these slides. This is something that we talked about previously, about Calendar and TimeZone primarily serving as a thing that has this calling convention (if we want to avoid the word protocol) whereas disposable, when it's on a shared object then you have this higher chance of namespace collisions. So it's a very practical thing that its Symbol gives you. I also want to acknowledge the point that JHD made earlier, which is that the JavaScript standard library is just very small right now. We don't have very much precedent and we don't need to be just coasting off of precedent. We can do active design here. We should be doing that and I think having these patterns can help us expand the library further and help enable future proposals. +DE: I agree that disposable should be a symbol and - I think I overstated my case in these slides. This is something that we talked about previously, about Calendar and TimeZone primarily serving as a thing that has this calling convention (if we want to avoid the word protocol) whereas disposable, when it's on a shared object then you have this higher chance of namespace collisions. So it's a very practical thing that its Symbol gives you. I also want to acknowledge the point that JHD made earlier, which is that the JavaScript standard library is just very small right now. We don't have very much precedent and we don't need to be just coasting off of precedent. We can do active design here. We should be doing that and I think having these patterns can help us expand the library further and help enable future proposals. -MM: Let me contrast two polar opposite patterns, and I don't know where Calendar falls on— you know, if it's one of them or on a spectrum between them, but there's protocols like iterator as you mentioned where the exposed surface is for the clients to invoke and the built-in mechanisms are not are there. 
You're not parameterizing some built-in mechanism with behavior. Whereas the RegExp thing that we all don't like anymore, that is one where you're parameterizing the behavior of some methods by providing the behavior of other methods and it's going through a self-send. In the self-send is the means by which the other methods are looked up to parameterize the first set of methods for the parameter on parameterizing with behavior. I have come to really dislike more and more doing it with something that looks like subclassing. For that I'm more and more inclined to use what I'll call the Handler pattern familiar from proxies with handlers where you parameterize the proxy creation with a bag of methods with promises. We probably went through effort to make promises subclassable although the original design did not have that and had instead a Handler-like extension point. We tried to use the subclassing then to do what we could do with the Handler-like extension point and found we could not, and we're now proposing a Handler-like extension point anyway. So the nice thing about the Handler-like extension point is it only exposes the parameterisation to the behavior that needs to be parameterized, and it separates that from exposing that to direct invocations by the clients of the abstraction. So I think that's all I want.
+MM: Let me contrast two polar opposite patterns, and I don't know where Calendar falls on— you know, if it's one of them or on a spectrum between them, but there's protocols like iterator, as you mentioned, where the exposed surface is for the clients to invoke and the built-in mechanisms are not there. You're not parameterizing some built-in mechanism with behavior. Whereas the RegExp thing that we all don't like anymore, that is one where you're parameterizing the behavior of some methods by providing the behavior of other methods, and it's going through a self-send.
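The self-send MM describes can be sketched in a few lines (the class and method names are illustrative, not from any proposal):

```javascript
// "Self-send": a base method looks up another method on `this`, so a
// subclass override parameterizes the base method's behavior.
class Base {
  run() {
    return this.step() * 2; // self-send: dispatches through `this`
  }
  step() {
    return 1;
  }
}

class Sub extends Base {
  step() {
    return 21; // overriding step() changes what run() does
  }
}

console.log(new Base().run()); // → 2
console.log(new Sub().run()); // → 42
```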
The self-send is the means by which the other methods are looked up, parameterizing the first set of methods with behavior. I have come to really dislike, more and more, doing it with something that looks like subclassing. For that I'm more and more inclined to use what I'll call the Handler pattern, familiar from proxies with handlers, where you parameterize the proxy creation with a bag of methods. With promises, we probably went through effort to make promises subclassable, although the original design did not have that and had instead a Handler-like extension point. We tried to use the subclassing to do what we could do with the Handler-like extension point and found we could not, and we're now proposing a Handler-like extension point anyway. So the nice thing about the Handler-like extension point is that it only exposes the parameterisation to the behavior that needs to be parameterized, and it separates that from direct invocations by the clients of the abstraction. So I think that's all I want.

DE: It's pretty unclear to me how that could be applied here, because there just isn't much there. A trivial Handler wrapper is the identity function, and it makes what you're saying kind of potentially line up with what KG said. We don't necessarily need encapsulation here, but this bag of methods or bag of handlers is the operative pattern. Well, the bag's a different way of explaining it to people.

MM: The key thing is that the bag of handlers is provided as a construction argument and is not exposed on the object.

-DE: Well, the normal way of using calendars and time zones is just to use the built-in ones. You pass in a string that looks up in the JavaScript engine's list of calendars and time zones, and you just go from there. Passing your own handlers is the superpower feature.
+DE: Well, the normal way of using calendars and time zones is just to use the built-in ones.
You pass in a string that looks up in the JavaScript engine's list of calendars and time zones, and you just go from there. Passing your own handlers is the superpower feature. MM: The key question is, is the behavior of any of the existing built-in methods to look up and invoke another method on `this` to do a self-send in order to delegate part of their behavior to something that's overridable? @@ -619,49 +622,49 @@ PFC: Yes, it is the case that Calendar and TimeZone methods call other Calendar MF: A lot of what I wanted to say here was covered by KG and I'll just say I agree with KG's positions there and what was talked about with DE. What I would add on to it is is that we had this discussion previously both in the issue tracker on issue 310 and last year in February with the conclusion that these calendars and the time zone objects do not have multiple responsibilities, and multiple responsibilities are the defining factor for why we would choose to use a symbol-based protocol. So I think we should stick with the conclusion that we arrived at then, and what's currently in the proposal, to answer this specific question for Calendar and TimeZone objects. -WH: As object-oriented programming has evolved, plenty of folks have come to the conclusion that subclassing using self-sends is a code smell and I’m in that group as well. I don't care if people do that in their internal projects, but I do not want to promulgate APIs in the language which require subclassing with self-sends. +WH: As object-oriented programming has evolved, plenty of folks have come to the conclusion that subclassing using self-sends is a code smell and I’m in that group as well. I don't care if people do that in their internal projects, but I do not want to promulgate APIs in the language which require subclassing with self-sends. -SYG: What is the word that you said? +SYG: What is the word that you said? WH: “Self-sends”, it was the same term MM was using. -DE: What does it mean? 
+DE: What does it mean? -WH: The issue is that in subclassing you have API surfaces both for calling your class from the outside and for which of your methods internally call which of your other methods. Those can then be intercepted by overriding and that is a tremendously difficult thing to get right. It's almost always a code smell if you use this for APIs which you might want to change later. It’s fine if you're doing this internally in your own projects, but it's a really bad idea for public APIs for interfacing with other projects. +WH: The issue is that in subclassing you have API surfaces both for calling your class from the outside and for which of your methods internally call which of your other methods. Those can then be intercepted by overriding and that is a tremendously difficult thing to get right. It's almost always a code smell if you use this for APIs which you might want to change later. It’s fine if you're doing this internally in your own projects, but it's a really bad idea for public APIs for interfacing with other projects. DE: Okay. So overall based on what was in this presentation, and what we discussed, how do you feel about the current state of the Temporal proposal? -WH: I don't recall enough of the details of the Temporal proposal. My point is that we should not require subclassing of anything in language APIs. +WH: I don't recall enough of the details of the Temporal proposal. My point is that we should not require subclassing of anything in language APIs. -DE: Okay. Thanks. +DE: Okay. Thanks. -JHD: So yeah, if that's the opinion, that we don't want to encourage subclassing, then why is it a class? We can achieve the same goal of providing default behavior with just a function that spits out an object with the methods, the functions on that object can be `===` every time you call it, you know, there's like lots of ways to handle that. So providing a class means encouraging subclassing. 
That's the point of the class, so you can extend it.
+JHD: So yeah, if that's the opinion, that we don't want to encourage subclassing, then why is it a class? We can achieve the same goal of providing default behavior with just a function that spits out an object with the methods; the functions on that object can be `===` every time you call it; you know, there's like lots of ways to handle that. So providing a class means encouraging subclassing. That's the point of the class, so you can extend it.

-WH: No, a class is a means of encapsulation. You can use classes for years without ever subclassing anything.
+WH: No, a class is a means of encapsulation. You can use classes for years without ever subclassing anything.

DE: I agree with WH here; classes can be any of these things.

WH: What I was objecting to is issues that we ran into with RegExp, for instance. RegExp is a fine class on its own. It's a bad idea to subclass from it.

-DE: So the pattern here in the current Temporal proposal doesn’t encourage subclassing. but the observable semantics don't. It's just a protocol there as exactly as WH says. A class is simply a way of bringing together that bag of state and behavior. This is like the other side of the coin for object-oriented, ad hoc polymorphism to dispatch methods. If we made it a bag just like an object that had these functions then the product would not have a shared prototype only take more memory and have worse performance in inline caches, things like that in practice. JavaScript engines know how to optimize for the prototype case in a way that's more difficult to optimize for the exploded-out object with functions case.
+DE: So the pattern here in the current Temporal proposal: the documentation may encourage subclassing, but the observable semantics don't. It's just a protocol there, exactly as WH says. A class is simply a way of bringing together that bag of state and behavior.
This is like the other side of the coin of object orientation: ad hoc polymorphism to dispatch methods. If we made it a bag, just an object that had these functions, then the product would not have a shared prototype; it would only take more memory and have worse performance in inline caches, things like that, in practice. JavaScript engines know how to optimize the prototype case in a way that's more difficult for the exploded-out object-with-functions case.

-JHD: So I can't speak to the performance or memory profile of an object with free used functions versus a shared prototype, etc. But I agree that a class is one method of encapsulating an interface, as is an object, as would be a function that spits out an object, but the thing that you also get with a class is implicit encouragement of subclassing. That's what the `extends` keyword is, therefore people do it, and it seems strange to me if we don't want people to subclass these things that we would choose one of the multiple methods of encapsulating the interface that encourages subclassing. And to be clear I'm using the word interface not protocol because it's a set of multiple methods. And to me, that's the difference.
+JHD: So I can't speak to the performance or memory profile of an object with free functions versus a shared prototype, etc. But I agree that a class is one method of encapsulating an interface, as is an object, as would be a function that spits out an object; but the thing that you also get with a class is implicit encouragement of subclassing. That's what the `extends` keyword is for, therefore people do it, and it seems strange to me, if we don't want people to subclass these things, that we would choose the one of the multiple methods of encapsulating the interface that encourages subclassing. And to be clear, I'm using the word interface, not protocol, because it's a set of multiple methods. And to me, that's the difference.
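The trade-off under discussion, shared prototype versus a factory that "spits out an object", can be made concrete with a sketch (names are illustrative):

```javascript
// A class shares one set of methods via the prototype; a factory creates
// fresh function objects on every call.
class Greeter {
  greet(name) {
    return `hello, ${name}`;
  }
}

function makeGreeter() {
  return {
    greet(name) {
      return `hello, ${name}`;
    },
  };
}

// Same observable behavior either way:
console.log(new Greeter().greet("x") === makeGreeter().greet("x")); // → true

// But only the class shares methods between instances:
console.log(new Greeter().greet === new Greeter().greet); // → true
console.log(makeGreeter().greet === makeGreeter().greet); // → false
```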
-DE: Well, I mean if we were to try to avoid using constructors whenever we wanted to discourage people from subclassing that would mean a pretty radical change to the proposal. For example, we have Temporal.PlainDate, which is a constructor, and we're not especially encouraging subclassing of that. It's not really useful to subclass that, there's no particular user-defined methods that get called on it. Comprehensively avoiding constructors for these cases is kind of a big ask. +DE: Well, I mean if we were to try to avoid using constructors whenever we wanted to discourage people from subclassing that would mean a pretty radical change to the proposal. For example, we have Temporal.PlainDate, which is a constructor, and we're not especially encouraging subclassing of that. It's not really useful to subclass that, there's no particular user-defined methods that get called on it. Comprehensively avoiding constructors for these cases is kind of a big ask. WH: I have no issues with classes. I just don’t want to require users to subclass something from a built-in to do what they want. -BFS: So I think one of the key takeaways that's being said is we don't actually want to necessarily discourage subclassing as a whole category of programming behavior here. The things that we've discussed in the past meetings about what problems are with subclassing are specific like WH and MM said. The use case here for Temporal is not actually having any sort of crosstalk or specialized behavior where a shared prototype would act differently than having an object. But, ergonomics-wise, it would be much simpler to use a class and if we do ban the ability to use a class it would behoove us to figure out what are we actually getting out of it. The problem was set out and stuff like that. Are that the class used the class the subclass is actually being called out to buy a base class and that causes some real world kind of questions about things that just don't seem to be present here. 
Yeah, that's it.
+BFS: So I think one of the key takeaways that's being said is we don't actually want to necessarily discourage subclassing as a whole category of programming behavior here. The things that we've discussed in past meetings about what the problems with subclassing are, are specific, like WH and MM said. The use case here for Temporal is not actually having any sort of crosstalk or specialized behavior where a shared prototype would act differently than having an object. But, ergonomics-wise, it would be much simpler to use a class, and if we do ban the ability to use a class it would behoove us to figure out what we are actually getting out of it. The problems we set out, and stuff like that, were that the subclass is actually being called out to by a base class, and that causes some real-world kinds of questions about things that just don't seem to be present here. Yeah, that's it.

-WH: I don't think anybody's asking for a ban on subclassing.
+WH: I don't think anybody's asking for a ban on subclassing.

-DE: Yeah, thanks for your comment BFS. I agree.
+DE: Yeah, thanks for your comment BFS. I agree.

MM: I would also want to avoid ever communicating the message that packaging up something as a class encourages subclassing. I think we should be actively opposed to that message.

-SYG: Yeah, that's basically what I was also going to say, I think it is. There is quite a bit of nuance between— there is quite a big difference between something being a class and implying that you are supposed to subclass it. I think that that is a dangerous position to accept and to take that as a design direction for built-ins and in particular I think my understanding of the unsavoriness of the “self-sends” thing is in particular not that— Let me try to articulate this better.
So when you subclass something in languages where there is a distinction between virtual and non-virtual methods like C++ where you have to say if something is virtual and overridable or not when you subclass something, you are still getting some value out of the act of subclassing even if not everything is virtual. The virtual thing is that you as a class designer can explicitly say, these are overridable hookable behaviors in JavaScript. We don't have that direct equivalent, we can factor out methods to be free functions that then never get exposed on the on the object as properties, that you can call in a way to not make every method virtual, but if you have it as a property on an object as a method it's by default virtual. So in JavaScript it so happens that it is conflated that when your subclass something everything is virtual and hookable and we don't have a good way to prevent that with a redesigned. which, so that is unfortunately a non-starter, but it should look like in terms of Good Ol' OO design. I agree with WH. I think we should be careful here. That's just because the language itself has completed everything to be virtual with subclassing doesn't mean that we are encouraging for you to override every single override of the whole thing just because something is subclassable. Does that make sense JHD? +SYG: Yeah, that's basically what I was also going to say, I think it is. There is quite a bit of nuance between— there is quite a big difference between something being a class and implying that you are supposed to subclass it. I think that that is a dangerous position to accept and to take that as a design direction for built-ins and in particular I think my understanding of the unsavoriness of the “self-sends” thing is in particular not that— Let me try to articulate this better. 
So when you subclass something in languages where there is a distinction between virtual and non-virtual methods, like C++, where you have to say whether something is virtual and overridable when you subclass it, you are still getting some value out of the act of subclassing even if not everything is virtual. The value is that you as a class designer can explicitly say: these are the overridable, hookable behaviors. In JavaScript we don't have that direct equivalent; we can factor out methods to be free functions that never get exposed on the object as properties, which you can call in a way that doesn't make every method virtual, but if you have it on an object as a method property, it's virtual by default. So in JavaScript it so happens that when you subclass something, everything is virtual and hookable, and we don't have a good way to prevent that short of a redesign, which is unfortunately a non-starter. But in terms of good ol' OO design, I agree with WH; I think we should be careful here. Just because the language itself has conflated everything to be virtual with subclassing doesn't mean that we are encouraging you to override every single method just because something is subclassable. Does that make sense, JHD?

-JHD: Yeah, I mean, I understand that. There's some strong push back to my implication that saying that making it a class is telling people to subclass it. I hear that.
+JHD: Yeah, I mean, I understand that. There's some strong push back to my implication that making it a class is telling people to subclass it. I hear that.

DE: It would be really important for us to redesign JavaScript, if we could, to have C++ namespace-scoped resolution of symbols. Okay, keep going. It was a joke.
@@ -675,9 +678,9 @@ JHD: I hope it's not surprising that "use not-JavaScript" is not an acceptable a DE: Sure, but I don't know why you need to validate whether something conforms to the protocol, because something could have functions in those properties, but they might not conform to the protocol because it may not fulfill the contract that those functions have, which is in the documentation, right? -JHD: My question is not how do I ensure my users gave me an object that conforms to the contract. That I agree is a different question. My question is how do I give them a meaningful error message if they gave me wildly the wrong type of thing? They just give me an empty object, that is very different than if they gave me an ill-formed calendar. How do I do that? That's the question and I think it's reasonable that there be a JavaScript answer for that. +JHD: My question is not how do I ensure my users gave me an object that conforms to the contract. That I agree is a different question. My question is how do I give them a meaningful error message if they gave me wildly the wrong type of thing? They just give me an empty object, that is very different than if they gave me an ill-formed calendar. How do I do that? That's the question and I think it's reasonable that there be a JavaScript answer for that. -AKI: We have a lot of replies to this on the queue. +AKI: We have a lot of replies to this on the queue. WH: If it walks like a duck and quacks like a duck, pretend it's a duck. @@ -685,17 +688,17 @@ JHD: So then I'm checking how many calendar or time zone methods? That's the adv WH: Why are you bothering to check that? -JHD: So I can give my users meaningful error messages when they give me the wrong category of thing. +JHD: So I can give my users meaningful error messages when they give me the wrong category of thing. -WH: A user should get an error message if you try to call one of the methods which doesn't exist. 
So I don't see why you're checking all of them ahead of time.

+WH: A user should get an error message if you try to call one of the methods which doesn't exist. So I don't see why you're checking all of them ahead of time.

-JHD: So that's true, they would, but because if I'm showing the date later or something, they might not get an error at the time. They give me the object. So it's helpful to eagerly and rapidly provide errors to users at the boundaries of my API, which is why I try and engineer all my APIs to do that. 

+JHD: So that's true, they would, but if I'm showing the date later or something, they might not get an error at the time they give me the object. So it's helpful to eagerly and rapidly provide errors to users at the boundaries of my API, which is why I try to engineer all my APIs to do that.

WH: That's not something which fits well with ECMAScript. [Let's move to the next person in the interest of time.]

AKI: Next up is BFS.

-BFS: Yeah, just to be clear. We don't provide this kind of stuff on any of the other things and ECMA-402 or ECMA-262 like Proxy. We historically with precedent tell people to check the types of all the components of any value. So it may be considered unacceptable currently, but that is the precedent, that we have said we don't really have a way to duck type provided for all the things that we do actually check Built-in. If the request is we need duck typing for all these values that ECMAScript can consume, I think that's a very different conversation than this. 

+BFS: Yeah, just to be clear: we don't provide this kind of stuff on any of the other things in ECMA-402 or ECMA-262, like Proxy. We historically, as precedent, tell people to check the types of all the components of any value. So it may be considered unacceptable currently, but that is the precedent; we have said we don't really have a way to duck type, provided for all the things that we do actually check built-in.
If the request is we need duck typing for all these values that ECMAScript can consume, I think that's a very different conversation than this.

AKI: We now have SYG up.

@@ -707,21 +710,20 @@ SYG: But presumably that generalizes to other kind of data than count right and

JHD: I am eagerly checking all those. For anything that isn't brand checkable (brand checks being a different proposal), I do indeed have to duck type a long list of properties and/or methods. And I do that, so I'm sort of hoping for a better solution than just hard-coding a list of N methods and checking that they're all there and are functions, which certainly is an option. It's just a very unergonomic one.

-DE: I want to say if we add something like this Map.isMap, Type.isType convention, then Temporal.Calendar.isCalendar would check not whether something conforms to the calendar protocol, but whether it's a built-in calendar instance. I think that that's the natural semantics because protocols just don't have a brand. That's that's how I see protocols and how it seems like many of the participants in the discussion do. You've seen them or maybe further protocols the wrong word like the bag of behavior. 

+DE: I want to say, if we add something like this Map.isMap, Type.isType convention, then Temporal.Calendar.isCalendar would check not whether something conforms to the calendar protocol, but whether it's a built-in calendar instance. I think that that's the natural semantics, because protocols just don't have a brand. That's how I see protocols, and how it seems like many of the participants in the discussion do; or maybe "protocol" is the wrong word, like the bag of behavior.

JHD: So I believe the current design stores the calendar object or time zone object in an internal slot and then observably looks at methods on it every time it needs to do that. It seems to me if this is an object bag that we would be eagerly extracting functions and storing those and calling them later.
Well, because, except for the proxy Handler pattern where we do this, we don't do this. I believe the options objects are not stored anywhere else in 262 or 402. The closest thing is the proxy Handler object, but in Temporal—

BFS: There are some other ones.

-DE: For built-in Calendar and TimeZone, presumably we're not going to make a separate function identity for each of the thousands of time zones or hundreds. I can't remember how many there are. We would reuse the same function identity and that means that there has to be some internal state stored somewhere, by having an object that gives it a container for that state. So if we pulled the methods off it wouldn't work. 

+DE: For built-in Calendar and TimeZone, presumably we're not going to make a separate function identity for each of the thousands of time zones (or hundreds; I can't remember how many there are). We would reuse the same function identity, and that means that there has to be some internal state stored somewhere; having an object gives it a container for that state. So if we pulled the methods off, it wouldn't work.

-AKI: All right. I think we got some healthy conversation out of this. There we went way over the time box. 

+AKI: All right. I think we got some healthy conversation out of this. We went way over the time box, though.

DE: So we've heard a bunch of interesting feedback and I think the next step for the Temporal proposal will probably be to leave it as-is, given the balance of the committee’s feedback. Not drawing or asking for consensus on any strong conclusion about how protocols are done in general, but it sounds like Temporal probably shouldn't change at the moment. So please be in touch on the issues if you have any more thoughts, or join the weekly Temporal calls that continue to happen.

AKI: Thank you for your time, Daniel. Thank you for the conversation everyone.
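The eager boundary check JHD describes could be sketched like this (hypothetical helper and method names, not Temporal's actual protocol):

```javascript
// Hypothetical sketch of eagerly duck-typing a user-supplied
// calendar-like object at an API boundary, so a wildly wrong argument
// (say, an empty object) fails fast with a descriptive TypeError
// instead of failing later when a date is displayed. The method list
// is illustrative, not Temporal's real protocol.

const CALENDAR_METHODS = ["year", "month", "day", "dateAdd"];

function assertCalendarLike(obj) {
  if (obj === null || typeof obj !== "object") {
    throw new TypeError(`expected a calendar-like object, got ${typeof obj}`);
  }
  const missing = CALENDAR_METHODS.filter((m) => typeof obj[m] !== "function");
  if (missing.length > 0) {
    throw new TypeError(`calendar-like object is missing: ${missing.join(", ")}`);
  }
  return obj;
}
```

This is exactly the "hard coding a list of N methods" option JHD calls unergonomic; the discussion above is about whether the language should offer something better.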
### Conclusion/Resolution -Temporal remains with current semantics; future cases to be discussed as they arise. - +Temporal remains with current semantics; future cases to be discussed as they arise. diff --git a/meetings/2021-03/mar-10.md b/meetings/2021-03/mar-10.md index 131bdb79..9889f10d 100644 --- a/meetings/2021-03/mar-10.md +++ b/meetings/2021-03/mar-10.md @@ -1,12 +1,13 @@ # 10 March, 2021 Meeting Notes + ----- Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. You can find Abbreviations in delegates.txt -**In-person attendees:** +**In-person attendees:** -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Robin Ricard | RRD | Bloomberg | @@ -34,8 +35,8 @@ You can find Abbreviations in delegates.txt | Tab Atkins | TAB | Google | | Daniel Rosenwasser | DRR | Microsoft | - ## RegExp set notation: Update + Presenter: Mathias Bynens (MB) - [proposal](https://github.com/tc39/proposal-regexp-set-notation) @@ -51,7 +52,7 @@ MB: and now I want to talk about what you see there on the right hand side, whic MWS: We really want to invite people to give us feedback just on the square brackets that make it's just a prefix really that looks new rather than making the whole thing with the curly braces new, but we understand that all the other backslash something has curly braces to modify are attractive probably because it really just then looks like character classes and if you have two or more in the regular expression than you don't have to prefix each one with something special and make it longer that way. So we would really like some feedback on where to go with that. We could pretty much go anywhere. 
-MWS: Okay, and then we had a lot of discussions, we tried to sort of have discussion separately about what to do about various things with the syntax and they ended up being interconnected. That's why eventually we sort of ended up writing a sort of complete proposal that covers multiple issues. One of the things is whether to use single or double punctuation. And in general I want to say that a good number of regular expression engines and related implementation support one or more of these operators, but they do it very differently. So there is not really a standard to follow and so we tried to look at existing practice, but also tried to look at whether we think that the expressions are readable, understandable, make sense, are sort of visually distinct and stuff like that. 

+MWS: Okay. And then we had a lot of discussions; we tried to have discussions separately about what to do about various things with the syntax, and they ended up being interconnected. That's why eventually we ended up writing a complete proposal that covers multiple issues. One of the things is whether to use single or double punctuation. In general I want to say that a good number of regular expression engines and related implementations support one or more of these operators, but they do it very differently. So there is not really a standard to follow, and so we tried to look at existing practice, but also tried to look at whether we think that the expressions are readable, understandable, make sense, are visually distinct, and so on.

MWS: We think that, for the operator set we proposed, the intersection and subtraction operators, it's best to have double punctuation. It's visually distinct. It also helps with the dashes, because the dash already has two other meanings, as a literal and as a range syntax character.
And we also think that we want to keep the door open for future syntax extensions; we are proposing to reserve double punctuation from the ASCII range for that, and if future syntax then uses one of those combinations, it would look odd if the initial batch of operators was using single symbols. That covers, along with the choices for the prefix (the compatibility marker, namely the backslash UniSet), what this looks like. That is issue number 4.

@@ -59,7 +60,7 @@ MWS: We also have an issue 12 that has a link to a temporary document for now wi

MWS: Next, we had a lot of discussion about operator precedence; just among five people it was surprisingly contentious. What we ended up with is sort of the safest option, I think. Basically, when you have multiple operators, just like in arithmetic, you have to define in what order the operations get performed, and again there is no standard for how to do this; there appears to not even be a real standard for how to do this in math set notation or in Boolean algebra, as far as I could find. So we basically punt on the issue and require brackets when different types of operators (union versus intersection versus subtraction) get intermixed, so that you have to be explicit about the groupings and not have them implied by operator precedence. So it would be a syntax error if you have things that are, basically, in the second part of the slide here, missing the orange-marked-up square brackets. That's issue #6 in the GitHub issue list. We have a number of options listed there; if you disagree with our choices, please chime in, but we think that this is a good starting point, and if later people agree on what's better practice then we could loosen that up even after the fact, because this is sort of the strictest option.

-MWS: Moving on, we had also a goal or task to talk about how the set operations would work with properties of strings. They do make sense for that too.
For example, you might want to look at just the basic emoji that are also regular symbols. And in that case the symbol properties are only Single Character. So the intersection also ends up being only single characters. The second is a bit different. It's basically all of the Emoji recommended for General interchange. That's what the `RGI` stands for in the prefix of that property, except maybe you don't want the country flags. And so the second example would match most of the Emoji, but it wouldn't match the flag of France or the flag of German or whatever. Of course, you can't reasonably ask (?). We think it doesn't make sense to have a complement of a set that contains strings and we came up with a validation mechanism to check for that just based on metadata, which means that the validator for a regular expression already has to know how to basically have a dictionary of which names of properties are valid, so it would need one extra bit of information for each of the valid property names and that is whether the property can contain strings. So in the third example here, that would be a union of emoji keycap sequence which has something like a dozen multi character strings, and the Union of that with a symbol and then putting a compliment on the whole thing and that would be a syntax error straight in the parsing. On the other hand if you intersect the two, and in this case it's kind of boring because the intersection is actually empty. But because we can prove from an intersection that if we intersect the property that can have strings with the properties that cannot have strings only have characters the whole result must at most contain single characters and so a complement is valid in this case. This is simply all Unicode code points. +MWS: Moving on, we had also a goal or task to talk about how the set operations would work with properties of strings. They do make sense for that too. 
For example, you might want to look at just the basic emoji that are also regular symbols, and in that case the Symbol property contains only single characters, so the intersection also ends up being only single characters. The second is a bit different: it's basically all of the emoji recommended for general interchange; that's what the `RGI` stands for in the prefix of that property. Except maybe you don't want the country flags, and so the second example would match most of the emoji, but it wouldn't match the flag of France or the flag of Germany or whatever. Of course, you can't reasonably ask (?). We think it doesn't make sense to have a complement of a set that contains strings, and we came up with a validation mechanism to check for that just based on metadata. That means the validator for a regular expression already has to have a dictionary of which names of properties are valid, so it would need one extra bit of information for each of the valid property names: whether the property can contain strings. So in the third example here, there is a union of emoji keycap sequence, which has something like a dozen multi-character strings, with Symbol, and then a complement on the whole thing, and that would be a syntax error straight in the parsing. On the other hand, if you intersect the two (in this case it's kind of boring because the intersection is actually empty), we can prove from the intersection that if we intersect a property that can have strings with a property that can only have single characters, the whole result must at most contain single characters, and so a complement is valid in this case. This is simply all Unicode code points.

MWS: on the proposal page on GitHub.
We also have a nice example of making a regular expression that recognizes hashtags, and the goal is to recognize not just ones that are sort of syntactically valid but also ones that contain real characters, including real and recommended emoji. For that you have to have these properties, basically, because there are thousands of characters and strings that are RGI emoji, and you would want to do a union with id_continue, which is the identifier character set, but then have exceptions specifically for hashtags. So you really would want to have properties and make additions and exceptions to that. And this is an example of what this would look like. In this case I chose to have two versions of the example, where one is using the backslash UniSet and the other one is using the modifier-type syntax.

@@ -67,7 +68,7 @@ MWS: This slide is covered by issue 3. We have a couple more issues that are a l

MB: Yeah, at this point I think we can look at the queue if there is any feedback already. Yeah, let's start it there.

-MM: Yeah, so I don't have any feedback on the specifics of this proposal, just in general on my reaction to adding more syntax to the regex syntax. The whole regex thing is sort of this little language within JavaScript that just keeps expanding and keeps getting more syntax over time and doesn't seem to be converging. And some of the things that I'm especially worried about with regard to the regex syntax just growing and increasing complexity over time is, it's already very difficult to tokenize, to lex, JavaScript. I would really like to see if there's some guidance on a way to tokenize a regex as a whole within the context of JavaScript that's stable such that all of these enhancements to the regex syntax don't need changes in the issue of how to tell when a regex as a whole is over so that you can tokenize regex with a stable pattern without having to constantly upgrade that.
The other related issue is, there is this proposal which I put a link to in my question the regex make a template literal tag from Mike Samuel for doing safe interpolation. Each of these syntactic enhancements, many of them also create new syntactic contexts that - would they need to do context dependent escaping differently from existing syntactic contexts. So like I said, all of these are just issues in general with any proposal that adds more syntax to regex, but this is a fine one in which to raise those questions.

+MM: Yeah, so I don't have any feedback on the specifics of this proposal, just a general reaction to adding more syntax to regexes. The whole regex thing is sort of this little language within JavaScript that just keeps expanding and getting more syntax over time and doesn't seem to be converging. One of the things that I'm especially worried about, with regard to the regex syntax growing and increasing in complexity over time, is that it's already very difficult to tokenize, to lex, JavaScript. I would really like to see some guidance on a way to tokenize a regex as a whole within the context of JavaScript that's stable, such that all of these enhancements to the regex syntax don't require changes to how you tell when a regex as a whole is over, so that you can tokenize regexes with a stable pattern without having to constantly upgrade it. The other related issue is, there is this proposal, which I put a link to in my question, the regex template literal tag from Mike Samuel, for doing safe interpolation. Each of these syntactic enhancements, many of them, also creates new syntactic contexts; would they need to do context-dependent escaping differently from existing syntactic contexts? So like I said, all of these are just issues in general with any proposal that adds more syntax to regex, but this is a fine one in which to raise those questions.
LEO: Mark, there are some constraints here, but I just wanted to mention that I understand your concerns about us continuing to add things to regular expressions. At the same time, as the proposal says, there is precedent for these features in other regular expression dialects, mostly Perl and Python regular expressions. I could say I'm some sort of enthusiast of exploring regular expressions and what can be done there. Yes, sometimes it looks wild, but I would like for regular expressions here to have the features that are available in Python regular expressions, as closely as can be achieved. So I appreciate this proposal for doing that without creating something that is entirely new in the regular expression space.

@@ -81,7 +82,7 @@ WH: I have a better answer for Mark. One of Mark's concerns is changes to lexing

MM: Excellent news! Thank you.

-KG: Yes, just make I wanted to remind you that the things can be added to the regex grammar without making lexing JavaScript more complicated because there's two grammars for regular expressions. There's the one that is used when lexing JavaScript and then there is a second one that is used to refine the regular expressions. It's not necessarily the case that all changes can be made without changes to that first grammar, but certainly many changes can be made and I feel like that obviates much of your concern. 

+KG: Yes, I just wanted to remind you that things can be added to the regex grammar without making lexing JavaScript more complicated, because there are two grammars for regular expressions: the one that is used when lexing JavaScript, and then a second one that is used to define the regular expressions themselves. It's not necessarily the case that all changes can be made without changes to that first grammar, but certainly many changes can be made, and I feel like that obviates much of your concern.
MM: Yes, and it sounds like, together with Waldemar’s answer, that this proposal in particular does not change the first grammar, which is great. That was my concern.

@@ -105,15 +106,16 @@ MB: I think WH is talking about the general case, not this particular example pe

WH: I'm talking about the general case. You have to subtract the RGI emoji sequence from something a bit different there. I'm also noting that emoji flag sequences are not self-synchronizing, meaning that if you have two flags in a row then RGI emoji flag sequence might match their characters at offsets zero and two, but it also might match a misaligned one at offset one, so you'll get a lot of nonsense behaviors. Like, if you have two US flags in a row then you might also match the Soviet Union flag, because the US flag repeated twice is encoded as emoji flag characters USUS, but the Soviet Union flag is SU, so it will match it in the middle. It's a mess.

-RPR: Okay, and also how this can be cleared our offline because we're at the end of the time box name. I think this Just a an update, right? There's no request for stage advancement. 

+RPR: Okay, and this can also be cleared up offline, because we're at the end of the time box. I think this is just an update, right? There's no request for stage advancement.

MB: That's right. We're hoping to get feedback on the issues on GitHub; people can post there. Yeah, let us know what you think. Please participate on GitHub, and for these specific issues that came up, it would be great if we could continue that on GitHub; that sounds like it might be easier.
### Conclusion/Resolution -No changes, was not seeking any +No changes, was not seeking any ## Error.prototype.cause for stage 3 + Presenter: Chengzhong Wu - [proposal](https://github.com/tc39/proposal-error-cause) @@ -154,6 +156,7 @@ CZW: There hasn't been any action towards the change, but since we reached stage Stage 3 ## Promise.anySettled + Presenter: Mathias Bynens (MB) - [proposal](https://github.com/tc39/ecma262/pull/2226) @@ -185,7 +188,7 @@ JHD: Around web compat, core-js confirmed and I know that none of the es-shims a JRL: About the rename and making anySettled be the standard name, which means Promise.race’s name would be anySettled. AMP definitely depends on the function name being race because we export it as part of an object and we use the function name for bad reasons. I don't agree that the web compat risk is minimal. I think there's actually a big risk with renaming functions to be something different. -MM: Okay, so I am against this on several grounds. First of all programs are read much more than they are written and we should always be first sensitive to the complexity imposed on readers and having an alias means that there's now two different names that readers can encounter for the same operation and it's not like the new one being better means we don't need to learn the old one it now means that you need to learn both if you don't read people's code. I chose the name race originally when this operation is original appeared in the E language I did it specifically because most promise operations, most asynchronous operations have a property called success confluence, which means that in the absence of thrown errors or rejected promises in case that all things succeed that most promise operations are insensitive to the order in which they succeed. There's this nice order insensitivity of most operations. 
Race specifically, its inherent in its nature that it's introducing a race condition that it violates success confluence and the name was chosen to emphasize that to readers. The name anySettled hides that. So on all of these grounds I think this just just does not pay for itself and makes things harder for code readers in an unnecessary way.

+MM: Okay, so I am against this on several grounds. First of all, programs are read much more than they are written, and we should always be sensitive first to the complexity imposed on readers; having an alias means that there are now two different names that readers can encounter for the same operation, and it's not that the new one being better means we don't need to learn the old one; it now means that you need to learn both in order to read people's code. I chose the name race originally, when this operation first appeared in the E language, specifically because most promise operations, most asynchronous operations, have a property called success confluence, which means that in the absence of thrown errors or rejected promises, in the case that all things succeed, most promise operations are insensitive to the order in which they succeed. There's this nice order insensitivity of most operations. Race specifically, it's inherent in its nature that it introduces a race condition, that it violates success confluence, and the name was chosen to emphasize that to readers. The name anySettled hides that. So on all of these grounds I think this just does not pay for itself and makes things harder for code readers in an unnecessary way.

KG: What was that name of the property you were describing?

@@ -201,10 +204,12 @@ MM: No, but I think that what it conveys clearly violates success confluence. So

MB: I think it's clear there’s no consensus on this. Thanks everyone. I learned a new term today, "success confluence". So I still call this a success. Thanks for all of your input.
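For context, the behavior under discussion (a sketch; the helper names are invented): `Promise.race` settles with whichever input settles first, fulfilled or rejected, which is the order sensitivity MM describes as violating success confluence.

```javascript
// Promise.race adopts the first input to settle, whether it fulfills
// or rejects. The outcome depends on timing, unlike order-insensitive
// combinators such as Promise.all over all-fulfilled inputs.

const later = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function demo() {
  // The 10 ms promise wins the race against the 50 ms one.
  const winner = await Promise.race([later(50, "slow"), later(10, "fast")]);

  // A rejection can win too: race does not wait for a fulfillment.
  const outcome = await Promise.race([
    later(50, "slow"),
    Promise.reject(new Error("boom")),
  ]).then(
    (v) => `fulfilled: ${v}`,
    (e) => `rejected: ${e.message}`
  );

  return { winner, outcome };
}
```

`demo()` resolves to `{ winner: "fast", outcome: "rejected: boom" }`.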
-
### Conclusion/Resolution

-* Does not advance.
+
+- Does not advance.
+

## Array find from last
+
Presenter: Wenlu Wang (KWL)

- [proposal](https://github.com/tc39/proposal-array-find-from-last)

@@ -212,7 +217,7 @@ Presenter: Wenlu Wang (KWL)

KWL: After the last meeting we drafted a specification and did some compatibility investigation and found no issues yet, so we are trying to push it to stage 2. We can look at a simple polyfill: it's basically the same as find and findIndex but in reversed order. I did a shallow compatibility check with well-known libraries, web APIs, and well-known repos on GitHub; nothing seems to conflict with the proposal. Existing implementations such as lodash and ramda are basically compatible with this proposal: they take a callback with item and index arguments and return an item or its index.

-KWL: We can see the specification of findLast which is basically the same as find. And this is the specification or findLastIndex, which is also the same as findIndex. And there are some minor changes that were not included in the slides and we can see them in the specification on GitHub. Thanks. 

+KWL: We can see the specification of findLast, which is basically the same as find. And this is the specification for findLastIndex, which is also the same as findIndex. There are some minor changes that were not included in the slides; we can see them in the specification on GitHub. Thanks.

RPR: Queue is empty.

@@ -240,10 +245,9 @@ Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-resizablearraybuffer)
- [slides](https://docs.google.com/presentation/d/1bpXftITzcZQpqBqtVGFiwgWL7WAqEo4ru4GwZaWYzcM/edit)

+SYG: So this is the resizable array buffer and growable shared array buffer proposal. I would like to ask for stage 3 at the end of this presentation, but there are some changes and there might be concerns.

-SYG: So this is the resizable array buffer
I would like to ask for stage 3 at the end of this presentation, but there are some changes and there and there might be concerns.
-

-SYG: So the high level changes that have been made since I presented last time are on this slide. Number one is the simple rename the spec draft had the global shared write buffer resize method named resize I changed it to growable because unlike resizable array buffers the global shared array buffers cannot shrink because we don't know how to make concurrency work if the buffer can shrink so it can only grow to reflect that. I have renamed it to just grow.

+SYG: So the high-level changes that have been made since I presented last time are on this slide. Number one is a simple rename: the spec draft had the growable shared array buffer's resize method named `resize`. Unlike resizable array buffers, growable shared array buffers cannot shrink, because we don't know how to make concurrency work if the buffer can shrink; it can only grow. To reflect that, I have renamed the method to just `grow`.

SYG: Number two: last time I presented this, we talked about two ideas to allow implementation-defined rounding up of the max byteLength and the requested byteLength during resizes and during the initial construction. The decision, or the champion's preference rather, I should say, is to allow implementation-defined rounding up of the max byteLength, but not to allow any rounding of the requested byteLength. I've talked to a few folks about this and I think there's some discussion to be had there as well on how implementation-defined we want this rounding of the max byteLength to be?
Number three: one of the earlier action items identified when I presented this was that delegates wanted to hear about some user and implementation guidance, I think along the same lines as what we did for shared array buffers and shared memory, to try, in a non-normative way, to cut down on the interoperability risk. Being a low-level proposal, there's a lot of latitude here given to implementations, which could make interoperability worse. To that end, instead of normatively requiring that all implementations have the same behavior, we're going to provide some guidance or suggestions for what implementations ought to do, and I'll highlight that in the presentation as well.

@@ -253,7 +257,7 @@ SYG: so first up a recap for the round up of the max byte size. The idea is that

SYG: But this is too surprising, and Bradley last time gave some very compelling examples, which are captured on that GitHub thread, on why this is a bad idea and would be a very bad surprise for programmers. So this is not going to be allowed. So that was the change about the rounding.

-SYG: next up. I have written a draft of user guidance for how we expect users to use this successfully, to not run into interop issues. One bit of non-normative guidance would say to please test in your deployment environment; notably mobile differs very much from desktop. The number of bits that you have on your architecture matters very much because it's really it means a very big difference in your virtual address space a 32-bit max is, you know, 4GB, and 64 bit depending on how many bits your architecture uses for virtual memory could be least hundreds of terabytes to petabytes. But OSes may differ here as well. But in general we're going to say something like don't test on 64 bit and just assuming it will work on 32-bit. For the max size always use as small a max size as you can get away with preferably less than 1 Gigabyte even on 64-bit memories.
And to specifically call out the fact that resizes can fail even if the requested size is less than the max. The fact that you successfully constructed a buffer with a max size does not guarantee that future resizes will always succeed and indeed if that is the behavior you're looking for you should just allocate the entire buffer up front. The implementation guidance is to call out that the specification that the design of this feature allows for implementation both as copying or reserving memory virtual memory upfront or as a combination of both. So by combination of both I mean that for example for very small buffers, maybe you don't want to round up to a page and reserve that page you just want to directly malloc to memory and then upon resizing you would copy it and for larger max sizes you might choose the reservation technique, but the API allows the virtual memory reservation technique to always be used in case the security property of having a non moving data pointer is important, for instance. The recommendation is that for multi-tenant hosts, by which I really just mean I think web browsers, but you can also imagine if you were hosting by web browsers and node where you are running possibly multiple applications at the same time, if you are a multi-tenanted host and you have a virtual memory subsystem it is recommended that those hosts implement as reserve instead of copy And on top of that it is recommended to limit the max size to be allowed to 1 to 1.5 gigabytes, even on 64 bit, to kind of cut down on the risk of virtual memory being exhausted. On hosts without virtual memory, so this will be systems like moddable and embedded systems, if they don't have virtual memory at all, they can just ignore the max but should still throw a range error if the max size can never be allocated. So that was for resizable array buffers, the non-shared buffers. 
For growable shared array buffers hosts that have virtual memory are recommended to implement the growing of the global shared buffers find reserving virtual memory. I think it is possible to copy, but that basically implies you have to stop the world of all your threads and to kind of Make them pause and make the copy then resume all the threads. This is extremely slow serialization and I cannot recommend anybody do that. So just don't do that. And hosts that again do not have virtual memory, but do have shared memory. I actually don't know what those systems are. But suppose you were such a system you can ignore the max always throw and grow and communicate this clearly to developers. So if people have feedback on that, please do so on the GitHub thread.
+SYG: Next up: I have written a draft of user guidance for how we expect users to use this successfully and not run into interop issues. One bit of non-normative guidance would say to please test in your deployment environment; notably, mobile differs very much from desktop. The number of bits that you have on your architecture matters very much, because it means a very big difference in your virtual address space: a 32-bit max is, you know, 4GB, and 64-bit, depending on how many bits your architecture uses for virtual memory, could be at least hundreds of terabytes to petabytes. But OSes may differ here as well. In general we're going to say something like: don't test on 64-bit and just assume it will work on 32-bit. For the max size, always use as small a max size as you can get away with, preferably less than 1 gigabyte even on 64-bit machines. And we specifically call out the fact that resizes can fail even if the requested size is less than the max. The fact that you successfully constructed a buffer with a max size does not guarantee that future resizes will always succeed; indeed, if that is the behavior you're looking for, you should just allocate the entire buffer up front.
The implementation guidance calls out that the design of this feature allows for implementation either as copying, as reserving virtual memory upfront, or as a combination of both. By a combination of both I mean that, for example, for very small buffers maybe you don't want to round up to a page and reserve that page; you just want to directly malloc the memory, and then upon resizing you would copy it. For larger max sizes you might choose the reservation technique. But the API allows the virtual memory reservation technique to always be used, in case the security property of having a non-moving data pointer is important, for instance. The recommendation is that for multi-tenant hosts - by which I really just mean web browsers, but you can also imagine hosts beyond web browsers, like node, where you are possibly running multiple applications at the same time - if you are a multi-tenant host and you have a virtual memory subsystem, it is recommended that you implement resize as reserve instead of copy. On top of that, it is recommended to limit the allowed max size to 1 to 1.5 gigabytes, even on 64-bit, to cut down on the risk of virtual memory being exhausted. On hosts without virtual memory - so this will be systems like Moddable and embedded systems - if they don't have virtual memory at all, they can just ignore the max, but should still throw a RangeError if the max size can never be allocated. So that was for resizable array buffers, the non-shared buffers. For growable shared array buffers, hosts that have virtual memory are recommended to implement the growing of the growable shared buffers by reserving virtual memory. I think it is possible to copy, but that basically implies you have to stop the world: pause all your threads, make the copy, then resume all the threads. This is extremely slow serialization and I cannot recommend anybody do that. So just don't do that.
And hosts that again do not have virtual memory but do have shared memory - I actually don't know what those systems are, but suppose you were such a system - you can ignore the max, or always throw on grow, and communicate this clearly to developers. So if people have feedback on that, please do so on the GitHub thread.

SYG: The final change is the nailing down of the memory ordering constraints for growing these growable shared array buffers. The high level is basically this - this is a pretty arcane corner of the spec, so it's certainly understandable if you don't fully grok all the details here. I should preface all of this by saying that the byte length itself is now shared, because of course threads have access to the same shared array buffer and they can observe the byte length of the underlying block. So the byte length itself is a shared thing, which is why the memory order here matters at all. The idea is that when you do a grow, it mutates the byteLength with a sequentially consistent access. Explicit length accesses, via things like the byteLength accessor and the length getter from typed arrays, are also sequentially consistent on the byte length. The higher-level methods like SharedArrayBuffer.prototype.slice are also sequentially consistent on the byte length. Notably, what are not sequentially consistent are the implicit bounds checks in indexed accesses. So whether you're using computed properties like [foo] or you're using Atomics.store, all those accesses on the byte length are unordered. And this is important for two reasons. One, it gives much more latitude to compilers to optimize those bounds checks if we said that all bounds checks were sequentially consistent accesses on the byte length.
That means that possibly all indexed accesses synchronize with other indexed accesses on the byte length in other threads, and this means things like: we cannot hoist the bounds checks, which would be bad for performance. The second reason is that - I don't think the JS side does this, but some wasm implementations do tricks to do the bounds check where they don't actually do a comparison: they guard pages on either side of, like, a four-gig cage and then use signal handlers to detect when an out-of-bounds access happens. That implementation technique would not be possible if we said that the byte length accesses had to be sequentially consistent. So for performance and implementation reasons, bounds checks aren't ordered; this aligns with WebAssembly and gives compilers more leeway. It does have some surprises, the main one being that if one thread grows the buffer and a second thread does not explicitly make sure that it sees the grown buffer - by first observing that the byte length has grown - it is possible that the second thread would not see the grown buffer with just an indexed access. This is deemed acceptable because it's good practice to explicitly synchronize on the grow anyway; you really should do that, by explicitly reading the length or by having another explicit event, so you establish a happens-before order between your thread and the thread that grows. That's basically what we have nailed down for the memory ordering constraints of growable shared array buffers.

@@ -271,7 +275,7 @@ JMJ: Yeah. Thank you.

JRL: So when you were explaining the round up for max byte length, it's a bit confusing to me why the Constructor is allowed to round up but the resize function is not allowed to round up.

-SYG: Sorry. This was perhaps unclear from the slides. It's not a question of which method is allowed to resize, it's about which size is allowed to be resized.
It's the only the max size is allowed to be resized and the max size is only a parameter to the constructor when you create the method. So in the second slide here, neither the 3000 in the Constructor, which is the initial size that's being requested, nor the resize requested size of 5000, neither of those are about to be resized.
+SYG: Sorry, this was perhaps unclear from the slides. It's not a question of which method is allowed to round, it's about which size is allowed to be rounded. It's only the max size that is allowed to be rounded, and the max size is only a parameter to the constructor, when you create the buffer. So in the second slide here, neither the 3000 in the constructor, which is the initial size that's being requested, nor the requested resize size of 5000, is allowed to be rounded.

JRL: So is this concerning max byte lengths at all? The previous slide is about max byte length. That's saying that you're allowed to round up. This slide is just using byte length. So maybe that's my confusion. We're not talking about max byte length here at all.

@@ -295,7 +299,7 @@ PHE: Before jumping into the naming and global thing. I did want to say that I t

SYG: Cool. Thanks for your feedback.

-DE: Yeah, I don't have a strong position on whether these are separate globals or not. I definitely see that there's cost in multiple environments to adding globals but on this specific point that Peter made about this separate constructor really being motivated by the web specific security things. I really disagree with that analysis. I think it's it's meaningful and a good design to not be increasing the expressiveness of array buffer and shared array buffer with this proposal. There's this core predictability property that you have right now that the size doesn't change and it makes sense to explicitly opt into that when you want to have an unpredictable, or a less stable size.
And it's not it's not Web specific, but it definitely the web has more precedent for adding globals in other environments to that is a thing that differs.
+DE: Yeah, I don't have a strong position on whether these are separate globals or not. I definitely see that there's cost in multiple environments to adding globals, but on this specific point that Peter made about this separate constructor really being motivated by web-specific security things, I really disagree with that analysis. I think it's meaningful and a good design to not be increasing the expressiveness of ArrayBuffer and SharedArrayBuffer with this proposal. There's this core predictability property that you have right now - that the size doesn't change - and it makes sense to explicitly opt in when you want to have an unpredictable, or less stable, size. And it's not web-specific, but the web definitely has more precedent for adding globals than other environments do, so that is a thing that differs.

PHE: Sorry, I was just stating my recollection of how we got there. If I got that wrong, I apologize. Your point is fair; again, I was just trying to recap what I recall of the process we had at the time.

@@ -305,7 +309,6 @@ SYG: I'm not going to die on the hill of where to put these.

[TCQ emoji time! - mostly indifferent, a few positive and a few negative]

-
SYG: OK, that's not enough signal to make a change. I don't really care which way we end up. I'll be happy to change to reusing the globals if there is a stronger signal.

YSV: Just to give a little bit of a signal from our side: we were discussing it a bit and we are not entirely convinced about reusing ArrayBuffer with a resizable static method. We prefer the separate global approach right now.

@@ -336,7 +339,7 @@ YSV: On the topic that I have on the queue I want to echo the request that I had

SYG: Let's talk about that now.
The pros of hard-coding a page size like wasm does are better interop, much more predictability, and less fingerprinting: you know that if it rounds, you're not going to get some arbitrary size; you're not going to get a variety of page sizes, you're going to get a single page size. The con of hard-coding a page size is that one size doesn't necessarily fit all for best memory consumption, especially since these buffers are a little bit different from wasm. Wasm linear memory is probably fine to always be aligned to such a large page, given how wasm is designed to be used - you know, running things that are compiled to it. Whereas these buffers are more flexible in what kind of binary data you want to put in them, and maybe you don't always want such a big page size as 64K. That is pretty big, right - most OSes use something like 4K pages - so I'm somewhat wary of stipulating that implementations don't get the leeway to choose the page size that makes the most sense for them. And the con of being able to choose your page size is that there's slightly more fingerprinting; I don't know how many bits of fingerprinting that is. So on one side there's more interop, and on the other there's more fingerprinting, right? So, yeah, definitely a trade-off. So I would like to hear from the JSC folks certainly on how they feel about hard-coding the page size.

-KM: I mean, I don't have any particularly strong opinions one way or the other. I do agree that it's possible that taking a page size of 64k is potentially too large for this. I mean I could see picking up smaller page size, but then you have the problem that you're different than wasm could cause other weirdness. If you geta buffer from out of wasm somehow and all the sudden you it's like different rounding if you put it into Wasm or something, so that's a bit weird. I don't know what we would do in terms of in our implementation, so I'd have to think about this more I guess.
+KM: I mean, I don't have any particularly strong opinions one way or the other. I do agree that it's possible that taking a page size of 64K is potentially too large for this. I mean, I could see picking a smaller page size, but then you have the problem that being different from wasm could cause other weirdness: if you get a buffer out of wasm somehow, all of a sudden there's different rounding if you put it into wasm or something, so that's a bit weird. I don't know what we would do in our implementation, so I'd have to think about this more, I guess.

MLS: I also share the kind of concern KM shares. I think the 64K is too big, but you have got to put some number on it. On our smaller devices, like a watch, 64K is probably just too prohibitive to round up to, especially in the cases where you only want like 4K or whatever.

@@ -373,6 +376,7 @@ WH: I'm really impressed by all the changes Shu has made here.

- where to put constructors

## Incubation call chartering
+
Presenter: Shu-yu Guo (SYG)

SYG: Due to the new tick-tock cadence we only had time to run two incubator calls since the last meeting, which were error cause and module blocks, which leaves one overflow from the last charter for lazy imports. I assume, Yulia, you would still like to have that call.

@@ -407,14 +411,14 @@ DE: I don't know if we want to consider these incubator calls or not, but I migh

SYG: I would prefer that, for those proposals where you want to have a call but you don't want to establish a regular call, you use the incubator call framework. The whole point is to have a regular time set aside for those who don't want a regular call just for themselves - they don't want the extra overhead of setting that up.

-DE: Yeah, great. Great. I'm happy to do that and I could be the one to make the doodles for those so that it's not all on Shu if that is so that helps.
+DE: Yeah, great. Great.
I'm happy to do that, and I could be the one to make the doodles for those so that it's not all on Shu, if that helps.

SYG: So we have five then: lazy imports, possibly regex set notation, resizable array buffer, module fragments, and pipeline. It's unlikely that we'll get through that entire charter before the next meeting, so we're probably going to try to shoot for three or four. Thank you.

-
### Conclusion/Resolution

chartered:
+
- lazy imports
- regex set notation, maybe
- resizable / growable shared array buffer
@@ -428,7 +432,7 @@ Presenter: Richard Gibson (RGN)

- [proposal](https://github.com/tc39/proposal-intl-segmenter)
- [slides](https://docs.google.com/presentation/d/1tkyQVE3o5qpbbJ39RidyZiy-r179RXraOKDeWLB5RB8)

-RGN: Okay, so I'm Richard Gibson. You might remember me from such hits as segment for two, segmenter for stage three, back to stage two, segments for stage 3 and segmenter to stage 3 again. We are here now with what was going to be a segment for stage 4 but is actually an update for reasons that I will get into later.
+RGN: Okay, so I'm Richard Gibson. You might remember me from such hits as Segmenter for stage 2, Segmenter for stage 3, back to stage 2, Segments for stage 3, and Segmenter to stage 3 again. We are here now with what was going to be Segmenter for stage 4, but is actually an update for reasons that I will get into later.

RGN: So first, I'll go into my normal spiel about what it is and why it matters. We've got this concept of string at a high level, and at a low level it's a sequence. At the lowest level we're talking about code units, and we don't actually do UTF-16, although we're close: ECMAScript has what's sometimes been called WTF-16, because strings can be potentially ill-formed where you have unpaired surrogates. Those compose into code points, which are the big Unicode 21-bit values representing distinct representable entities.
Visually, from a human perspective, those compose into graphemes, and grapheme clusters are what humans perceive as the characters. This is where composition matters, so you can have things like accents and other diacritical marks, and also the combining sequences that we talked about before, where two regional indicators form a single grapheme cluster that is a flag representing a country. And those compose into higher-level concepts as well: you can have words and separations of words, and then ultimately you have sentences as a really high-level textual concept.

@@ -436,7 +440,7 @@ RGN: So segmenter deals with the bottom three of those, the ones that are abstra

RGN: Since last time we met there have been some minor revisions throughout Intl itself, where we changed the way that options processing works and decided that the new way should apply to everything going forward, including Segmenter. Whereas previously the old APIs would coerce options into an object, the new ones, including Segmenter, now require that options either be undefined or already an object, so we don't get weird prototype-pollution kinds of behaviors.

-RGN: We also have a few open issues which are notionally open but not really from my perspective. One of them is the representation of the iterable reuse It was proposed that it aligned with Number.range, but that one is itself less mature and so it doesn't seem to be worth disrupting the already implemented status of segmenter.
The others were coming from the community where someone is observing weirdness around - [audio issues] - the other pair of issues are around how do you do with custom dictionaries if you're you're not happy with what the implementation has shipped and and with this one also, we're not intending to act on it, at least not right now; any issue that someone has is basically going to be - there's no way to deal with and we don't want to provide hooks inside of Segmenter you can just apply your own strategy for dealing with it, you know ship whatever you were going to ship. The algorithms in segmenter wouldn't help you anyway, if you're not happy with the implementation of the data.
+RGN: We also have a few open issues which are notionally open, but not really from my perspective. One of them is the representation of the iterable reuse: it was proposed that it align with Number.range, but that one is itself less mature, and so it doesn't seem to be worth disrupting the already-implemented status of Segmenter. The others were coming from the community, where someone is observing weirdness around - [audio issues] - and the other pair of issues are around how you deal with custom dictionaries if you're not happy with what the implementation has shipped. With this one also, we're not intending to act on it, at least not right now; for any issue someone has there, there's basically no way to deal with it, and we don't want to provide hooks inside of Segmenter. You can just apply your own strategy for dealing with it - you know, ship whatever you were going to ship. The algorithms in Segmenter wouldn't help you anyway if you're not happy with the implementation of the data.

RGN: So that brings us around to the stage advancement topic. We've had test262 tests for a while. They are passing in unflagged implementations in V8 and JavaScriptCore.
We've gotten feedback from them, and we've got the conforming spec text with all the conventions of 402 ready to go in a pull request. Last I checked we didn't have complete sign-off, because I was a little bit late in putting that up, but I don't think it's controversial; it matches what's been in the Segmenter proposal for a while.

@@ -452,7 +456,7 @@ YSV: So I just wanted to say thank you to Richard and everyone who's worked on t

RGN: Thank you, too. I picked this one up mid-stream, as I think most of the people know, and I've really appreciated working with the implementers on it. And also carrying on the torch of the original authors.

-AKI: Queue is empty.
+AKI: Queue is empty.

RGN: Well, thanks everyone, and look forward to a follow-up later this year when we actually propose it for real.

@@ -463,6 +467,7 @@ AKI: Well thank you so much for an update.

was not seeking advancement, awaiting future feedback from Mozilla re: implementing this using ICU4X

## Top-level await
+
Presenter: Guy Bedford (GB)

- [proposal](https://github.com/tc39/proposal-top-level-await/pull/159)

GB: Okay, so just to follow up on yesterday and to try and get into the specific

GB: To just illustrate the algorithm again, we start on the completion of async at the bottom. It checks its parents. The first parent is A. It checks if A is ready for execution, it checks index, it then checks X, and then it goes ahead and executes X. And that's the kind of recursive algorithm that's running up the tree, and that's what gets us the TLA execution order, which is perhaps unintuitive - it's certainly not the post-order execution order. And then to explain the PR which is fixing this behavior, which is on the next slide: basically we run the same kind of recursive algorithm, but instead we just gather an execution list of the modules that are synchronously ready to execute on completion of an async module. So we do the same thing.
When the async module finishes completing, we then run this gather-single-parent-completion method on its parents. So first on A: we look at its current index; index isn't ready to execute, X is ready to go, so we add it to the list and we've got an exec list of X. Then we fall back to the algorithm and rerun the same check again - is index ready at that point? Then we add B, and then we've got index, and so the exec list is X, B, index. And then finally we sort that exec list into the post-ordering and then execute it. And these are just the modules that we determined were ready to execute. So that's the gist of the algorithm. Hopefully that wasn't too much to digest in a short space of time, but hopefully that kind of illustrates it more visually. Any questions are very welcome.

-MM: So the different orders, are they differently hard to shim. I know that none of the remaining candidate orders can be shimmed within easy local rewrite. So get so given that that's off the table. What's the difference in difficulty to difficulty to shim the others?
+MM: So the different orders - are they differently hard to shim? I know that none of the remaining candidate orders can be shimmed with an easy local rewrite. So given that that's off the table, what's the difference in difficulty to shim the others?

GB: Well, I guess the one data point we do have there is that webpack has shimmed that and shipped that. SystemJS - I mean, I haven't worked on an implementation yet personally, but I think with top-level await you do generally end up wrapping modules anyway, and then triggering their executions and having some kind of management of that. So I think for the most part it shouldn't affect that too much, but I'm not a hundred percent sure.

@@ -482,7 +487,7 @@ DE: So the broader context of how this was discovered might also be interesting.
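The gather-then-sort step GB describes can be sketched in a few lines. This is a hypothetical shape, not the spec's machinery: `postOrderIndex` stands in for the position a module would get during the initial depth-first traversal of the graph.

```javascript
// Hypothetical sketch of the PR's approach: modules gathered as "ready"
// on completion of an async module are executed in post-order, not in
// the order the recursive walk discovered them.
function executeInPostOrder(readyModules) {
  const executed = [];
  const sorted = [...readyModules].sort(
    (a, b) => a.postOrderIndex - b.postOrderIndex
  );
  for (const mod of sorted) {
    executed.push(mod.name); // stand-in for evaluating the module body
  }
  return executed;
}
```

With the modules from GB's walkthrough gathered in discovery order, sorting by post-order index restores the ordering that the graph would have had without the async boundary.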
JHD: Yeah, so first of all, I trust the judgment of all of the folks who provided input here. So if everyone in that list says it's a good change then let's do it! But for my understanding, one of my biggest concerns about top level await as it progressed through the stages was indeed the ordering changes - that if I use top level await in a module in the graph then other modules will then execute in a different order. Is that - does this change you're talking about make that no longer the case? That they're now in the same order whether I have an `await` or not?

-GB: So that's I think one of the invariants that we've been trying to make sure is maintained in that sets of the postorder the over will post order execution is inspected by async nodes as much as possible. It's only between async modules that you get this sort of promises parallel behavior of between async siblings. So if you have a - this case is that if you have a graph and you add an async module beneath it, that it doesn't change the ordering above, but conversely if you have an async module and you're importing a synchronous module the rough ordering will be the same ordering under the current spec. So the on both sides I should remain you're saying without that.
+GB: So that's, I think, one of the invariants that we've been trying to make sure is maintained: that the overall post-order execution is respected by async modules as much as possible. It's only between async modules that you get this sort of promises-in-parallel behavior, between async siblings. So the case is that if you have a graph and you add an async module beneath it, it doesn't change the ordering above; but conversely, if you have an async module and you're importing a synchronous module, the rough ordering will be the same ordering as under the current spec. So on both sides the order should remain, you're saying, even without that.

YSV: That's not - that's only true with this change.
This should be true without it as well.

@@ -514,7 +519,7 @@ KM: I don't think the memory issue is too much of a concern because you only nee

SYG: I agree, I don't think the memory issue is an issue, depending on how you sort it.

-YSV: yeah, I think I think the biggest impact of this is going to be what happens to initialization if we have a really large module graph, how much does it cost to do the initialization step if we also have to sort like thousand, ten thousand modules? The space thing probably isn't going to be too much of an issue.
+YSV: Yeah, I think the biggest impact of this is going to be what happens to initialization if we have a really large module graph: how much does it cost to do the initialization step if we also have to sort, like, a thousand or ten thousand modules? The space thing probably isn't going to be too much of an issue.

KM: My guess is that it will probably be dwarfed by the actual execution of the module - like initializing the module to get ready to run; sorting a thousand things with almost any sorting algorithm is probably a lot faster than that.

@@ -543,12 +548,12 @@ YSV: Cool, Can we call that consensus? This is the "speak now or forever hold yo

Consensus for post-ordering change

## Temporal Pt 2
+
Presenter: Philip Chimento (PFC)

- [proposal](https://github.com/tc39/proposal-temporal)
- [slides](https://ptomato.github.io/temporal-slides-in-progress/)
-
PFC: Just to start off, after the plenary ended yesterday we spent another hour and a half or so discussing the items that had remained on the queue. I think we got all of them. So I've made a couple of extra slides to present some of the things that this smaller group proposed, so that the rest of the plenary can be aware and possibly discuss them.

AKI: Real quick question, as we do have a screenshot of the queue from beforehand. Do you want to come back to that? Do you want to disregard it and start over?
Should we just post the image somewhere so people can be reminded what they were going to talk about? @@ -573,7 +578,7 @@ DE: I think this would be a positive editorial change. I also think we could exp PFC: My take away from this is that from our perspective the only reason for this note to exist is because it exists everywhere in the specification, and our mantra is to do what everything else in the specification does. That's a larger editorial discussion, and I'm wondering whether this is the right venue. We’re good either way. -JHD: The reason I think this is a useful venue for it is because this is the first collection of classes that are designed to interoperate between each other that we're adding. So Temporal is in many ways uniquely different from the other things in the spec, but I agree with everyone that said that it can be an editorial thing that we refine later. +JHD: The reason I think this is a useful venue for it is because this is the first collection of classes that are designed to interoperate between each other that we're adding. So Temporal is in many ways uniquely different from the other things in the spec, but I agree with everyone that said that it can be an editorial thing that we refine later. KG: As editor, I'm happy to revisit this and I don't think it's something that we need consensus on. I regard revisiting these notes as within what editors can do without asking for consensus. @@ -585,19 +590,19 @@ JHD: I just want to clarify. Same-date equality from zero was the intuition I'd PFC: Sorry, maybe I should say that they sort at equal precedence. -JHD: Thank you. +JHD: Thank you. WH: Sounds good. I like this change. This is essentially making all the different ways of representing the same date be an equivalence class with respect to compare. This is good. PFC: Thanks. I'll move on to the next one: mutability due to calendars and time zones. So we spent quite a while discussing this. 
It was an item that KG brought up, and I believe it was on the queue yesterday. Temporal objects are immutable but because some of the methods delegate to whatever the associated calendar or time zone object says, there's no guarantee that those methods are pure if you supply your own calendar or time zone object. I'm not entirely confident I'm summarizing the conclusion correctly. I think we could say that this is not ideal but possibly not avoidable or at least not without making some other trade-offs, and this is one of the things that we think is useful to investigate during stage 3 while implementations are taking place. Obviously it's going to be easier to optimize implementations the more immutable things are, and so this is something that we can examine within that context. But the idea is not to make a change to the proposal right now. We believe that what's in the proposal right now is the best solution to these trade-offs that we could determine right now. So let's have at it for this item. -MM: I missed the beginning of this. Can you explain what the mutability is that you're referring to? +MM: I missed the beginning of this. Can you explain what the mutability is that you're referring to? PDL: I will explain it a little bit to be more wide-reaching. We delegate to the calendar to get the actual year. So if you add the Hebrew calendar, what we have in the internal slots is the ISO year, month, and day. We ask the calendar object, what is the year for that ISO year/month/day in your Hebrew calendar? Now because the Hebrew calendar is an object, it is theoretically possible that I modified the instance of the calendar that was passed in, and therefore that the answer to "what is the year?" changes for every time that I request that piece of information. The only way to avoid that would be to make that Hebrew calendar itself frozen. 
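PDL's delegation point can be sketched in plain JavaScript. This is a hypothetical illustration, not the actual Temporal API: the class names `CivilDate` and `FixedOffsetCalendar` are made up for the sketch, which only shows why a mutable calendar object makes an otherwise-immutable date's derived getters impure.

```javascript
// Hypothetical sketch of the delegation pattern PDL describes; these are
// NOT the real Temporal classes. The date holds only ISO fields (standing
// in for internal slots) and asks its calendar to interpret them.
class FixedOffsetCalendar {
  constructor(yearOffset) {
    this.yearOffset = yearOffset; // mutable state lives on the calendar
  }
  year(isoYear) {
    return isoYear + this.yearOffset;
  }
}

class CivilDate {
  constructor(isoYear, isoMonth, isoDay, calendar) {
    // stand-ins for internal slots; the date object itself never changes
    this._isoYear = isoYear;
    this._isoMonth = isoMonth;
    this._isoDay = isoDay;
    this._calendar = calendar;
  }
  get year() {
    // delegation: the answer depends on the calendar's *current* state
    return this._calendar.year(this._isoYear);
  }
}

const cal = new FixedOffsetCalendar(0);
const date = new CivilDate(2021, 1, 25, cal);
console.log(date.year); // 2021

cal.yearOffset = 100;   // mutate the calendar, not the date...
console.log(date.year); // 2121 - the "immutable" date now answers differently

Object.freeze(cal);     // freezing the calendar is what restores purity
```

This is the trade-off in the slide: the date's internal slots never change, but any method that consults a user-supplied calendar or time zone is only as pure as that object.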
-MM: I just want to verify the instance is the only thing that's mutable and there is no such instance among the primordial objects. Well, that is the truth. They're dead.

+MM: I just want to verify that the instance is the only thing that's mutable, and that there is no such instance among the primordial objects. Is that the case?

-DE: That's true. There are no primordial calendars or time zones. We were very careful to do that. We were very careful to be assured that whenever you make a new one of these things, it's a different fresh instance.

+DE: That's true. There are no primordial calendars or time zones. We were very careful to ensure that whenever you make a new one of these things, it's a fresh instance.

MM: Okay. I have no objection to this mutability.

@@ -609,25 +614,25 @@ PDL: That is accurate. If it's not frozen it could be mutated though.

MM: The `Array.prototype` also starts out mutable. The invariant is that there's no hidden state, and that the only mutable state in the primordials is on properties that can be frozen and on the extensibility of the object, so once the primordials are frozen there is no remaining mutable state.

-PDL: That invariant holds true.

+PDL: That invariant holds true.

MM: Okay, great. Then I'm fine with this.

BT: There's no one else on the queue. So I think we're good with this slide.

-PFC: Okay. I'll move on to this one. This is a point JHD raised about the conceptual consistency of time zones. For an implementation that doesn't have 402 capabilities, what is the correct way to ensure that the minimal set of time zones that Temporal supports is conceptually equal to the minimal set of time zones that you can get using legacy Date? The data model of Date itself doesn't contain a time zone, but you can get a UTC date and time from a Date object and you can get a date and time in the system's current time zone using a Date object.
And so you need to be able to do those in Temporal as well. We had a discussion on how to ensure that in the spec, and whether the spec text should even ensure that. One question that came up is implementations that don't internally think in terms of time zones, that use libc functions to get the current UTC offset, and how that changes if you pick up your computer and move to a different time zone, changing the system time zone. So we do have language in the proposal that says that once a particular value has been returned from a particular named time zone object, then during the lifetime of the surrounding agent, a different value may not be returned. So we expect that in implementations where the current time zone is just whatever libc says it is, that they'll implement Temporal by only supporting the UTC time zone. I'm not sure if I'm making this very clear. +PFC: Okay. I'll move on to this one. This is a point JHD raised about the conceptual consistency of time zones. For an implementation that doesn't have 402 capabilities, what is the correct way to ensure that the minimal set of time zones that Temporal supports is conceptually equal to the minimal set of time zones that you can get using legacy Date? The data model of Date itself doesn't contain a time zone, but you can get a UTC date and time from a Date object and you can get a date and time in the system's current time zone using a Date object. And so you need to be able to do those in Temporal as well. We had a discussion on how to ensure that in the spec, and whether the spec text should even ensure that. One question that came up is implementations that don't internally think in terms of time zones, that use libc functions to get the current UTC offset, and how that changes if you pick up your computer and move to a different time zone, changing the system time zone. 
So we do have language in the proposal that says that once a particular value has been returned from a particular named time zone object, then during the lifetime of the surrounding agent, a different value may not be returned. So we expect that in implementations where the current time zone is just whatever libc says it is, that they'll implement Temporal by only supporting the UTC time zone. I'm not sure if I'm making this very clear.

PDL: The short answer to this is that we require support for UTC and for whatever the system thinks the time zone is, and that equates to supporting the same things as Date. The issue that JHD was worried about, or the concern he raised, was: is there any situation where there is something that you can do with Date that you could not do with Temporal, so as to be a hindrance to the upgrade path? And the answer to that is, we are ensuring that there isn't. In addition, we have other methods to make sure, in user code, that you can still do the same thing, so that even where an implementation has deviated, no change is required.

PFC: Thanks. That was a better explanation. I was kind of rushing to make these slides yesterday. Any questions about this item?

-JHD: I was just going to summarize where I'm at. PDL’s expression of my root desire is correct. I'm interested in ensuring that whatever you can do with Date can be done with Temporal, ideally jumping through minimal hoops so that people can easily migrate. Through our discussion yesterday it seemed like, although it's nicer when we can specify things in the language, that it is not always possible and in this case because of wide variance in implementations of Date there isn't really a viable way to mandate that an implementation makes Temporal as useful as Date and we would leave it to perhaps the HTML spec or individual implementations to decide to do a useful thing or not, and we could leave it to bug reports essentially to try and smooth that over when I was not matched.
Obviously I hope everyone, every implementer in this room, is ensuring that these things are operating off of similar data, but yeah. If there was a way to specify it in 262. I would like to do that, but there does not appear to be a way to specify what I'm asking for. And so I will be content with having the intention communicated in various places, which is that they're roughly similar in the common case.

+JHD: I was just going to summarize where I'm at. PDL’s expression of my root desire is correct. I'm interested in ensuring that whatever you can do with Date can be done with Temporal, ideally jumping through minimal hoops so that people can easily migrate. Through our discussion yesterday it seemed like, although it's nicer when we can specify things in the language, it is not always possible; and in this case, because of wide variance in implementations of Date, there isn't really a viable way to mandate that an implementation makes Temporal as useful as Date. We would leave it to perhaps the HTML spec or individual implementations to decide to do a useful thing or not, and we could leave it to bug reports, essentially, to try and smooth things over where they do not match. Obviously I hope everyone - every implementer in this room - is ensuring that these things are operating off of similar data. If there were a way to specify it in 262, I would like to do that, but there does not appear to be a way to specify what I'm asking for. And so I will be content with having the intention communicated in various places, which is that they're roughly similar in the common case.

DE: One part that I remain confused about in JHD's summary is where he says that Temporal wouldn't be as useful as Date in all implementations. I think Temporal is the one that's more likely to have the correct, meaningful semantics. So in some JavaScript engines, there's a different implementation of Intl time zones compared to Date's time zones.
The Intl one, generally, I think is the reference that you can trust more. Temporal goes through those kinds of paths as well, so I think it's the safer bet. If you really want to emulate `Date` behavior, it's possible to do so with a custom time zone. I think we could say that implementations should make Date and Temporal align, but I think we all agreed that it's important to preserve the ability for engines to make simple Date implementations using libc, which will not end up going through the same kind of mechanism as would be useful to implement Temporal. So overall I think all the things we're saying line up.

JHD: And I totally agree with your assessment of the likelihood. My PTSD at this point is about relying on likelihood when it's possible to mandate things, which it isn't in this case.

-PFC: Was there anything on the screen-captured queue from yesterday that we didn't cover?

+PFC: Was there anything on the screen-captured queue from yesterday that we didn't cover?

BT: I just want to make sure that everyone who had a queue entry from yesterday got their item in, so SFC, WH, JHD, KG, anything from any of you?

@@ -659,9 +664,9 @@ BT: I think we only need two reviewers as my recollection, but this is also a ve

DE: I'm excited to hear from BFS, but if he's not here, that's fine.

-BT: Okay, what about the editors?

+BT: Okay, what about the editors?

-KG: Yeah, the spec text is not great yet. I'm not really concerned about that blocking stage 3, but like I've filed an issue with the 30 or so things that I found in the first two sections, there's more I'm sure, but it's going to take forever to finish that review just because it's massive.

+KG: Yeah, the spec text is not great yet. I'm not really concerned about that blocking stage 3, but I've filed an issue with the 30 or so things that I found in the first two sections - there's more, I'm sure - but it's going to take forever to finish that review just because it's massive.
SYG: There are incorrect things, but they're not incorrect to the extent that there are risks for implementers; it's more like they are pedantically incorrect. So I'm also not worried about stage 3, but yeah.

@@ -669,7 +674,7 @@ BT: Okay, and I heard from BFS through the grapevine that stage 3 Temporal was r

JHD: I took a look at it. I'm sort of in an odd state where I reviewed it as an editor, but I'm not one right now. Overall I think there are a lot of coherence and correctness issues in the spec, and because it's so large, it's very difficult to have confidence that it's all correct. That said, nothing major seems to be there, and given the intention - which hopefully is broadly telegraphed - to implement Temporal behind a flag, that seems like it would mitigate those concerns and buy time for further review. So if the Temporal champions and the implementers in the room are intending to stick with making sure it's always flagged for now, that seems great for stage 3.

-YSV: That works for us.

+YSV: That works for us.

PFC: Yeah, I think it was not only an intention but also a requirement from yesterday's plenary that it's behind a flag until at least the IETF standardization process is completed. Is that correct?

@@ -703,16 +708,18 @@ PFC: That's reasonable.

YSV: Yeah, I also agree with Shu that getting the species PR merged in first might be a really good idea.

-JHD: It seems like it'd be nice if test262 tests were also somewhat prioritized to help implementers do the same thing. That's not a requirement.

+JHD: It seems like it'd be nice if test262 tests were also somewhat prioritized to help implementers do the same thing. That's not a requirement.

-PFC: Yeah, that is something that I do personally plan to be working on.

+PFC: Yeah, that is something that I do personally plan to be working on.

-BT: Okay.
I'm not hearing any objections. So this sounds like we have approval for stage 3 for Temporal.

[cheering]

+
### Conclusion/Resolution

stage 3, pending:
+
- removing affordances for subclassing:
  - just use intrinsics instead of SpeciesConstructor
  - static methods will use intrinsic rather than `this`

@@ -724,28 +731,29 @@ stage 3, pending:

- implementations should not ship unflagged before IETF resolution on syntax

## Pipeline Operator
+
Presenter: Daniel Ehrenberg (DE)

- [proposal](https://github.com/tc39/proposal-pipeline-operator/)
- [slides](https://docs.google.com/presentation/d/1for4EIeuVpYUxnmwIwUuAmHhZAYOOVwlcKXAnZxhh4Q/edit)

-DE: I just want to start by thanking the JavaScript Community because this is very much a Community Driven proposal and especially thanks to Gilbert James and JS Choi who helped work through many details a number of these four bones and even wrote A lot of these slides. So what is the pipeline operator? The pipeline operator lets you chain function calls together.

+DE: I just want to start by thanking the JavaScript community, because this is very much a community-driven proposal - especially thanks to Gilbert, James, and JS Choi, who helped work through many of the details and even wrote a lot of these slides. So what is the pipeline operator? The pipeline operator lets you chain function calls together.
So instead of having them be deeply nested, you could write them one right after the other, in logical order. This is really important because function and method chaining is a very ergonomic pattern that developers like to use - it has a lot to do with jQuery's ergonomics - and it's good to be able to do chaining together with lexical scope. That's a value judgment that I'm asserting, and that I think some other people share: lexical scope helps keep things well factored, and these ergonomic benefits are nice to use together with it.

DE: The detailed semantics - they're not very detailed - are: if you write `x |> y.f`, then `y.f(x)` is called, and of course it evaluates `x` first. This is at stage 1, and there are some alternatives possible. We could have a `pipe` function, where you pass in a value and then a list of functions. It's possible, but with all those function calls, commas, and parentheses it doesn't end up being as readable. You could also just use variables - name a variable `$`, assign to it, and then make each function call in turn - and with do expressions you could even make that `$` variable local. It's a subjective point of view, but I think pipeline is more ergonomic than those alternatives and it's worth adding to the language.

DE: So I want to outline the points of debate that we've had in the past about pipeline and mention my opinions, for whatever they're worth. I'm not going to be able to champion this proposal going forward, so the real purpose of this talk is to recruit a new champion group for it. We have community support, but we also need committee support driving it forward. So let's start by talking about controversies, because we like controversies in TC39.

-DE: So one is if you have Arrow functions in a pipeline. These can be useful to give these named values to the intermediaries.
If we allow one parenthesized arrow functions, then we would have to parse the tighter precedent. There's a grammar designed it has this kind of foot gun. It's kind of a trade-off. trade-off. So I think pipeline is nicer without having these parentheses and I think it's reasonable to trade off. We also talked about placeholders and partial application. A pipeline is one way to handle a case where it's just more complicated than a unary function, but The problem is that if we make a pipeline based on either a functions then it promotes to some extent the use of currying or helper functions. So if you want to add a number two thing that's previous in the pipeline. You might be tempted to make this kind of a function which returns another function makes another argument and then as pipeline looks nice, so it's debatable whether we want to encourage such patterns in the language. Some people are already doing this in JavaScript and some people say it's a bad idea and we shouldn’t encourage it. I think there are reasonable arguments on both sides.

+DE: So one controversy is arrow functions in a pipeline. These can be useful to give named values to the intermediaries. If we allow unparenthesized arrow functions, then we would have to parse them at a tighter precedence; there's a grammar designed for it, but it has this kind of footgun, so it's a trade-off. I think pipeline is nicer without having to write these parentheses, and I think it's a reasonable trade-off. We also talked about placeholders and partial application. A pipeline is one way to handle a case that's more complicated than a unary function, but the problem is that if we base the pipeline on unary functions, then it promotes to some extent the use of currying or helper functions. So say you want to add a number to the thing that's previous in the pipeline.
You might be tempted to write this kind of function - one which returns another function taking another argument - because then the pipeline looks nice. So it's debatable whether we want to encourage such patterns in the language. Some people are already doing this in JavaScript, and some people say it's a bad idea and we shouldn't encourage it. I think there are reasonable arguments on both sides.

-DE: One way that we can make pipeline more expressive without encouraging currying or the use of Arrow functions is to it's called hack style. We have these nicknames for the different variants of the pipeline operator in our own pipeline discourse. There's Hack style, smart mix, the F-sharp style. So the Hack style is on the right hand side of the pipe line rather than it being a unary function. You have use a question mark placeholder in one in one particular place. You could use it anywhere in the expression you can use it in an object literal you You can use it any argument position or the receiver position We could bikeshed what the placeholder is, but the idea is that you have to use the placeholder that the thing on the right hand side is not a matter of function. So in some ways simpler and more and more expressive so it does force you to write this in the simple cases that do that would work so TAB from you know, CSS standards World wrote an essay recently that I linked to from the slides. Showing, you know arguing for hex tile and JS Choi shortly before the speeding wrote up a whole new proposal for hack style, you know articulating the design very very clearly. There was also previously discussion of the smart mix alternative. This would have these placeholders, but it also had the short style, but there were the bare style in my opinion. This just became too complicated. I think a lot of people felt the same way. So hack style provides a simpler subset without much to learn.
+DE: One way that we can make pipeline more expressive, without encouraging currying or the use of arrow functions, is what's called Hack style. We have nicknames for the different variants of the pipeline operator in our own pipeline discourse: there's Hack style, smart mix, and F-sharp style. In Hack style, rather than the right-hand side of the pipeline being a unary function, you use a question mark placeholder in one particular place. You could use it anywhere in the expression: in an object literal, in any argument position, or in the receiver position. We could bikeshed what the placeholder is, but the idea is that you have to use the placeholder - the thing on the right-hand side is not a unary function. So it's in some ways simpler and more expressive, though it does force you to write the placeholder even in the simple cases that would otherwise just work. TAB, from the CSS standards world, wrote an essay recently - which I linked to from the slides - arguing for Hack style, and JS Choi, shortly before this meeting, wrote up a whole new proposal for Hack style, articulating the design very clearly. There was also previously discussion of the smart mix alternative. This would have these placeholders, but it also had the bare style, and in my opinion this just became too complicated; I think a lot of people felt the same way. So Hack style provides a simpler subset without much to learn.

DE: RBN has also proposed partial application. So when we look at this question mark, we could make it part of a first-class construct that could be used outside pipelines. That would restrict a bit in which cases it could be used; many of these examples would not really fit into partial application. Like, how would the await work, or how would it work with the receiver? We would probably need two different syntaxes, and in an object it probably wouldn't work.
But maybe it's better to omit those cases and get the benefit of being able to use it outside of a pipeline context. We discussed pipeline in committee a number of times before; it's at stage 1. There were some serious concerns raised about this kind of garden path problem: when you're looking at an expression, you don't quite know it's a partial application case until you get to the question mark later. Hack style pipelines don't have this, because you see the pipeline first and then you see something with a question mark - so once you see the pipeline, you're always anticipating that a question mark will come later. And there are these expressiveness limitations.

-DE: The final thing is async/await integration. So imagine you have some code that has these nested function calls and you want to use a pipeline with it. You're evolving that code and you make it so that one of these functions is an async function. So you want to do this nested function calls still but the problem is, you know, it's an async function. You didn't wait it's argument. So you gotta promise returned from capitalize then you appended an exclamation mark to it and JavaScript helpfully converted it to string, but you actually wanted to await the promise. So we want to put in await here if it's a function call and if we're using pipeline we could use.then for this. Or we can use two pipelines, but this kind of breaks the flow. I think logically what you want is to be able to put await as a as an item in the pipeline as previously concerned about an ASI Hazard here, but I think await would naturally have a no line Terminator following it and or at least we could use one here and that will remove any ASI hazards.

+DE: The final thing is async/await integration. So imagine you have some code that has these nested function calls and you want to use a pipeline with it. You're evolving that code and you make it so that one of these functions is an async function.
So you still want to do these nested function calls, but the problem is, you know, it's an async function, and you didn't await its argument. So you got a promise returned from `capitalize`, then you appended an exclamation mark to it and JavaScript helpfully converted it to a string - but you actually wanted to await the promise. So we want to put in an await here. If it's a function call and we're using pipeline, we could use `.then` for this, or we could use two pipelines, but this kind of breaks the flow. I think logically what you want is to be able to put await as an item in the pipeline. There was previously a concern about an ASI hazard here, but I think await would naturally have a no-LineTerminator restriction following it - or at least we could use one here - and that will remove any ASI hazards.

-DE: On the other hand if we use a Hack style pipeline, then await is always supported. You can just include await and the question mark inside of any kind of pipeline. So there's no particular special feature need to. JS Choi wrote up this is table comparing them then you can reference later. Table comparing alternatives. But aside from these kind of smaller countries the details link important thing to note about this proposal because there's overwhelming Community Support. So lots of people are saying like, let's stop arguing and just settle on this minimal proposal which basically is the F sharp proposal, but without the await in the parentheses free error functions because it seems important to a lot of people. People in the community thinking that this proposal was blocked due to the smart pipe line so that you know the discussion got a bit ugly in some places. I think that when I was trying to frame that we can be open to multiple possibilities.

+DE: On the other hand, if we use a Hack style pipeline, then await is always supported: you can just include await and the question mark inside any kind of pipeline, so there's no particular special feature needed.
JS Choi wrote up a table comparing the alternatives, which you can reference later. But aside from these kinds of smaller controversies over the details, the important thing to note about this proposal is that there's overwhelming community support. Lots of people are saying: let's stop arguing and just settle on this minimal proposal - which is basically the F# proposal, but without the await step and the parentheses-free arrow functions - because it seems important to a lot of people. People in the community were thinking that this proposal was blocked due to the smart pipeline, so the discussion got a bit ugly in some places, even when I was trying to frame it as us being open to multiple possibilities.

-DE: I tried to be open to these multiple alternatives, but a lot of people got upset about thinking that the smart pipe line was holding this back which isn't the case. Anyway there was a lot of interest in the state of JS survey and of course there are lots of methodological problems here, but it's interesting that people really saw pipeline is important.
Yulia Startsev did some research blind-testing the F# and smart variants, and this was really good in that we can learn a lot about how to do research like this in the future; I really liked Yulia's paper about the qualitative analysis of the argument schemes that were used in people's pipeline responses. Overall, my understanding is that this research was not conclusive enough to make a particular decision about one or the other or some other proposal - but maybe YSV has something more to say.

YSV: What we were comparing back then was the smart pipeline and the F-sharp style pipeline. They performed very similarly - not differently enough that we could say something concrete about user understanding. In some ways F-sharp performed better, in terms of people not making as many mistakes, while people mostly preferred the ease of use of the smart pipeline operator. I have to say, looking at the Hack proposal, that also looks really interesting - it looks like it might resolve a lot of those issues. But we didn't get anything conclusive when we ran it last time, nothing that would say we should absolutely do one or the other.

@@ -753,49 +761,49 @@ MM: How does doing nothing compare to doing any of these?

YSV: We didn't test that, but several survey responders said that it would be preferable to do nothing. We would have to actually test that, and that is something we could test. It wasn't in this specific survey.

-MM: I think that's the most important question.

+MM: I think that's the most important question.

-DE: Well, you know figuring out these things by surveys. I think it's a great advance in the committee and it's also not something that we expected for things in the past? So I think when it's possible, it's a good ergonomics win. My take is we should go ahead and do this feature. I think it's a good ergonomics win.
It would be nice to include the await and unfriendly sized Arrow functions features, but it would also be nice to admit them and use a minimal feature. I think placeholders or the Hack style or the smart mix I think these create some additional complexity, you know at a high level in other programming languages that I've worked with, you know, I used to work on factor a stack-based language. It developed its own very complex idioms for avoiding the use of named variables and these were harmful to developer understanding. So I don't want to go overboard here. Maybe the Hack proposal is is a good way through.. I like the F sharp proposal, it’s a subjective thing. I think we should take a decision here.

+DE: Well, you know, figuring out these things by surveys, I think, is a great advance for the committee, and it's also not something that we expected in the past. So when it's possible, that's worth doing. My take is we should go ahead and do this feature; I think it's a good ergonomics win. It would be nice to include the await and parenthesis-free arrow function features, but it would also be nice to omit them and use a minimal feature. Placeholders, or the Hack style, or the smart mix, these create some additional complexity. You know, at a high level, in other programming languages that I've worked with (I used to work on Factor, a stack-based language), the language developed its own very complex idioms for avoiding the use of named variables, and these were harmful to developer understanding. So I don't want to go overboard here. Maybe the Hack proposal is a good way through. I like the F sharp proposal; it's a subjective thing. I think we should take a decision here.

-DE: And move forward so I think it's okay if we don't support all idioms in pipelines. 
The learning that we have from other programming languages is if we try to solve all the cases without forcing people to make variables that could be harmful to developer understanding. So that's why I want to stay with something relatively simple one way or the other. I think in general we've been doing more things in JavaScript, which you could think of as functional programming, you know records and tuples, and temporal being immutable data structures it all kind of fits together in some high-level way and it's a trend that I think JavaScript programmers could hopefully be happy about for TC39. So overall I would prefer something on the spectrum between minimal and F sharp. I think it would be okay to look at Hack as well. So I think we understand roughly the whole design space. We have Babel prototypes of F sharp and smart pipeline. There's ongoing work to implement hack in Babel and we have a supportive community that's willing to work on This. I think we need to make a decision and I'm not going to be able to champion this proposal. So I want to ask people. Do you want to Champion this proposal and how do you feel about these responsibilities?

+DE: And to move forward, I think it's okay if we don't support all idioms in pipelines. The learning that we have from other programming languages is that if we try to solve all the cases without people ever naming variables, that could be harmful to developer understanding. So that's why I want to stay with something relatively simple, one way or the other. I think in general we've been doing more things in JavaScript which you could think of as functional programming, you know, records and tuples, and Temporal, being immutable data structures; it all kind of fits together in some high-level way, and it's a trend that I think JavaScript programmers could hopefully be happy about for TC39. So overall I would prefer something on the spectrum between minimal and F sharp, though I think it would be okay to look at Hack as well.
So I think we understand roughly the whole design space. We have Babel prototypes of F sharp and smart pipeline, there's ongoing work to implement Hack in Babel, and we have a supportive community that's willing to work on this. I think we need to make a decision, and I'm not going to be able to champion this proposal. So I want to ask people: do you want to champion this proposal, and how do you feel about these responsibilities?

[audio issues.]

-TAB: [robot meow.]

+TAB: [robot meow.]

-TAB: All Alright, so I've got two bits will start here and then we'll go back to slide for it real quick note the third bullet point first. Good point functional spec based in Vector languages show you can go overboard on .3 programming and it's harmful to people learning the language. I completely agree with this as somebody who loves Haskell and functional languages and really likes the challenge of writing fun point free stuff. You can absolutely do unreadable things. and that's why hex tile is the correct solution here because it's literally not Point free. F sharp style requires that the right-hand side of your pipeline be an expression that resolves to a function which is then called. a unary function in practice because we're just going to call it with the left hand side as its sole argument. This encourages you to write either arrow functions, or fancy point free Shenanigans to generate a function that will eventually take one argument. On the other hand pack style with the placeholders is the exact opposite of that you just write ordinary code exactly as you would see anywhere else in your code base and the Left hand side gets subbed in wherever the placeholder is. I can show this off a little bit better if we jump back to slide 25.Because that’s the comparison between a couple of the options.

+TAB: All right, so I've got two bits; we'll start here and then we'll go back to the slides in a bit. Real quick, note the third bullet point first.
The point about functional and stack-based languages showing that you can go overboard on point-free programming, and that it's harmful to people learning the language: I completely agree with this, as somebody who loves Haskell and functional languages and really likes the challenge of writing fun point-free stuff. You can absolutely do unreadable things. And that's why Hack style is the correct solution here, because it's literally not point-free. F sharp style requires that the right-hand side of your pipeline be an expression that resolves to a function, which is then called; a unary function in practice, because we're just going to call it with the left-hand side as its sole argument. This encourages you to write either arrow functions, or fancy point-free shenanigans to generate a function that will eventually take one argument. On the other hand, Hack style with the placeholders is the exact opposite of that: you just write ordinary code exactly as you would see anywhere else in your code base, and the left-hand side gets subbed in wherever the placeholder is. I can show this off a little bit better if we jump back to slide 25, because that's the comparison between a couple of the options.

-TAB: Here we go. Yeah. So in either of the first two proposals 0 proposal one, there is one case the first one and the F sharp one every single other variant requires you to write an arrow function. Or if you're using higher order point free stuff great point free thing to handle this sort of thing like the second example where you're calling a two argument function. Passing is the second thing that could be a partial application.

+TAB: Here we go. Yeah. So in either of the first two proposals, proposal zero and proposal one, the F sharp one, there is one case, the first one, that works directly; every single other variant requires you to write an arrow function, or, if you're using higher-order point-free stuff, some point-free thing to handle it, like the second example where you're calling a two-argument function.
Passing it in as the second argument, that could be a partial application. On the other hand, looking over at the Hack pipes proposal: literally none of this requires any extra functions. It's the exact code from the original expression column on the left side, just with a placeholder in the spot where you extracted a chunk of code and moved it into the left-hand side. So I tried to have this argument with Dan an hour ago in the chat room, and I don't understand what's confusing between the two of us, but he didn't end up changing the slide. Because the point he's making, about avoiding heavy point-free stuff meaning we should go with F-sharp style, is literally exactly backwards. There is no sense in which Hack style encourages point-free at all; it's exactly as pointful as normal JavaScript. But F sharp does encourage point-free. So I just fundamentally do not understand what DE is trying to say about that. He tried to make it clear that it's something about complexity, but his examples of complexity are all point-free stuff that would be encouraged by his preferred proposal.

WH: Can you explain what you mean by “point three” and “point four”?

-TAB: I was saying “point free” and “pointful”, which sound like “point three” and “point four”. That is also called tacit programming is where you create new functions out of existing functions without ever explicitly naming the arguments to the new functions. And point full of

+TAB: I was saying “point free” and “pointful”, which sound like “point three” and “point four”. Point-free, also called tacit programming, is where you create new functions out of existing functions without ever explicitly naming the arguments to the new functions. And pointful is the opposite.

DE: To just step back: “point” refers to a variable, it's kind of mathematical jargon, so arguably in the Hack style the variable is the question mark. That's a name.
I think the high-level goal of this chaining, you know, of pipeline, is so that you don't have to put in the mental work to name these intermediate values when they're quite trivial, and even though the question mark is technically pointful, it's achieving this high-level goal. So in some way, at a high level, Hack is simple, but there are a lot of pieces to it. I think, at least from certain kinds of perspectives, the F sharp pipeline is simpler.

-TAB: can you give an example from this page or anywhere else where you think the tech pipeline is more complex than what you see in the F sharp example.

+TAB: Can you give an example, from this page or anywhere else, where you think the Hack pipeline is more complex than what you see in the F sharp example?

-DE: I mean if you're talking about how this would be specified

+DE: I mean, if you're talking about how this would be specified…

TAB: Users are the ones that we care about.

-DE: so, you know the reason that I want to pass this proposal off to others is because I became quite tired of arguing about it. I think we have plenty of evidence that this feature will be useful. I think we have multiple legitimate possibilities and I want to find a champion group who can come to a decision. I think we can come to a decision pretty swiftly. I don't think it's useful to spend a few more years arguing about it and then propose it for stage 2. I have my personal preference that I've articulated and I'm happy to talk through the reasons, but I tried to already. You know, F sharp is a lot simpler. I just doesn't do as many things it's less expressive and he just calls the function. We have a pretty big queue at this point. So I think also we should try to be maybe a little bit more high level and our feedback here, but I don't think we need to really get into much debate right now, but it is good to explore the problem space.
+DE: So, you know, the reason that I want to pass this proposal off to others is because I became quite tired of arguing about it. I think we have plenty of evidence that this feature will be useful. I think we have multiple legitimate possibilities, and I want to find a champion group who can come to a decision. I think we can come to a decision pretty swiftly; I don't think it's useful to spend a few more years arguing about it and then propose it for stage 2. I have my personal preference that I've articulated, and I'm happy to talk through the reasons, but I tried to already. You know, F sharp is a lot simpler. It just doesn't do as many things, it's less expressive, and it just calls the function. We have a pretty big queue at this point, so I think we should try to be maybe a little bit more high level in our feedback here. I don't think we need to really get into much debate right now, but it is good to explore the problem space.

BT: Did you have more to say on your essay?

-TAB: I guess one quick high level thing then this is just a general talk about either proposal my big point of my essay. Is that regardless of Which choice we make, Each one is slightly optimized for different things showing up on the right hand side. F sharp is optimized for unary function calls. Hack is optimized for everything but unary function calls. but in the case where it's non optimal in either case, you're paying a tax of three characters either a three-character prefix to introduce an arrow function in the F-sharp style or a three-character post fix to actually Invoke the function with parenthesis placeholder close parenthesis in the Hack style, and that's it. So long as the proposal handles await and ideally yield but way to the important part the two are 100% equivalent in expressivity and it's just a matter of which cases. burdening with a minor additional syntax tax. All in all the choice is going to be a fairly minor thing.
I think there is a clearly better option. But as I said in a quote that then you had one of the slides I think the F sharp is better than no pipeline is all because of how popular method chaining is as a method. It's the one of the big reasons why jQuery is so popular is because it lets you chain everything and people like that style of programming. pipeline makes it accessible to all code types. Not just methods. so it would be very good to have anything in there. It's just a matter of which one we burden with the tiny bit of syntax. That's it. Okay. Thank you of all the Maria

+TAB: I guess one quick high-level thing, then; this is just general talk about either proposal. The big point of my essay is that regardless of which choice we make, each one is slightly optimized for different things showing up on the right-hand side. F sharp is optimized for unary function calls; Hack is optimized for everything but unary function calls. But in the case where it's non-optimal, in either case you're paying a tax of three characters: either a three-character prefix to introduce an arrow function in the F-sharp style, or a three-character postfix to actually invoke the function (open parenthesis, placeholder, close parenthesis) in the Hack style, and that's it. So long as the proposal handles await, and ideally yield (await being the important part), the two are 100% equivalent in expressivity, and it's just a matter of which cases we're burdening with a minor additional syntax tax. All in all, the choice is going to be a fairly minor thing. I think there is a clearly better option, but as I said in a quote that Dan had on one of the slides, I think even the F sharp style is better than no pipeline at all, because of how popular method chaining is. It's one of the big reasons why jQuery is so popular: it lets you chain everything, and people like that style of programming. Pipeline makes that accessible to all call types, not just methods.
So it would be very good to have anything in there; it's just a matter of which one we burden with the tiny bit of syntax. That's it. Okay, thank you. Over to you, Waldemar.

WH: Slide 29 has a list of things which are missing from JavaScript according to the user survey. I would note that functions are high on there — the community desire for functions is almost as high as the pipeline operator. So maybe we should add functions to the language ☺.

-DE: Maybe it was an error to include this because I was also confused by that function thing and but I think we have other circumstantial evidence that you know, a lot of people are excited about it. Like the number of Emoji reacts.

+DE: Maybe it was an error to include this, because I was also confused by that “functions” entry, but I think we have other circumstantial evidence that a lot of people are excited about it, like the number of emoji reacts.

WH: Yes, I am pretty worried about the schism of the ecosystem caused by anything which encourages the currying form — what you are calling the F-sharp form. The biggest learning curve will be from some of the syntactically simplest versions of this proposal. If you only have a pipeline operator, it will cause a rift where half of the libraries will adopt the currying form with functions returning functions, and half will not. It's just going to cause an enormous cognitive burden on folks learning the language trying to figure out which is which and getting them confused with each other. So I'm really worried about the complexity behind the simpler variants.

-DE: What do you think about the hack form WH?

+DE: What do you think about the Hack form, WH?

-WH: This one is much nicer.

+WH: This one is much nicer.

YSV: All right, so this is one argument to consider for a minimal set. It came out of the research done on pipeline.
We've talked about it before, but when people were asked to figure out where a bug was in the code (they were given several examples of this), they most frequently struggled to find the bug in pipeline code that included await. People would quit out of the survey because they couldn't find the bug, and this was where we had a lot of drop-off points. So that's something to consider if we do something minimal first, but again, things have changed since then.

@@ -809,9 +817,9 @@ JHX: Yeah, I don’t dislike hack style, but the problem here's I I feel they ar

DE: Yeah, that seems possible. I guess I kind of share SYG's analysis of that. So it's good.

-DRR: I think our team has some light interest in potentially championing this proposal. I have some wariness to some that I think are true and Waldemar have both echoed, you know echoed right community rift maybe certain styles of programming that are not as efficient also where you were made end up at creating a lot of garbage incidentally. It's something that we've heard quite a bit of and while it's something that maybe the committee was already aware of like the think the reason that people sort of often asked for this is because you know lack of static tooling that can trip down prototypes on classes and things like that and tree shake away method. So people sort of look to these these functions that are defined in other modules loosely and compose that way there There are also some other tool in trade-offs that wanted to just bring up like if you have a thousand different helper functions now all of these things have to sort of pollute your completion list when you're when you're using some analysis on that, so I think we're interested in being involved in this. just to understand like some of the some of the potential paper cuts that will you know, incidentally come with the features as well, but seems like a nice ergonomic Improvement that would come with the language as well.
+DRR: I think our team has some light interest in potentially championing this proposal. I have some wariness too, around concerns that I think Waldemar and others have echoed: a community rift, and maybe certain styles of programming that are not as efficient, where you may incidentally end up creating a lot of garbage. That's something that we've heard quite a bit about. And while it's something that maybe the committee was already aware of, I think the reason that people often ask for this is the lack of static tooling that can strip down prototypes on classes and things like that and tree-shake away methods. So people look to these functions that are defined loosely in other modules and compose that way. There are also some other tooling trade-offs I wanted to bring up: if you have a thousand different helper functions, now all of these things pollute your completion list when you're using some analysis on that. So I think we're interested in being involved in this, just to understand some of the potential paper cuts that will incidentally come with the feature, but it seems like a nice ergonomic improvement that would come with the language as well.

-JHX: One group of people use like rambda or low - FP they are a group of people. They just want to chain the method. They don't use curry or things like that. These people may like the hacks style. So the problem here is I find these two groups have two different requirements. And actually they have very different developer experience. As I explained in a previous meeting where I present to the extension proposal that if you mix the Old-style and the pipeline operator you get many problems. But the guys I mentioned that only used points free style they do not use any mix of that. So they are okay about the F-sharp style. 
So I think we should consider that there are substantial conflicts between the two groups, and it's I think it's hard to satisfy both in one syntax. And so I really hope we can consider that the other past that for example in some FP language like closure there are two or three different pipeline operator or we could have both the short style so the two groups of people can use it and satisfy both.

+JHX: One group of people use libraries like Ramda or lodash/fp; they are one group. Another group of people just want to chain methods; they don't use currying or things like that. Those people may like the Hack style. So the problem here is that I find these two groups have two different requirements, and they actually have very different developer experiences. As I explained in a previous meeting, where I presented the extensions proposal, if you mix the old style and the pipeline operator you get many problems. But the people I mentioned who only use point-free style do not use any mix of that, so they are okay with the F-sharp style. So I think we should consider that there are substantial conflicts between the two groups, and I think it's hard to satisfy both in one syntax. So I really hope we can consider other paths; for example, in some FP languages like Clojure there are two or three different pipeline operators, or we could have both styles, so the two groups of people can each use one and both be satisfied.

DE: Yeah, that's exactly what was proposed a few years ago with smart mix, where we would support placeholders and also the bare form. I think this is where many people reviewing the proposal felt that it fell off the complexity cliff. For that reason JS Choi decided to withdraw this proposal, and I support that.

@@ -821,13 +829,13 @@ DE: Yeah, imagine we could have a champion group with a whole bunch of different

JGT: This is the first time that I've taken a close look at this proposal.
I think about the majority of developers, who may not have deep experience in functional programming, JavaScript internals, or spec details. I think the Hack style (channeling those everyday developers) is the only one that would make immediate sense to them, and it would be really clear what it does, because it looks like regular ECMAScript code. As far as the learning curve, it seems like it would be far better for those developers. So anyway, sample size of one, but that's my take.

-???: Can I chime in on under Hack style? So when I look at it, then the comparison table is don't like it is trying to procure the others now. Because the Hack style doesn't seem to support ternary operator easily because it will be confusing to have a question mark followed by another question right and then two values for example for what may be due to the symbol being picked for representing the offer and singular.

+???: Can I chime in on the Hack style? When I look at the comparison table, it seems like it is trying to favor it over the others. Also, the Hack style doesn't seem to support the ternary operator easily, because it would be confusing to have a question mark followed by another question mark and then two values, for example; that may be due to the symbol picked to represent the placeholder.

-DE: Well, that's a good point. The comparison tables were certainly written an advocate of the Hack style proposal. I suggested that this slide deck be written with a question mark because that seemed to be the most intuitive Placeholder for most people it would also be confusing to use this in conjunction with optional chaining. Actually, maybe that would even cause a parsing ambiguity. So, you know we could also use that sign or hash for the hack placeholder. I already had a different choice.

+DE: Well, that's a good point. The comparison tables were certainly written by an advocate of the Hack style proposal.
I suggested that this slide deck be written with a question mark because that seemed to be the most intuitive placeholder for most people, but it would also be confusing to use it in conjunction with optional chaining. Actually, maybe that would even cause a parsing ambiguity. So, you know, we could also use the at sign or hash for the Hack placeholder. I already had a different choice.

We were using the hash symbol for the proposal consistently, and we just switched over to the question mark literally days ago, but with the understanding that yes, there might be parsing issues; it just might be easier for people to read in this initial proposal. The exact choice of the sigil is unimportant.

Yeah, it feels good to have the Hack style, being that explicit; I'm a little bit interested, although the placeholders have to be there. I think in time people will get used to it, but I think we need more examples on that table to sway someone's opinion. I mean, this is an interesting problem. This is also the first time I've been exposed to it, and I am interested in it, but it is surprising that so much of this example is kind of implicit.

BT: Okay. Thank you for volunteering to help out. It looks like Ron also is interested in championing as well.
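The tax that TAB describes can be made concrete with a sketch. The `|>` forms in the comments are the proposed syntaxes (not valid JavaScript today); the runnable lines show roughly what each style would desugar to, using made-up helper functions that are purely illustrative:

```javascript
// Hypothetical helpers, just for illustration.
const one = () => 1;
const double = (x) => x * 2;
const add = (a, b) => a + b;
const boundScore = (min, max, score) => Math.max(min, Math.min(max, score));

// Today, without a pipeline operator: nested calls read inside-out.
const nested = boundScore(0, 100, add(1, double(one())));

// F sharp style (proposed): one() |> double |> (x => add(1, x)) |> (x => boundScore(0, 100, x))
// The right-hand side must evaluate to a unary function, so any
// multi-argument call needs an arrow function (the three-character "x =>" prefix).
const fsharp = [double, (x) => add(1, x), (x) => boundScore(0, 100, x)]
  .reduce((value, fn) => fn(value), one());

// Hack style (proposed): one() |> double(?) |> add(1, ?) |> boundScore(0, 100, ?)
// The right-hand side is an ordinary expression with a placeholder; the
// previous value is substituted in, like an implicit temporary variable.
let step = one();
step = double(step);             // double(?)
step = add(1, step);             // add(1, ?)
step = boundScore(0, 100, step); // boundScore(0, 100, ?)
const hack = step;

console.log(nested, fsharp, hack); // 3 3 3
```

In the unary-call case the tax flips: F sharp writes `|> double` while Hack writes `|> double(?)`, which is TAB's point that each style optimizes a different kind of right-hand side.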
diff --git a/meetings/2021-03/mar-9.md b/meetings/2021-03/mar-9.md
index 81eea135..5c3dc423 100644
--- a/meetings/2021-03/mar-9.md
+++ b/meetings/2021-03/mar-9.md
@@ -1,7 +1,8 @@
# 9 March, 2021 Meeting Notes
+
-----

-**Remote attendees:**
+**Remote attendees:**
| Name | Abbreviation | Organization |
| -------------------- | -------------- | ------------------ |
| Ross Kirsling | RKG | Sony (PlayStation) |
@@ -36,14 +37,13 @@
| John Hax | JHX | 360 |
| Aki Rose Braun | AKI | PayPal |

-
## Editors Update
+
Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1AI-r8JDTIGD4Sg-DvazQcfchGJQ3Q21XOW9HnddsXRk/)
-
-KG: There have been no major changes to the specification since our last meeting less than two months ago. We have landed a few normative things. These are all things we had consensus for.
+KG: There have been no major changes to the specification since our last meeting less than two months ago. We have landed a few normative things. These are all things we had consensus for.

KG: #2216, the relevant change is that a derived class that uses a default super constructor will no longer use the current value of `Array.prototype[Symbol.iterator]`. It will do the iteration in a way which is not over-writable. It will invoke its super constructor without relying on whatever the current value of `Array.prototype[Symbol.iterator]` is.

@@ -51,7 +51,7 @@ KG: #2221: explicit methods for typed arrays. This is just an under specificatio

KG: #2256: this is a grammar issue Waldemar raised ages ago, which we have finally merged. The grammar allowed both a for-of loop with a variable named async and a regular for loop, so like a C-style for loop, that had an async arrow with a binding identifier named "of" as the parameter. So `for ( async of`, that sequence of four tokens could start both kinds of loops. You wouldn't know which production you were parsing at that time. And this is an ambiguity that we try to avoid.
So the solution was to just ban for-of loops with a variable named async, because it's just not a thing that you would particularly want to do; `async` is an odd choice for a variable name.

-KG: #1585: this is a from Matthias and others making array.prototype sort more specific. Again, one of those things has been open for a long time. But we finally got - we were waiting for more data to make sure everyone could implement it and they could so it's landed.

+KG: #1585: this is a change from Mathias and others making `Array.prototype.sort` more specific. Again, one of those things that has been open for a long time. But we finally got it landed - we were waiting for more data to make sure everyone could implement it, and they could, so it's landed.

KG: #2116: the order of the "length" and "name" properties on functions is observable. It did not previously have a defined order. Now it does. That's mostly a prerequisite for other changes, other editorial changes we would like to make.

@@ -59,32 +59,34 @@ KG: OK, and we have a very similar list of upcoming work. I'm not going to recap

KG: A reminder–there's a project board where we track the major stuff that we're planning on doing or have started doing.

-KG: And the most important thing in this presentation is: there is a candidate for the ES2021 spec ready. It's just cut from the main branch as of, I believe, Sunday evening or Monday morning. This is something that needs to be presented to begin the formal two-month period where everyone can have their company's lawyers review the candidate to make sure there's no IP that they care about or any other things that they care about that would prevent this from being released as a standard. So we would like to start the opt-out period now. I would like to ask for unanimous consent for the 2021 candidate, which will begin the IPR out that period towards putting this as a formal standard.

+KG: And the most important thing in this presentation is: there is a candidate for the ES2021 spec ready.
It's just cut from the main branch as of, I believe, Sunday evening or Monday morning. This is something that needs to be presented to begin the formal two-month period where everyone can have their company's lawyers review the candidate to make sure there's no IP that they care about, or any other things that they care about, that would prevent this from being released as a standard. So we would like to start the opt-out period now. I would like to ask for unanimous consent for the 2021 candidate, which will begin the IPR opt-out period towards publishing this as a formal standard.

MM: Sounds good to me.

-DE: I support this as well.

+DE: I support this as well.

-YSV: I also support this.

+YSV: I also support this.

IS: So it will take maybe one or two days until it is officially published, you know, on the Ecma documents etc. So we can take it as done practically today, but publication takes one or two days. (IS Note outside the meeting: It has been done already on March 10, 2021. End: May 10, 2021)

-JHD: It's on the reflector and it's also a public release on the GitHub repo.

+JHD: It's on the reflector and it's also a public release on the GitHub repo.

-IS: We have two channels, for the official Ecma Channel, you know with the Ecma documentation and etc. And then we have our own in TC39. So I will immediately write to Patrick Ch. that he should put it out and then later it will be put out tomorrow, so it's almost the same.

+IS: We have two channels: the official Ecma channel, you know, with the Ecma documentation etc., and then our own in TC39. So I will immediately write to Patrick Ch. that he should put it out, and then it will be put out tomorrow, so it's almost the same.

### Conclusion/Resolution

-* Unanimous consent for 2021 Candidate
-* Opt out period until May 10th, 2021.
+
+- Unanimous consent for 2021 Candidate
+- Opt out period until May 10th, 2021.
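KG's #2256 grammar ambiguity above is easiest to see by probing the parser directly. A sketch, using `new Function` so the parse happens at runtime; the SyntaxError behavior assumes an engine that has implemented the restriction:

```javascript
// A C-style for loop whose initializer is an async arrow function with a
// single parameter named `of`. This parse remains legal: the four tokens
// `for ( async of` are followed by `=>`, which disambiguates.
const stillLegal = new Function("for (async of => {}; false; ) {}");

// A for-of loop whose loop variable is named `async`. This used to start
// with the very same four tokens; under the merged change it is banned,
// so the parse throws instead of being ambiguous.
let nowBanned = false;
try {
  new Function("for (async of [1, 2, 3]) {}");
} catch (e) {
  nowBanned = e instanceof SyntaxError;
}

console.log(typeof stillLegal, nowBanned);
```

Renaming the loop variable (e.g. `for (x of [1, 2, 3])`) is all that affected code needs, which is why KG calls `async` an odd choice for a variable name here.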
## ECMA 402
+
Presenter: Leo Balter (LEO)

LEO: For ECMA 402 there is a very interesting part here. In the repo wiki pages we have a reasonably well-maintained proposal and PR progress tracking page, and it's a good one for everyone to check what the merged PRs are and to keep track of what is already in the 2020 edition and what is coming for the 2021 edition. This is a good way to see and keep track of what we have for 2021, including Intl.ListFormat, Intl.DisplayNames, the date-time format dateStyle and timeStyle options, and also formatRange. We have Intl.Segmenter possibly going for stage 4 in this meeting, but this is not part of the 2021 edition.

-LEO: I would just like to highlight right before our release candidate cut and André Bargull did at the pretty impressive review of the specs and I think this has been really /interesting. There is a lot of things here to take a look at it if you're interested. That's mostly it. As a quick highlight as well, I think I fixed the GitHub actions workflows. We recently migrated from Travis to GitHub. But now we've got a good working deploy process as well using GitHub actions and it's really faster and it runs very smoothly.
+LEO: I would just like to highlight that right before our release candidate cut, André Bargull did a pretty impressive review of the specs, and I think this has been really interesting. There are a lot of things here to take a look at if you're interested. That's mostly it. As a quick highlight as well, I think I fixed the GitHub Actions workflows. We recently migrated from Travis to GitHub Actions, and now we've got a good working deploy process using GitHub Actions; it's really faster and it runs very smoothly.

-LEO: We also have the release candidate. So if you jump to https://tc39.es/ecma402/2021/. You should be able to see it, but there is also a pdf version with numbered pages in the same reflector thread that JHD posted for.
yes, 2021 and I think for formalities I should be asking for the same consensus for this release candidate itself and therefore other reviews. I'm pretty sure @anba has captured most of the corner cases around the Ecma 402 specs and I’m really thankful.
+LEO: We also have the release candidate. So if you jump to https://tc39.es/ecma402/2021/ you should be able to see it, but there is also a pdf version with numbered pages in the same reflector thread that JHD posted. Yes, 2021. And I think for formalities I should be asking for the same consensus for this release candidate itself, and for the other reviews. I'm pretty sure @anba has captured most of the corner cases around the Ecma 402 specs and I’m really thankful.

IS: So also the same two-month period for opt-out. It starts, let's say, from tomorrow, so it would be May 10 when it finishes. Is that correct?

@@ -92,60 +94,69 @@ AKI: I believe so.

IS: Okay, good. Thank you. So this is for both specifications, ECMA-262 and ECMA-402, and it also means that both are now frozen. Frozen means that obviously we will find editorial changes and mistakes and whatever, so those are possible, but substantive changes etc. are not. So this is what “frozen” means: we still can make “editorial” changes (but not substantive ones). Okay, thank you. So, I think I understood.

-AKI: All right. Excellent. Congratulations to 2021 us.
+AKI: All right. Excellent. Congratulations to 2021 us.
+

### Conclusion/Resolution

-* Unanimous consent for 2021 Candidate
-* Opt out period until May 10th.
+
+- Unanimous consent for 2021 Candidate
+- Opt out period until May 10th.
+

## Introducing: Make B.1.{1,2} (octal literals & escapes) normative for sloppy code
+
Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/tc39/ecma262/pull/1867)

KG: So this is a project that we began quite a long time ago.
We got consensus on basically getting rid of annex B, in the sense of merging it into the main specification with similar requirements around when it is normative and when it is not. However, since then there has been some pushback around parts of it, so we are doing it in a more piecemeal way. You may remember that a meeting or two ago we talked about moving the `__proto__` syntax and accessor into the main specification. This is another part of that.

-KG: So annex b.1.1 B.1.2 are octal escapes, legacy octal integer literals. This is like 034, or whatever. We would be moving them into the main specification, still only legal in sloppy mode code. So the only change as far as normativeness is that this would no longer be optional for non web browsers. Every implementation would be required to have it as in practice -- like if you want to run code that's out there, you probably need to have this anyway, whether or not you are a web browser. Yeah, so that's the change that: upstreaming b.1.1 and b.1.2 into the main specification with the same strictness requirements as is there currently, but without the optionality implied by Annex B. I would like to ask for consensus.
+KG: So Annex B.1.1 and B.1.2 are octal escapes and legacy octal integer literals. This is like 034, or whatever. We would be moving them into the main specification, still only legal in sloppy mode code. So the only change as far as normativeness is that this would no longer be optional for non-web-browsers. Every implementation would be required to have it, since in practice -- like, if you want to run code that's out there, you probably need to have this anyway, whether or not you are a web browser. Yeah, so that's the change: upstreaming B.1.1 and B.1.2 into the main specification with the same strictness requirements as are there currently, but without the optionality implied by Annex B. I would like to ask for consensus.

MM: Yes.

-DE: Yes.
+DE: Yes.

-AKI: All right, that sounds consensus-y to me. Great.
+AKI: All right, that sounds consensus-y to me. Great.

### Conclusion/Resolution

-* Consensus
+
+- Consensus

## Normative: specify creation order for capturing group properties
+
Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/tc39/ecma262/pull/2329)

KG: All right, so you may recall that the order in which non-numeric properties are created on an object is normative, because it is observable using Object.keys. So for example, if you use named capturing groups in regular expressions, as you should, you get this `groups` object that has the capturing groups on it. Those are created by this loop that just iterates over all of the capturing groups and then says: if this is a named group, then create a named property. It said, for each of these integers that is a capturing group, you should create a property, but it did not say to do it in any particular order. You could have one, two, three, four, five, six, seven, eight, nine, ten, but it didn't have to do them in that order. So the change is just to say: do it in ascending order. Again, this is observable via Object.keys. As far as I am aware everyone does this anyway; I can't imagine doing it any other way. Well, I can imagine it, but there are also test262 tests which expect this - which was slightly wrong, because the spec didn't actually define an order. So I would like to ask for consensus to add "in ascending order" to this step.

-DE: Makes sense to me.
+DE: Makes sense to me.

-??: Yes, please.
+??: Yes, please.

-MM: Yes good.
+MM: Yes good.

DE: Tangent–since we are discussing things about capture groups: there's this prohibition against named capture groups that are duplicates, but sometimes that's very useful in a disjunction or even a repetition. So if anybody thinks this should be relaxed, as many people do, and wants to champion a proposal: please get in touch with me. I'd be happy to work with you on that. Thank you so much, Kevin, for fixing up the loose ends for capture groups.
- AKI: Thank you, Daniel.
+

### Conclusion/Resolution

-* Consensus
+
+- Consensus
+

## Backup incumbent tracking for FinalizationRegistry jobs
+
Presenter: Shu-yu Guo (SYG)

- [slides](https://docs.google.com/presentation/d/1w8b_kPc5UccV4Y_k3WEsSnQLMoHWDqMdkhQ2MIJ-OBk/edit#slide=id.p)
-- [proposal](https://github.com/tc39/ecma262/pull/2316)
+- [proposal](https://github.com/tc39/ecma262/pull/2316)

-SYG: Alright, so basically I'm not going to recap what back up the incumbent settings object tracking thing is. It's like HTML Arcana, but remember I did explain this a couple of meetings ago for promises specifically to to add these hosts defined host hooks called make host make job callback and host called job call back for these callbacks that You pass on to the host to run like promise-like promise handers the idea is that then the host like HTML can add whatever state they need to it and pull it back out when they call it in this case. They would track the backup and convent object thing. So we did this for promises and this should be uniformly done for all callbacks that go to the host and we forgot to do this for finalization registry callbacks when we merge finalization registering the main spec. So, this PR is basically to add those two callbacks to the finalization registry. So those two host defined abstract operations that these supposed to host hook calls to the finalization registry machinery and this behavior is we're doing this to consistency with promise callback behavior. It will unblock the HTML integration PR for finalization registry and weak refs get the HTML integration stuff merge, which is good because it's already stage 4. Firefox is the only one who implements this incumbent tracking behavior per spec for both promises and finalization registry is my understanding, please correct me if I'm wrong Yulia.
+SYG: Alright, so basically I'm not going to recap what the backup incumbent settings object tracking thing is.
It's like HTML arcana, but remember, I did explain this a couple of meetings ago for promises specifically: we added these host-defined host hooks, called HostMakeJobCallback and HostCallJobCallback, for the callbacks that you pass on to the host to run, like promise handlers. The idea is that the host, like HTML, can attach whatever state it needs to a callback and pull it back out when it calls it; in this case they would track the backup incumbent settings object. So we did this for promises, and this should be uniformly done for all callbacks that go to the host, but we forgot to do it for FinalizationRegistry callbacks when we merged FinalizationRegistry into the main spec. So this PR is basically to add those two host hook calls to the FinalizationRegistry machinery, for consistency with the promise callback behavior. It will unblock the HTML integration PR for FinalizationRegistry and WeakRefs, which is good because it's already stage 4. Firefox is the only one who implements this incumbent tracking behavior per spec for both promises and FinalizationRegistry, is my understanding; please correct me if I'm wrong, Yulia.

YSV: (silent confirmation)

SYG: And Chrome is interested in aligning here, but threading through the incumbent object correctly everywhere in Blink and V8 is, I think, going to be some work, and we're not prioritizing it very highly currently; but the plan is to eventually align on this behavior. And if you look at the HTML spec, there is in fact a little side icon saying this behavior is currently only implemented in Firefox. So that's it. Any issues with getting consensus for this? Is there anything on the queue?
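A purely illustrative sketch of the HostMakeJobCallback / HostCallJobCallback shape being described; `currentIncumbent` and these function names are stand-ins, not real host APIs. The host snapshots state when a callback is handed to it, and restores that state around the eventual call:

```javascript
// Stand-in for the host's incumbent settings object (hypothetical).
let currentIncumbent = "realmA";

function hostMakeJobCallback(callback) {
  // Capture host state at the time the callback crosses into the host.
  return { callback, incumbent: currentIncumbent };
}

function hostCallJobCallback(jobCallback, thisArg, args) {
  // Restore the captured state for the duration of the call.
  const saved = currentIncumbent;
  currentIncumbent = jobCallback.incumbent;
  try {
    return jobCallback.callback.apply(thisArg, args);
  } finally {
    currentIncumbent = saved;
  }
}

const job = hostMakeJobCallback(() => currentIncumbent);
currentIncumbent = "realmB"; // host state has moved on since registration
console.log(hostCallJobCallback(job, null, [])); // "realmA"
```

The spec's default implementations are the degenerate versions of this: wrap nothing, and just call.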
-MM: My question is, for the incumbent thing, when we introduced that for promises, did we make some big qualification, at least with a non normative note, hopefully something more normative, that this is only for web browsers rather than a general host hook that should get that hosts should feel free to use.
+MM: My question is, for the incumbent thing, when we introduced that for promises, did we make some big qualification - at least with a non-normative note, hopefully something more normative - that this is only for web browsers, rather than a general host hook that hosts should feel free to use?

SYG: We certainly did. Let me - okay, I think you don't mean incumbents; I think you actually mean the host hooks HostCallJobCallback and HostMakeJobCallback. And it's not even non-normative: if you look up those AOs in the spec - actually, maybe I can just share. But basically, if you look up those AOs in the spec, there's a default implementation: HostCallJobCallback just does the plain call, and HostMakeJobCallback just makes a wrapper that does nothing, and the note says ECMAScript hosts that are not web browsers must use the default implementation,

@@ -161,16 +172,20 @@ MM: It's fine with me.

YSV: You have consensus from my side Shu, and what you said was correct for Firefox.

-SYG: Okay great. Thanks for confirming.
+SYG: Okay great. Thanks for confirming.
+

### Conclusion/Resolution

-* Consensus
+
+- Consensus
+

## Class Static Initialization Blocks
+
Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-class-static-block)
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkfhG_gVnKlNwMT-MyA?e=owLLRf)

-RBN: I will keep this short. I put 15 minutes on the agenda. I just want to provide a brief update on the class static initialization block proposal that we discussed in the last meeting.
We've already gone over the motivations, so I'm not going to spend too much time talking about that. What I do want to point out is where we ended up with proposed semantics. What we discussed is the proposed semantics for stage 3, which was that we would allow for multiple static initialization blocks per class. Which was a change from only 120 more as of PR 38, which was what we discussed as part of the conditional consensus for stage 3. We would evaluate these static initialization blocks interleaved with static field initializers as part of the layering of this proposal on top of the static Fields proposal. That has been addressed.
+RBN: I will keep this short. I put 15 minutes on the agenda. I just want to provide a brief update on the class static initialization block proposal that we discussed in the last meeting. We've already gone over the motivations, so I'm not going to spend too much time talking about that. What I do want to point out is where we ended up with the proposed semantics. What we discussed is the proposed semantics for stage 3, which was that we would allow multiple static initialization blocks per class - a change from only one, as of PR 38, which was what we discussed as part of the conditional consensus for stage 3. We would evaluate these static initialization blocks interleaved with static field initializers, as part of the layering of this proposal on top of the static fields proposal. That has been addressed.

RBN: We still currently don't support decorators on static blocks - we don't know exactly what that would mean yet - but if that's something we eventually do want to do, it will probably be discussed as part of the decorators proposal or later. There was another issue that we were concerned about, which was whether or how to handle new.target inside of a static block.
At the time, that did not feel very clearly specified in static fields; that was because I was looking at an older version of the proposal spec text, rather than the version that is the diff from the actual ECMA-262 spec. So this has been updated and is now consistent, in that new.target will return undefined, just like it does in methods and in static fields.

@@ -178,7 +193,7 @@ RBN: The semantics we also discussed, these have not changed, is a static initia

RBN: So in the last meeting in January, we conditionally advanced to stage 3 pending the changes that we just discussed. Those were approved and merged. So, assuming no other concerns, that theoretically means that we are now at stage 3 with this proposal, since that was the only blocking issue for stage 3. That's pretty much all I have for this, if anyone has any comments that they'd like to add.

-DE: Great job on this proposal. I'm very happy about how responsive you were to all the concerns and patient with my review. And so, thanks. I support this being considered stage 3, which I agree it kind of already is.
+DE: Great job on this proposal. I'm very happy about how responsive you were to all the concerns and patient with my review. And so, thanks. I support this being considered stage 3, which I agree it kind of already is.

RBN: All right, since we have the conditional approval, I'm not specifically asking for stage advancement, since this is essentially now stage 3. I am interested in getting some feedback from implementers that would be interested in investigating this feature, for what we need as our requirements for stage 4, and I'll probably be filing issues on various issue trackers and, in the near future, starting to work on the stage 4 process.

@@ -186,26 +201,27 @@ YSV: It's already tracked on Firefox.
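The semantics summarized above can be sketched like this (class and property names are made up; requires an engine that already implements the proposal, e.g. a recent V8):

```javascript
// Multiple static blocks are allowed, they interleave with static field
// initializers in textual order, `this` is the class itself, and new.target
// evaluates to undefined, as in methods and static field initializers.
class C {
  static order = ["field a"];
  static {
    this.order.push("block 1");  // `this` is the class C itself
    C.sawNewTarget = new.target; // undefined, per the proposal
  }
  static b = this.order.push("field b");
  static {
    this.order.push("block 2");
  }
}
console.log(C.order);        // ["field a", "block 1", "field b", "block 2"]
console.log(C.sawNewTarget); // undefined
```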
RBN: Wonderful. If there are publicly available issue links for these, you could either contact me directly or just add something to the issue tracker so I can track those issues. That would be helpful.

-SYG: V8 has implemented as a flagged feature and I plan to send the intent to ship soon. Probably Friday.
+SYG: V8 has implemented it as a flagged feature and I plan to send the intent to ship soon. Probably Friday.

-RBN: And we have a community contributor that's already working on putting together a down level implementation for typescript right now. And that's all I have. So, thank you.
+RBN: And we have a community contributor who's already working on putting together a down-level implementation for TypeScript right now. And that's all I have. So, thank you.

### Conclusion/Resolution

-* Still stage 3
+- Still stage 3

## Records and Tuples update
+
Presenter: Robin Ricard (RRD)

- [slides](https://docs.google.com/presentation/d/15ggPmSVt-cI9asKaoolZkvjvV62Xh3I9LSD7R5nXQ8A/edit)

RRD: This is a really quick update, so I don't want to take too much of everyone's time here, because this leads to questions that we intend to ask the committee. This is not a decision that we're taking here. So no stage advancement or anything like that today on Record and Tuple.

-RRD: So this is basically about coming back to what (we discussed?) in the stage 1 slides in October 2019, which is that code that you would write for record and tuple should also work with objects and arrays and execute mostly the same especially while accessing them. Right and we found out recently by triaging things and making sure that everything was coherent that Array.prototype doesn't have all of the methods that we added to record prototype and we quickly thought about it and we are thinking of potentially adding them to the array prototype.
And so that include the `popped` which removes and elements and gives you basically a copy of the Tuple but with the element popped, `pushed`, `reversed`, `shifted`, `sorted`, `spliced`, `shifted`, and `with` which lets you change the value given an index and a new value that for replace it. And as it is not noted here, but if we were to add them to Array.prototype, we would add them in such a way that each returns an array nautical and we actually found out that this could be useful without recording Tuple that even if tuples can do exist.
+RRD: So this is basically about coming back to what (we discussed?) in the stage 1 slides in October 2019, which is that code that you would write for Record and Tuple should also work with objects and arrays and execute mostly the same, especially while accessing them. Right, and we found out recently, by triaging things and making sure that everything was coherent, that Array.prototype doesn't have all of the methods that we added to Tuple.prototype, and we quickly thought about it and we are thinking of potentially adding them to the array prototype. And so that includes `popped`, which removes an element and gives you basically a copy of the tuple but with the element popped, `pushed`, `reversed`, `shifted`, `sorted`, `spliced`, and `with`, which lets you change the value given an index and a new value to replace it. And, as is not noted here, if we were to add them to Array.prototype, we would add them in such a way that each returns an array, not a tuple. And we actually found out that this could be useful even without Record and Tuple, even if tuples didn't exist.

WH: I couldn't hear you. You said it returned an array, not what?

-RRD: Yes. It would return an array not a tuple.
Because originally those come from the Tuple that prototype so we wouldn't be copying the exact same spec text word for word to array prototype and essentially this they would be similar but wouldn't be the same so they would return an array for Array.prototype, right? Is that clear?
+RRD: Yes. It would return an array, not a tuple. Because originally those come from Tuple.prototype, we wouldn't be copying the exact same spec text word for word to Array.prototype; essentially they would be similar but wouldn't be the same, so they would return an array for Array.prototype, right? Is that clear?

WH: So they would always return an array even if you use them on a Tuple?

@@ -215,17 +231,17 @@ WH: Okay, so you get back with whatever the kind of object you called them.

RRD: Yes, so in the case of arrays you would get arrays.

-RRD: And yes, we found out that we could see benefits without even considering the existence of tuples in JavaScript. For example, the possibility to reverse without having to make a copy or sort without having to make a copy beforehand is actually quite useful.
+RRD: And yes, we found out that we could see benefits without even considering the existence of tuples in JavaScript. For example, the possibility to reverse without having to make a copy beforehand, or to sort without having to make a copy beforehand, is actually quite useful.
So it occurred to us that it might be a good idea, so we would like to get a general, you know, temperature of the room here. I already start to see the queue; essentially we'd like to ask the committee whether we should pursue this as a separate proposal or as part of the bigger Record and Tuple proposal, and that's all I have for that presentation.

KG: Yeah, this seems great. I support doing it as a separate proposal. We have done things before where, if we think that these things are only useful in the context of Record and Tuple - which I do not, to be clear; I think they're independently useful - but even if we did think that, we have done these sorts of linked proposals before, where we say this only advances if this other thing advances. So I think there's no problem there even if we do want to gate this on Record and Tuple, although again, I think it is independently useful.

-RRD: Okay noted.
+RRD: Okay, noted.

MM: So first of all a question, which is: the fact that (let's say) `pushed` on a tuple returns a tuple and `pushed` on an array returns an array - is that because the methods are different, or is it based on what kind of thing the `this` is? To put it another way, if you take Array.prototype.pushed and call it on a tuple, or vice versa, what would you get?

TAB: I can answer this: they would be different methods, so the tuple one on an array would probably fail. Well, if it's Array.prototype.pushed it would probably work, because Array.prototype.pushed would work on anything array-like, but it would give you an array back, not a tuple.

-MM: Ok, good. Just to verify the array methods would generic just like the original just like the existing array methods are but they would always return an array just like the existing array methods do.
+MM: Ok, good.
Just to verify: the array methods would be generic, just like the existing array methods are, but they would always return an array, just like the existing array methods do.

RRD: I mean, yeah, this is not defined at the moment.

@@ -235,13 +251,13 @@ MM: OK thank you. That's what was very clear. To answer your question. I would a

RRD: All right. I don't think we have time to discuss that specific last part, but we would like to talk with you, MM, at a later point about this last slide.

-SYG: Wasn’t Waldemar in the queue before me?
+SYG: Wasn’t Waldemar in the queue before me?

WH: I had a question very similar to MM’s. It was addressed.

-SYG: So before I go into my topic to quickly address what DE said, I would like us to be open to the possibility that new things don't have species even if we don't remove the old ones. But we don't need to go into that here. I would like to be us to be open to that possibility. I would like to urge the records and tuples champions - I wish you luck with the names.
+SYG: So before I go into my topic, to quickly address what DE said: I would like us to be open to the possibility that new things don't have species, even if we don't remove the old ones. But we don't need to go into that here; I would just like us to be open to that possibility. I would like to urge the Records and Tuples champions - I wish you luck with the names.
We have had tremendous difficulty historically adding new things to Array.prototype. This is adding a lot of things to Array.prototype, and I understand you would like to do it for consistency with Tuple.prototype. So while I strongly agree that this should be a separate proposal that is not sequenced before or after Records and Tuples, if you want to do this and you want the same names, you should probably come up with a plan, because it's possible that you might not get the same names between Tuple.prototype and Array.prototype.

-RRD: Yeah, it does. That's essentially why we are taking this as early as as we can because we understand that or read a prototype is more difficult to change then the new thing we're proposing with tuples.
+RRD: Yeah, it does. That's essentially why we are raising this as early as we can, because we understand that Array.prototype is more difficult to change than the new thing we're proposing with tuples.

SYG: That's all right. All right.

@@ -253,24 +269,27 @@ RRD: the idea is to get a new array that has the items. Right, so it's very simi

DE: So to maybe elaborate on that - `spliced` is a kind of similar situation to `popped`. These may be simpler: you know, Array.prototype.pop returns the last element and removes it from the array, whereas Tuple.prototype.popped just returns the remaining sequence; it doesn't give you the last element. `spliced` operates similarly. I think it makes sense, because you already could access the last element through other methods, and for splice you already could access these things through other ways. So you can query it first and then call this method to do the "mutation". So it makes sense that you only get the, you know, quote-unquote mutated sequence, and not both things. I'm waiting for a response. No response. Okay, Daniel, you're next.
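Since the proposed methods don't exist anywhere yet, here is a hand-written sketch of the copy-returning semantics being discussed; `popped`, `reversed`, and `withAt` are stand-in helper functions, not real APIs (`withAt` stands in for the proposed `with`, which is a reserved word as a plain binding):

```javascript
// Stand-in helpers approximating the proposed copy-returning methods.
// None of them mutate their input.
const popped = (arr) => arr.slice(0, -1);        // copy minus the last element
const reversed = (arr) => arr.slice().reverse(); // reversed copy
const withAt = (arr, index, value) => {          // copy with one slot replaced
  const copy = arr.slice();
  copy[index] = value;
  return copy;
};

const xs = [1, 2, 3];
console.log(popped(xs));       // [1, 2]
console.log(reversed(xs));     // [3, 2, 1]
console.log(withAt(xs, 1, 9)); // [1, 9, 3]
console.log(xs);               // [1, 2, 3] - the original is untouched
```

Note how `popped` returns only the shortened sequence, not the removed element, matching DE's point above.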
-DRR: Just wanted to say that I think when we solve this on our team, we probably got to prefer this or I mean we like this approach because you're you're basically giving a way for existing data structures to be to take that sort of, you know, immutable approach for mutable data structures, right? So you can leverage a lot of the same techniques that you can use anyway. You just don't have to go through indirect slices or indirect helper functions. They're just both in. We definitely like to see that also lets you avoid some of the confusion of like push on one creates a new copy push another action mutates. I will also say, you know, there's this thing about reverse versus reversed which might be a little bit strange.
+DRR: Just wanted to say that I think when we saw this on our team, we liked this approach, because you're basically giving a way for existing mutable data structures to take that sort of, you know, immutable approach, right? So you can leverage a lot of the same techniques that you can use anyway; you just don't have to go through indirect slices or indirect helper functions. They're just built in. We definitely like to see that. It also lets you avoid some of the confusion of, like, push on one creates a new copy while push on another actually mutates. I will also say, you know, there's this thing about reverse versus reversed which might be a little bit strange.

-RRD: Yeah, it's definitely complementary approach to what we got into bullies is putting in and if we manage to make those things go here and we think it's going to be a net benefits both arrays and regards tuples.
+RRD: Yeah, it's definitely a complementary approach to what Record and Tuple is bringing in, and if we manage to get those things in, we think it's going to be a net benefit for both arrays and Records and Tuples.
All right, if we don't have any more questions, I'm happy to leave it there and give you back your time.

### Conclusion/Resolution
+
- New proposal suggested
- Has independent value from R&T
+

## Async Do update towards stage 2
+
Presenter: Kevin Gibbons (KG)

- [proposal](https://tc39.es/proposal-do-expressions/)
-[slides](https://docs.google.com/presentation/d/1GXk1UwhaXijT0Rcn3_I4HmVGsdxM9cpYqcRvVjdzIoA/)

-KG: Okay, right. Do Expressions have been presented before I'm not going to keep giving a full summary of them every time but briefly they are just a way to use a block of statements in expression position giving you the value the completion value as would be observed by for example `eval`, as the completion the value of the blocks. Okay this do expression assigns x to temp * temp, but temp does escape this expression. Completion values are already in the spec. I'm not proposing to make any changes to them at this point.
+KG: Okay, right. Do expressions have been presented before; I'm not going to keep giving a full summary of them every time, but briefly, they are just a way to use a block of statements in expression position, giving you the completion value - as would be observed by, for example, `eval` - as the value of the block. This do expression assigns x to temp * temp, but temp doesn't escape this expression. Completion values are already in the spec. I'm not proposing to make any changes to them at this point.

-KG: There is spec text. This has changed very slightly since I presented it. Or, rather, the screenshot here has changed very slightly since I presented it last time. An overwhelming majority of the spec text is specifying restrictions on which things you can write in a do expression, which is not in the screenshot. Those restrictions have changed slightly since last time.
The first thing is when last time I said that I didn't want to allow break continue or return to cross the boundary of the do and that was mostly a sort of a style thing, a question of what code ought to be legal rather than a question of what code is possible. However when I presented it, I got some strong pushback on that restriction from WH, in particular, and I did a sort of a survey of delegates, an informal - just I wanted to know what people thought through a Google form and of the 30 or so responses I got a pretty strong majority that was in favor of allowing break continue and return to cross the boundary of the do expression. So I have made that change to the proposal. I am now proposing to allow you to use `break`, `continue` and `return` in do expressions where you are in a context where it makes sense to use these operations. With the exception that you can't use `break` or `continue` in an expression in a loop head like even if you're in a nested loop, because there is this ambiguity about what those things do and also like please don't try to write that code. +KG: There is spec text. This has changed very slightly since I presented it. Or, rather, the screenshot here has changed very slightly since I presented it last time. An overwhelming majority of the spec text is specifying restrictions on which things you can write in a do expression, which is not in the screenshot. Those restrictions have changed slightly since last time. The first thing is when last time I said that I didn't want to allow break continue or return to cross the boundary of the do and that was mostly a sort of a style thing, a question of what code ought to be legal rather than a question of what code is possible. 
However, when I presented it, I got some strong pushback on that restriction - from WH in particular - so I did an informal survey of delegates through a Google form; I just wanted to know what people thought. Of the 30 or so responses, a pretty strong majority was in favor of allowing break, continue, and return to cross the boundary of the do expression. So I have made that change to the proposal. I am now proposing to allow you to use `break`, `continue` and `return` in do expressions when you are in a context where it makes sense to use these operations, with the exception that you can't use `break` or `continue` in an expression in a loop head, even if you're in a nested loop, because there is ambiguity about what those things do - and also, please don't try to write that code.

-KG: The second change, this is a much smaller change, is that if you have an if without an else as the last statement in the do expression I have also made that illegal because there's this uncertainty about whether you would get the previous line, the line that comes before the if, or undefined. So now this is disallowed you have to explicitly put an else block. If you want to get undefined it can just be empty and then it very clearly gives you undefined. Right.
+KG: The second change, a much smaller one, is that if you have an if without an else as the last statement in the do expression, I have also made that illegal, because there's uncertainty about whether you would get the previous line - the line that comes before the if - or undefined. So now this is disallowed; you have to explicitly put an else block. If you want to get undefined, the else block can just be empty, and then it very clearly gives you undefined.

KG: So those are the changes. I was going to ask for stage 2 at this meeting. 
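For readers following the recap, here is a hedged sketch of the semantics KG describes. The `do` form is proposal syntax and does not run in current engines, so the runnable part emulates the completion-value behavior with an IIFE; `f` is a made-up stand-in function, not from the proposal.

```javascript
// Proposal syntax (NOT executable in current engines):
//
//   const x = do {
//     const temp = f();
//     temp * temp;        // the block's completion value becomes x
//   };
//
// Per the restriction discussed above, a trailing `if` must carry an `else`
// so the completion value is unambiguous.
//
// Runnable emulation with an IIFE; `f` is a hypothetical example function:
const f = () => 3;
const x = (() => {
  const temp = f();
  return temp * temp; // unlike a do expression, the IIFE needs an explicit return
})();
console.log(x); // 9
```

This is also the comparison point for the later IIFE discussion: the do expression saves the function wrapper and the explicit `return`.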
But YSV raised (and a couple of other delegates also pointed out) that it is not necessarily going to be obvious to readers of do-expressions what all of these things do. So for example, someone might have an intuition that `return` within a do expression would return from the do expression somehow. We can never do things that everyone will understand a hundred percent of the time, but we should try to avoid doing things which everyone will think does something other than what it says. Or at least if we end up in a situation where everyone thinks it does one thing except perhaps for the programming language nerds who think about it in a different way that's bad. So in an effort to avoid this I'm going to try to do a small, very limited scope user study where I would have a brief introduction to do expressions and then a few snippets of code and have a multiple choice for each snippet: "do you think this does this, that, the other, or possibly something else." And if it comes back that in fact there is a very strong consensus view from users that some piece of code does something other than what I am proposing for it to do here then I will change or withdraw the proposal. If there is not a strong outcome, then I will go with what we usually do–using our best judgement. @@ -278,11 +297,11 @@ KG: Yeah, so that's where do expressions are at. That's why I'm not asking for s JHD: I just want to make sure it is on the record. I think this proposal is very useful even without break, continue and return. Linters will likely have rules against this. -MM: I very much appreciate the idea of the user study. I don’t think that you need to block on stage 2 for this. +MM: I very much appreciate the idea of the user study. I don’t think that you need to block on stage 2 for this. -KG: I am also excited about advancing the proposals. I think the questions about break, return or continue are allowed is a major semantic question. 
In particular since there was a blocking concern to having this proposal without break, return, or continue. I don't want that question to be unsettled when this goes to stage 2, I want us to have already made up our minds for it, and since part of the point of the user study is to see if users understand that semantics, I don't want to ask for stage two before the user study. So that's my thinking.
+KG: I am also excited about advancing the proposal, but I think the question of whether break, return, or continue are allowed is a major semantic question, particularly since there was a blocking concern to having this proposal without break, return, or continue. I don't want that question to be unsettled when this goes to stage 2; I want us to have already made up our minds, and since part of the point of the user study is to see whether users understand these semantics, I don't want to ask for stage 2 before the user study. So that's my thinking.

-MM: Okay. Thank you. That's all reasonable. 
+MM: Okay. Thank you. That's all reasonable.

PFC: I'm sad that this is not going to stage 2 yet, but I think a user study is a good way to resolve this question. I think making a decision based on what programmers in the JavaScript ecosystem think about the syntax is much better than speculating ourselves, so +1.

@@ -290,7 +309,7 @@ KG: I should say, I think a pretty likely outcome of the user study is that ther

DE: I was a little skeptical when I first heard about this waiting on a user study before stage 2 idea. But I mean, I really like the design of this study and I think it will get at the relevant questions. If this study were asking people for example, look you can do a return in a normal do expression shouldn't you be able to do it in an async do expression and ask people to say yes, or whether they had that intuition. 
I mean it's easy to trick people into saying that something that's different from the semantics we can provide but it sounds like this study will focus instead on more important qualities, like if people misinterpret the return statement with the new do expression to change the value of the do expression itself, which would be pretty serious if almost everybody interpreted it that way, so I'm happy about this design and thanks for the good work you're doing. -YSV: We will be working with Felienne to make sure that the questions are well formed from a scientific perspective before sending it into the wild so we'll make sure that we're not preparing people to answer in a specific way one way or the other. Like Kevin said we might not end up with a clear yes or no answer. That's pretty common when doing these kinds of studies, but I think what Kevin said is if 50% of people or more are getting it wrong consistently, then we might want to revisit our decisions here and talk about that. That's a pretty big number. But if it's less - I came to Kevin about this and I would also feel comfortable with our previous decision if most people are getting the right intuition from it. I would be comfortable with it going forward as it is. Otherwise, we may want to adjust and think about it again. I just want to make sure we're not undermining unintentionally how users understand the code that they write by introducing this ability to use - especially `return` might be problematic. But yeah, that's it. +YSV: We will be working with Felienne to make sure that the questions are well formed from a scientific perspective before sending it into the wild so we'll make sure that we're not preparing people to answer in a specific way one way or the other. Like Kevin said we might not end up with a clear yes or no answer. 
That's pretty common when doing these kinds of studies, but I think what Kevin said is if 50% of people or more are getting it wrong consistently, then we might want to revisit our decisions here and talk about that. That's a pretty big number. But if it's less - I came to Kevin about this and I would also feel comfortable with our previous decision if most people are getting the right intuition from it. I would be comfortable with it going forward as it is. Otherwise, we may want to adjust and think about it again. I just want to make sure we're not undermining unintentionally how users understand the code that they write by introducing this ability to use - especially `return` might be problematic. But yeah, that's it. JRL: So I wanted to offer the counterpoint to Jordan. I agree with Waldemar here if there is no control flow, I don't see the point of this over an IIFE besides saving the four characters to create the IIFE. It just doesn't seem like there's a whole lot of point. @@ -298,36 +317,36 @@ YSV: Just a quick response to that. We're not discussing - the discussion we've WH: A lot of discussion has been about the confusion about `return` statements. I think the behavior of `return` is pretty clear, but there's a much bigger source of confusion here, which is where iteration statements are allowed or not. Let me give some examples: -``` +```js a = do { lbl: { - while (f()) g(); - break lbl; - 44; + while (f()) g(); + break lbl; + 44; } }; a = do { lbl: { - while (f()) g(); - {break lbl;} - 44; + while (f()) g(); + {break lbl;} + 44; } }; a = do { lbl: { - while (f()) g(); - if (x) break lbl; - 44; + while (f()) g(); + if (x) break lbl; + 44; } }; a = do { lbl: { - while (f()) - break lbl; - 44; + while (f()) + break lbl; + 44; } }; ``` @@ -340,7 +359,7 @@ WH: You wrote the proposal so I know you know, but I don't think anybody else in KG: I have a response to this as well. -AKI: Are we waiting for over 50 people to answer though? 
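For readers puzzling over WH's snippets above: labeled blocks and `break label` are already legal statement-position JavaScript today; only the enclosing `do { ... }` is proposal syntax. A runnable taste of the labeled-block mechanics those examples build on:

```javascript
// `break lbl` exits the labeled block immediately; this runs in any
// current engine, no do-expression support required.
let result = "start";
lbl: {
  result = "reached";
  break lbl;             // jumps past the rest of the labeled block
  result = "not reached"; // unreachable
}
console.log(result); // "reached"
```

The open question in WH's quiz is what *completion value* such a block should yield when wrapped in a `do` expression, which is exactly what the examples probe.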
+AKI: Are we waiting for over 50 people to answer though? WH: I'd like to give folks a chance to at least read and think about it for a moment. Anybody willing to hazard a guess? @@ -354,15 +373,15 @@ DE: Yep, I want to add I don't think it's essential that everyone be able to pre WH: There's a lot of spec text to forbid these weird cases, and I'm just wondering whether it's worth it to forbid those. -KG: I heard some pretty strong sentiment that it should be disallowed, that loops in particular should be disallowed. I also feel strongly that declarations should be disallowed as the final statement. And figuring out what "final statement" means for a declaration is only very slightly easier than figuring out what it means for a loop. So I think if we have agreement that declarations should be forbidden as the final statement, we already get almost all of that complexity. And that is a restriction I am unwilling to give up. +KG: I heard some pretty strong sentiment that it should be disallowed, that loops in particular should be disallowed. I also feel strongly that declarations should be disallowed as the final statement. And figuring out what "final statement" means for a declaration is only very slightly easier than figuring out what it means for a loop. So I think if we have agreement that declarations should be forbidden as the final statement, we already get almost all of that complexity. And that is a restriction I am unwilling to give up. WH: Yeah, you want to prohibit those even in these weird cases? KG: Yes. -WH: Because you're hoping to change the semantics of how declarations work, or some other reason? +WH: Because you're hoping to change the semantics of how declarations work, or some other reason? -KG: Mostly that, yes. +KG: Mostly that, yes. WH: OK. Yeah, that makes sense. @@ -371,16 +390,18 @@ AKI: We're at time. 
You have 20 minutes later but we'll have to come back to it.

KG: I personally don't think that we should spend more time on this topic at this meeting because I'm not trying to advance it at this meeting. If the people on the queue really want to get their points, then talk to the chairs and we can try to find more time at this meeting. But otherwise I yield my 20 minutes.

### Conclusion/Resolution

-* Not asking for advancement; Kevin to proceed with study design
+
+- Not asking for advancement; Kevin to proceed with study design

## Top-level await status update

+
Presenter: Yulia Startsev (YSV)

- [proposal](https://github.com/tc39/proposal-top-level-await)

Displaying PR https://github.com/tc39/proposal-top-level-await/pull/161

-YSV: Hi everyone. This is an update for top-level await, which I'm hoping to get to stage 4 in the next meeting. We have been working pretty hard on getting through the remaining issues. In particular Guy Bedford has been a hero in responding to some of the bigger issues. The first one that I want to bring up is a bug fix which will describe very quickly. It was reported by Sokra (Tobias Koppers) who is working on webpack. They noticed that with the current spec of top-level the invariant that the children in a module graph complete before the parent completes has been broken. The behavior that they noticed it's detailed here (shows issue). What you would expect is to go through the children of the tree in the correct order. So you would first have `a` run which is importing a module which has an async node. The async node runs because it's the second in the order. It has a top level await, so it should wait and then the next resolving module should print C1 followed by X and the result is as follows. This isn't only in V8, but is due to a spec bug. Guy fixed it in the spec and the contents of the change have been merged. So you'll see this if you open the proposal now, we have a new field, which is the cycle-root. It holds a cyclic module record, and we've made a few changes. So instead of calling getAsyncCycleroot. We just access this root as a field. So there's that. I just want to check are there any concerns about this?
+YSV: Hi everyone. This is an update for top-level await, which I'm hoping to get to stage 4 at the next meeting. We have been working pretty hard on getting through the remaining issues. In particular, Guy Bedford has been a hero in responding to some of the bigger issues. The first one that I want to bring up is a bug fix, which I will describe very quickly. It was reported by Sokra (Tobias Koppers), who is working on webpack. They noticed that with the current spec of top-level await, the invariant that the children in a module graph complete before the parent completes is broken. The behavior that they noticed is detailed here (shows issue). What you would expect is to go through the children of the tree in the correct order. So you would first have `a` run, which imports a module which has an async node. The async node runs because it's second in the order. It has a top-level await, so it should wait, and then the next resolving module should print C1 followed by X, and the result is as follows. This isn't only in V8; it is due to a spec bug. Guy fixed it in the spec and the contents of the change have been merged. So you'll see this if you open the proposal now: we have a new field, which is the cycle-root. 
It holds a cyclic module record, and we've made a few changes: instead of calling GetAsyncCycleRoot, we just access this root as a field. So there's that. I just want to check: are there any concerns about this?

(silence)

@@ -388,7 +409,7 @@ YSV: Okay, so this has been merged, it is a normative change and a bug that need

YSV: What happens instead is that we end up swapping `X` and `B` because `B` needs to evaluate - `X` has this set of children and it just evaluates first. 
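The completion-order issue YSV is describing can be sketched roughly like this. Hedged: the field names and the sort are invented for illustration and are not the spec's machinery; the point is only that ready parents should complete in the order they entered "async evaluating" (post order), not whichever order their promises happen to settle.

```javascript
// Toy illustration (invented fields, not spec machinery): modules that are
// ready to complete are ordered by when they entered "async evaluating".
function completionOrder(readyModules) {
  return [...readyModules]
    .sort((a, b) => a.asyncEvalIndex - b.asyncEvalIndex)
    .map(m => m.name);
}

// Suppose X happened to become ready before B, but B entered async
// evaluating first; the expected completion order is still B before X.
const ready = [
  { name: "X", asyncEvalIndex: 2 },
  { name: "B", asyncEvalIndex: 1 },
];
console.log(completionOrder(ready)); // [ 'B', 'X' ]
```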
That's a little bit surprising, and in particular if this last module that is shared - this leaf module that's shared by all three of these - is not async, then the order will be like this. So it can be argued that this is an oversight. It doesn't change the behavior of module loading that we agreed to, which is the `Promise.all` semantics, but it is a much more significant change than the cycle-root update, which was a bug fix, that we saw earlier.

-YSV: (showing spec change) Here is the change and I think the most significant of the best way to look at this is down in - async module fulfilled is a good place to start reading this. We've removed some of the work that's happening in the async module execution fulfilled and moved it into another function, namely it is called gather async parent completions. We also have an execution list. The execution list consists of which parents are ready to complete. The other important point here is we then sort the list of parents according to their post order. This is detailed in the spec text in particular: "Note the order in which modules transition to async evaluating is significant". Additionally, the other change here is that non async modules which have async children are now also included in the async evaluating set, so they also have their async evaluating Flag set to true. So that's another change. The bulk of the work that happens here happens in the gather async parents completion parent completion method. And yes, that is it. This is a much more substantial change to the spec than the other one, but it is an important one because it would adhere more closely to user expectations. That's all I've got to share.
+YSV: (showing spec change) Here is the change. I think the best place to start reading is AsyncModuleExecutionFulfilled. 
We've removed some of the work that was happening in AsyncModuleExecutionFulfilled and moved it into another function, called GatherAsyncParentCompletions. We also have an execution list, which consists of the parents that are ready to complete. The other important point is that we then sort the list of parents according to their post order. This is detailed in the spec text, in particular: "Note the order in which modules transition to async evaluating is significant". Additionally, the other change here is that non-async modules which have async children are now also included in the async evaluating set, so they also have their async evaluating flag set to true. So that's another change. The bulk of the work happens in the GatherAsyncParentCompletions method. And yes, that is it. This is a much more substantial change to the spec than the other one, but it is an important one because it adheres more closely to user expectations. That's all I've got to share.

YSV: I'm asking for the committee's feedback on this normative change because if we get consensus on this, I will merge.

@@ -404,7 +425,7 @@ DE: I have to say, I've had a lot of trouble trying to understand. This is PR. I

SYG: I don't even really disagree that, due to a relative lack of uptake of native ESM deployment on the web, it's probably not a compat issue. I don't want this to drag on. Among other things, part of it is the release thing, and part of it is that I want us to very strongly adhere to the norm that we do not change semantics after stage 3 if we can help it, even if we think we made the wrong choice.

-DE: So just respond to that. I think I see it as we need consensus on a change after stage 3. So I do want us to close on this issue promptly like this meeting or next meeting at the latest. 
I think we're always open to changes, like we make normative changes to things that are in the main specification such as your proposal to remove subclassing built-ins, and we could think of a change like this as such a normative change even though this proposal is stable in some sense. +DE: So just respond to that. I think I see it as we need consensus on a change after stage 3. So I do want us to close on this issue promptly like this meeting or next meeting at the latest. I think we're always open to changes, like we make normative changes to things that are in the main specification such as your proposal to remove subclassing built-ins, and we could think of a change like this as such a normative change even though this proposal is stable in some sense. KM: I think the thing I'm going to talk about is somewhat of a bug fix that is independent of the if we don't take this change because it doesn't work. We should probably figure out something to do with the bug fixes. I think incidentally happens here, but I could also misread the aspect. It's very hard to read, not necessarily the fault of the people right inspectors because it's complicated, just generally sure. Okay. @@ -427,9 +448,13 @@ YSV: Additionally. It was breaking an important invariant. Without this change w YSV: Please take a look if you have any questions. I'm very happy to clarify anything or you through the spec. It is hard to read. I did need to implement it to fully understand it myself. So please get in touch. ## ECMA Recognition Awards + ### Conclusion/Resolution + Consensus on 3 nominations + ## Module Fragments (For Stage 1) + Presenter: Daniel Ehrenberg (DE) - [proposal](https://github.com/littledan/proposal-module-fragments) @@ -462,7 +487,7 @@ DE: So a big part of this is about the semantics of modules. The semantics of mo DE: I think the import meta URL should be the URL of the enclosing module then hash from the fragment. 
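DE's suggested `import.meta.url` behavior - the enclosing module's URL plus a hash with the fragment name - can be sketched with the standard WHATWG URL API. The helper name here is hypothetical; DE explicitly notes these details are still open.

```javascript
// Hypothetical helper illustrating DE's suggested rule for module fragments:
// import.meta.url = enclosing module's URL + "#" + fragment name.
function fragmentUrl(enclosingModuleUrl, fragmentName) {
  const url = new URL(enclosingModuleUrl);
  url.hash = fragmentName; // the URL API prepends "#" automatically
  return url.href;
}

console.log(fragmentUrl("https://example.com/app.js", "counter"));
// "https://example.com/app.js#counter"
```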
That's the whole point of using fragments. As for the shadowing semantics - what happens if you use a duplicate name - that could be defined per environment or be host-specific; I don't have a strong opinion. So this isn't a new idea. Inline modules were discussed in the ES6 cycle, before I even joined the committee. I don't know if people talked about the idea of using fragments before, but that might not be a good idea. Anyway, that's not really the core of it. The core is that we should have a concept of inline modules with statically specified names that can have static imports applied to them. And I think they're complementary to resource bundles that can virtualize network-level resources, just because there's this two-order-of-magnitude difference in scale, and you could think of it as a two-order-of-magnitude difference in the breadth of the semantics: the network layer of the web platform is quite broad and module loading is quite narrow compared to that. So I think they're pretty complementary. I want to ask the committee: should we add module fragments to JavaScript? When I'm asking for stage 1, I'm really asking about inline modules that are somehow in the module map; I'm not asking for committee buy-in on the details of this. I also want to ask if anyone wants to work with me on the proposal; I would be very happy to have co-champions involved. So please get in touch with me offline. Questions?

-MM: I want to verify first that module fragments only create initialized linked module instances. Unlike module blocks, of module fragments do not lead to any notion of a static unlinked uninitialized module record that can be multiple initialized in (?), is that correct?
+MM: I want to verify first that module fragments only create initialized linked module instances. 
Unlike module blocks, module fragments do not lead to any notion of a static unlinked, uninitialized module record that can be initialized multiple times (?), is that correct?

DE: Well, you know, there's no way to get at a first-class uninitialized, unlinked module record. I want to be careful when we talk about whether these things are always linked. When you parse a module that has several module fragments in it, I don't think we should eagerly import and link each of the module fragments that it contains. I think you may want to later dynamically import some of those, which will then fetch their dependencies and lead them to be linked. So in HTML semantics, the module map can represent modules that are fetched and linked but do not yet have their dependencies fetched and linked, and I think that's the thing that makes sense across environments, at least for environments that dynamically fetch JavaScript modules, which is not all environments. I think that's kind of the natural semantics for these dynamically-fetching module environments.

@@ -502,7 +527,7 @@ DE: Right, that's exactly the kind of idea. The idea more broadly. Is that resou

KM: I think I have a stronger opinion than Dan, which is: if we don't think that bundlers are going to use this, and we don't have strong feedback from at least one - preferably many; I don't know if there are actually many - that they're going to use this, then we should make sure they're going to use it before we ship it, because it feels like we should not repeat the mistakes that we made with modules.

-DE: That's what I've been trying to do. Parcel has been extremely positive about this and others have been somewhat positive but not to the point where I feel comfortable name dropping them. 
+DE: That's what I've been trying to do. Parcel has been extremely positive about this and others have been somewhat positive but not to the point where I feel comfortable name dropping them. 
SYG: Yeah, that'll make sense to me. Thanks for the explanation. I do think that. Yeah, given all of that I think I'm completely in support of stage 1 I do share the same concerns as Bradley about reusing URL fragment in space here. But yeah, you can figure that out later. Just want to make sure everything works together in the ecosystem. @@ -516,10 +541,12 @@ DE: Please let me know offline if you want to be a co-champion. MM: I would like to be a co-champion. - ### Conclusion/Resolution -* Stage 1 + +- Stage 1 + ## Collection normalization methods + Presenter: (BFS) - [proposal](https://github.com/tc39/proposal-collection-normalization) @@ -530,7 +557,7 @@ MM: I heard your entire presentation and I don't get it. I don't get what the di BFS: It's not just that the coercer is named. It's also that if you do have some sort of naming to the coercer, it implies some it is being renamed for reusability purposes and doing so for reusability purposes is a little difficult. You have two data points in a map and one data point in a set. There was a demand that we not use value to identify the items contained within a set and also a demand that we do use values for the items contained in a Set. So those don't work together. There was a separate proposal that, what if we allow either/or but not both, but in that case the argument was that then you can't reuse the normalizer from a map which could contain both. -MM: so my overall reaction to this controversy is I would rather settle on a resolution of this naming issue that I might not think is ideal rather than split those set off of this proposal. I think this proposal really needs to go forward with both set and mapped together. I would be perfectly happy with regard to the particular issue. I would be perfectly happy with map being coercedKey and coerceValue and set being coerceElement. 
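For readers outside this thread, the kind of API being debated can be sketched as a userland `Map` subclass. Hedged: the option names (`coerceKey`, `coerceValue`, `coerceElement`) were exactly what was contested in committee, so everything below is illustrative, not the proposal's final shape.

```javascript
// Illustrative only: option names were still under debate in committee.
class NormalizedMap extends Map {
  constructor(entries, { coerceKey = k => k, coerceValue = v => v } = {}) {
    super();
    this.coerceKey = coerceKey;
    this.coerceValue = coerceValue;
    for (const [k, v] of entries ?? []) this.set(k, v);
  }
  set(key, value) {
    // Normalize both parts of the entry before storing.
    return super.set(this.coerceKey(key), this.coerceValue(value));
  }
  get(key) {
    // Normalize on lookup too, so equivalent keys collide on purpose.
    return super.get(this.coerceKey(key));
  }
}

const m = new NormalizedMap([["1", "one"]], { coerceKey: String });
console.log(m.get(1)); // "one": numeric 1 is coerced to the string key "1"
```

The naming dispute is visible even in this sketch: a `Set` analogue has only one datum per entry, so "key" versus "value" versus "element" is genuinely ambiguous there.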
+MM: So my overall reaction to this controversy is that I would rather settle on a resolution of this naming issue that I might not think is ideal than split Set off of this proposal. I think this proposal really needs to go forward with both Set and Map together. With regard to the particular issue, I would be perfectly happy with Map getting coerceKey and coerceValue and Set getting coerceElement.

BFS: We did have that a couple presentations back, but there was a demand that it be key at the time. We could revisit if coerceElement would be acceptable after this duration of trying to get another name.

@@ -555,7 +582,7 @@ WH: Yeah, so if that's the case then this should not advance.

BFS: Mark suggested, perhaps we could use a different name, and you're saying they must be the same name.

WH: That is my only constraint. The name of the address should be the same across Set and Map. I don't care what it's called as long as it's the same.

-
+

BFS: Yes, then it is unresolvable, which is why I would like to split this.

WH: I find the notion that this is unresolvable to be unreasonable.

@@ -582,7 +609,7 @@ WH: I just find this position unreasonable.

BFS: I believe both of us do.

-BFS: Maybe we can continue the queue. 
+BFS: Maybe we can continue the queue.

AKI: Yes, let's. YSV, you're up next.

@@ -647,7 +674,7 @@ PFC: This agenda item is about Temporal. My name is Philip Chimento. I work at I

PFC: Here's an overview of what I'll be talking about during this presentation. There will be a short recap for people who have not seen earlier presentations, about what Temporal is and what it does; a summary of the changes that we've made in response to delegate reviews; and a summary of what is still open. There will be time reserved for discussion, and then at the end we plan to ask for advancement to Stage 3.

-PFC: But first just to address any questions about the time box and questions were raised about that. 
As I said, we are planning to ask for a stage advancement at the end of the presentation. During the last two weeks or so, we noticed delegates starting to get into more details of the proposal during the reviews. The editors recommended us to reserve plenty of time so that if people had concerns about details during this plenary, we wouldn't end up at the end of the plenary without having time to address all of the concerns and get to the stage advancement. I'm hoping that we won't have to use the whole time box.
+PFC: But first, to address the questions that were raised about the time box: as I said, we are planning to ask for a stage advancement at the end of the presentation. During the last two weeks or so, we noticed delegates starting to get into more details of the proposal during the reviews. The editors recommended that we reserve plenty of time so that if people had concerns about details during this plenary, we wouldn't end up at the end of the plenary without time to address all of the concerns and get to the stage advancement. I'm hoping that we won't have to use the whole time box.

PFC: So the recap: various Temporal champions have presented it at several of the meetings before this one, but I'm sure there must be some new delegates this time. So here's a short recap. The purpose of Temporal is to be a modern replacement for the much maligned Date object in JavaScript, while incorporating lessons learned from other popular date libraries such as Moment. In fact the proposal I believe was first championed by maintainers from Moment. 
This group of champions is one of the largest ones for any TC39 proposal certainly in the year that I've been a delegate, and it includes invited experts, delegates from Bloomberg, Google, Microsoft, Igalia where I work, and I think the large number of champions is probably fitting since the proposal is also one of the largest of proposals that I have been aware of in the year that I've been a delegate. @@ -669,7 +696,7 @@ PFC: A similar item that's being standardized elsewhere in Intl and Temporal in PFC: Again, similarly to the previous two items, we have a string format for month codes. That's currently shared between temporal and ICU4X. There are different codes for leap months such as this “M05L” example in the second bullet point and there are still different ones for combined months. These "special" months are not present in the ISO 8601 calendar and therefore nothing changes in the Temporal specification itself, because everything not in the ISO calendar is implementation defined. Nonetheless, this is something where we don't currently expect any changes, but since it is being discussed with external groups, we do want to make sure that we are using the same format as whatever is aligned on with these external groups. About these previous three items, just to sum up our expectations around parallel standardization processes: to be clear, we think these formats are far along enough in their respective processes that we don't expect further changes. However, should any of these other processes mandate a well-motivated change to the one of these formats, then we expect to ask for consensus to make the associated change to make Temporal match it. -PFC: Here I have a list of normative changes proposed by delegates during the review period that we need to investigate but didn't have time to complete before the plenary started. We believe that advancement to stage 3 can be conditional on these. 
There are links if you want to open the slide webpage and click through. Similarly, here is a list of editorial issues that we believe are okay to finish or iterate on during stage 3. Once again all these are clickable if you want to check out the details of each GitHub ticket. +PFC: Here I have a list of normative changes proposed by delegates during the review period that we need to investigate but didn't have time to complete before the plenary started. We believe that advancement to stage 3 can be conditional on these. There are links if you want to open the slide webpage and click through. Similarly, here is a list of editorial issues that we believe are okay to finish or iterate on during stage 3. Once again all these are clickable if you want to check out the details of each GitHub ticket. PFC: And a note about implementer feedback in stage 3. As I'm sure most people know, part of the reason stage 3 exists is for implementers to be able to implement a stationary target, but still be able to give feedback if there are concerns that only become apparent during implementation. We expect that concerns might come up during stage 3 and we will address them. So by asking for stage 3 we're giving implementers the go ahead to start implementing and raising these concerns so that they can be addressed. If I can take off my Temporal champions hat for a moment and put on my Igalia hat: at Igalia, we do plan to work on an implementation now if this reaches stage 3, so we'll be helping to provide this feedback. Okay, Temporal hat back on. We do expect that not only implementers might find bugs in the date and time algorithms, but also get bug reports from people using Temporal in polyfill form. One of these was opened just a few days ago, and we haven't had a chance to address it yet. I'm assuming that the process is that this kind of fix should require consensus at a plenary as well, which we will be happy to ask for as it comes up. So that is my slide material. 
And as I said at the beginning, I was advised that there would be a lot of discussion time necessary. So I think we have 45 minutes for the rest of today for discussion and then some more time from tomorrow. @@ -693,15 +720,15 @@ MM: And with regard to the time zone issue, does that vary dynamically after a p PFC: We have something in the spec text that says that it must not. -MM: Okay. +MM: Okay. -PFC: I can go into details on that, if it's relevant +PFC: I can go into details on that, if it's relevant. MM: Since you're asking for stage 3, I would say it is relevant. PFC: According to the specification: The timezone data, once retrieved for a particular time zone, cannot change during the lifetime of the surrounding agent. -MM: Given that, is there anything in Temporal, other than Temporal.now, if you leave aside Temporal.now, or when you hypothetically removed it. Is there anything remaining in Temporal that might change dynamically during the running of a single program? +MM: Given that, if you leave Temporal.now aside, or hypothetically remove it, is there anything remaining in Temporal that might change dynamically during the running of a single program? PFC: I don't believe so. @@ -717,7 +744,7 @@ SYG: Let's take the calendar as a concrete example. Do you have, in the screen t PFC: Let's see. We'd have to look where this operation is called from. I'll show [ToTemporalCalendar](https://tc39.es/proposal-temporal/#sec-temporal-totemporalcalendar) which is called here. -SYG: So step three there, that one takes the built-in `Temporal.Calendar` and then calls `from()` on it. So if I subclass `Temporal.Calendar`, how do these other classes use my subclass? +SYG: So step three there, that one takes the built-in `Temporal.Calendar` and then calls `from()` on it. So if I subclass `Temporal.Calendar`, how do these other classes use my subclass?
PFC: You can pass the instance of your subclass into the constructor of these other classes. Let's see if I can show you an example of that. Let's look at the [PlainDate constructor](https://tc39.es/proposal-temporal/#sec-temporal.plaindate). You are passing the instance of your class here, as this calendarLike parameter that's passed here to [ToOptionalTemporalCalendar](https://tc39.es/proposal-temporal/#sec-temporal-tooptionaltemporalcalendar), which is passed to [ToTemporalCalendar](https://tc39.es/proposal-temporal/#sec-temporal-totemporalcalendar). And since it's an object, then it's returned as it is. And that's the calendar that this PlainDate instance will use. So what you can't do is patch `from()` so that you could pass in the string identifier of your calendar subclass here and end up with an instance of your subclass. @@ -731,17 +758,17 @@ PDL: Yes, the intention is that everything else is as unsurprising with respect PFC: I imagine if the remove-built-in-subclassing proposal is presented again, it could be part of advancing that to ask for consensus on a patch to Temporal to remove that stuff. -SYG: I would imagine so. Of course, I would very much like that to happen before stage 3, but like I said, I am actually on the side of not holding up Temporal for this undecided question. So if anything I would like this to be a catalyst for us to have that discussion and if we cannot remove species, that this be the last built-in that we add with species, or something like that. +SYG: I would imagine so. Of course, I would very much like that to happen before stage 3, but like I said, I am actually on the side of not holding up Temporal for this undecided question. So if anything I would like this to be a catalyst for us to have that discussion and if we cannot remove species, that this be the last built-in that we add with species, or something like that. -JHD: The language doesn't have a consistent subclassing strategy. 
Even setting `species` completely aside, I made a quick list of the built-ins and it's a big mix. For example, Promise has `then` as its interoperability protocol or you can shadow it from a subclass, but it all falls back to slots like a proper subclass. Similarly, regular expressions - we all know it has a ton of interoperable protocols. But it also falls back to slots, raises and the same is true all the way through. Even in Maps and Sets, the constructor will call `set()` or `add()` which can be overridden by a subclass, but you still have to have a proper subclass with slots for all the base methods to work. +JHD: The language doesn't have a consistent subclassing strategy. Even setting `species` completely aside, I made a quick list of the built-ins and it's a big mix. For example, Promise has `then` as its interoperability protocol or you can shadow it from a subclass, but it all falls back to slots like a proper subclass. Similarly, regular expressions - we all know it has a ton of interoperable protocols. But it also falls back to slots, and the same is true all the way through. Even in Maps and Sets, the constructor will call `set()` or `add()` which can be overridden by a subclass, but you still have to have a proper subclass with slots for all the base methods to work. JHD: I agree that we shouldn't necessarily block any one proposal on resolving the question of what is the actual thing we want in the language for how to subclass stuff, or how to extend stuff, or whatever we want to call it. That said, there's the pattern from BFS’s proposals: they were talking about coerce-key and coerce-value hooks, separate from the actual controversy that we were discussing. I actually really like that pattern of passing hooks during construction time, because it means that new built-in methods could be added and they would still work. Similarly, there wouldn't need to be any observable calls to shadowed methods for all of those current cases.
If that pattern had been followed in Map or Set, for example, you wouldn't need to look up the `set()` or the `add()` method because it would either have been constructed with a hook or not. So I'm kind of in favor of that, and then there's species as well. (?) I wonder if there's some things that would make sense to remove from Temporal, like species, or the expectation of subclassing, and so on, temporarily, so that the bulk of the proposal could be unblocked in a way that would still allow us to answer that broader question, without locking us in. SYG, you said maybe this would be the last thing with species. Well, if that's even a possibility, maybe this should be the first thing without it, right? Because we can always add it later. -SYG: It's "just" resourcing, right? I mean, somebody needs to put in the time to figure out the extent of the web compat, somebody needs to put in the time to figure out what extension points are needed for Temporal. Speaking personally as a delegate, I don't feel that it's fair for me to ask the Temporal delegates to do that at this time, which is why maybe I’m on the other side. But if they are open to it, I certainly would be happy about that situation. +SYG: It's "just" resourcing, right? I mean, somebody needs to put in the time to figure out the extent of the web compat, somebody needs to put in the time to figure out what extension points are needed for Temporal. Speaking personally as a delegate, I don't feel that it's fair for me to ask the Temporal delegates to do that at this time, which is why maybe I’m on the other side. But if they are open to it, I certainly would be happy about that situation. -JHD: The summary of it is that I think that it would be really critical for lots of proposals to this language that we answer that question of how do you subclass, or extend, or whatever, built-ins and what's the way we want to do that even if we're constrained by existing patterns on existing built-ins. So I hope that we can address that.
+JHD: The summary of it is that I think it would be really critical, for lots of proposals to this language, that we answer the question of how you subclass, or extend, or whatever, built-ins, and what's the way we want to do that, even if we're constrained by existing patterns on existing built-ins. So I hope that we can address that. -DE: I'm confused by JHD's characterization of how things work now. I mean, I tried to explain this at my presentation last meeting where I really think there's a consistent pattern of, you have some things which just call methods and then you have those— you could think of them as protocols, or maybe protocol's the wrong word, but then there's other things where you're just an implementation of it where we use internal slots. If Temporal sticks quite consistently to this pattern. It seems like there's quite a large set of people in TC39 who disagree quite strongly with the patterns that ES6 set in terms of the way that classes and subclassing work, and subclassing built-ins. I think it'd be worth it for us to get together and consider this in a side meeting, so that we can come up with a concrete design to propose that the committee adopt, because I really think there is quite a clear pattern that the language follows today. I think it's pretty unfair to put it on the proponents of the Temporal proposal that they navigate all this. The people who want to change the the object-oriented convention should come up with a proposal for that. I'm a bit concerned that the different subclassing things could have performance impacts in ways that are perhaps different in Temporal compared to the ones that we found already for other classes. I think that's something that, as PFC was saying, we could really examine better in the context of a particular implementation. So I think there are multiple things that would make sense to do after stage 3.
We could advance proposals, like the remove subclassible built-ins proposal that retroactively removes the use of species both in arrays and in stage 3 proposals, if Temporal is in stage 3. And we could also try implementing the calendar and time zone APIs, see if they end up being too slow, and investigate changes and how optimizable they would be. Reconsidering conventions is very separate from Temporal and it's not that Temporal needs the conventions; it just tries to stick to the current conventions. So people who don't like the current conventions, it's up to them to really make a proposal to change them. Then the other thing is about the performance overhead of this and that's something that we can speculate about, but it's much better to investigate in the context of an actual implementation. So overall, I think these object-oriented concerns, while significant and we may want to propose changes, shouldn't hold back Temporal for stage 3. +DE: I'm confused by JHD's characterization of how things work now. I mean, I tried to explain this at my presentation last meeting where I really think there's a consistent pattern of, you have some things which just call methods and then you have those— you could think of them as protocols, or maybe protocol's the wrong word, but then there's other things where you're just an implementation of it where we use internal slots. Temporal sticks quite consistently to this pattern. It seems like there's quite a large set of people in TC39 who disagree quite strongly with the patterns that ES6 set in terms of the way that classes and subclassing work, and subclassing built-ins. I think it'd be worth it for us to get together and consider this in a side meeting, so that we can come up with a concrete design to propose that the committee adopt, because I really think there is quite a clear pattern that the language follows today.
I think it's pretty unfair to put it on the proponents of the Temporal proposal that they navigate all this. The people who want to change the object-oriented convention should come up with a proposal for that. I'm a bit concerned that the different subclassing things could have performance impacts in ways that are perhaps different in Temporal compared to the ones that we found already for other classes. I think that's something that, as PFC was saying, we could really examine better in the context of a particular implementation. So I think there are multiple things that would make sense to do after stage 3. We could advance proposals, like the remove subclassable built-ins proposal that retroactively removes the use of species both in arrays and in stage 3 proposals, if Temporal is in stage 3. And we could also try implementing the calendar and time zone APIs, see if they end up being too slow, and investigate changes and how optimizable they would be. Reconsidering conventions is very separate from Temporal and it's not that Temporal needs the conventions; it just tries to stick to the current conventions. So people who don't like the current conventions, it's up to them to really make a proposal to change them. Then the other thing is about the performance overhead of this and that's something that we can speculate about, but it's much better to investigate in the context of an actual implementation. So overall, I think these object-oriented concerns, while significant and we may want to propose changes, shouldn't hold back Temporal for stage 3. KG: Not exactly contrary to DE's point, but sort of sidestepping it: I think there's a pretty good case to be made for Temporal not falling cleanly into the pre-existing conventions, and in particular... I should lay out my thesis first, which is I think that we should drop all subclassing or at least all explicit support for subclassing from Temporal.
So all of the species stuff and all of the figuring out how to make a new instance that isn't just using the relevant intrinsic. The case for this is that, unlike, as far as I am aware, everything else in the language, this isn't just one class. This is a whole bunch of classes which interrelate in this particular way. That is, you can project from instances of one class onto instances of another. There's methods like `toPlainYearMonth()` on PlainDate or whatever, that gives you a new instance of PlainYearMonth from a PlainDate. And because you're making a related thing rather than making the same thing, there's no way to do the species song-and-dance where you figure out what class the user was hoping to get. You just can't, there's no protocol for that. So if you project, e.g., from your subclass of PlainDate to a subclass of a ZonedDate and then back to a PlainDate, you don't end up with an instance of the subclass, and there is no practical way for the language to help a subclass to do that. You really have to override all of these methods on all of the classes. And if you're going to do that anyway, it's just not that much additional advantage for the language holding your hand for the few cases where species is relevant. So, concretely there is something different about Temporal that makes it less suited for the language-assisted subclassing than the other things in the language, irrespective of the question of what we should do for things like Map and Set. I would like to suggest just dropping subclassing outright on the basis of that. No one has argued that they really want it for Temporal outside of consistency with the language, and I think that the argument from consistency is not that strong given that there is this unique group of classes that makes the subclassing support not as clean. @@ -765,7 +792,7 @@ RGN: I was going to say, I'm not in favor of anything that would break `extends` RGN: Yes, not a syntax error, but but a runtime error. 
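KG's species point above can be made concrete with plain `Array`, where the `Symbol.species` protocol does work for same-type methods like `map()` but has no analogue for cross-type projections. The subclass names below (`MyArray`, `PlainResultArray`) are illustrative, not anything from the proposal:

```javascript
// Symbol.species lets same-type methods like map() construct the subclass, and
// also lets a subclass opt out. Nothing comparable exists for cross-type
// projections such as PlainDate -> PlainYearMonth, which is KG's point above.
class MyArray extends Array {}

const a = MyArray.from([1, 2, 3]);
console.log(a.map(x => x * 2) instanceof MyArray); // true: species defaults to the subclass

class PlainResultArray extends Array {
  // Opting out of species: derived methods produce plain Arrays instead.
  static get [Symbol.species]() { return Array; }
}

const b = PlainResultArray.from([1, 2, 3]);
const mapped = b.map(x => x * 2);
console.log(mapped instanceof PlainResultArray); // false
console.log(mapped instanceof Array); // true
```

The opt-out in `PlainResultArray` is essentially the behavior that dropping species support from Temporal would bake in for every derived method.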
[new topic] I'm interested in documenting what happens if Temporal does advance today, implementations ship, and then the syntax changes through the IETF process. -USA: I can answer this. The timeline that we have in mind for IETF puts the tentative date for putting this into an RFC around July, and I don't feel realistically that implementations can ship before that. Also, I think that this back pressure works both both directions, if Temporal goes to stage 3. That's more pressure for IETF to accept the syntax, as this just put in a good amount of design work into coming up with that format. But of course you know people could still have concerns with that. We mentioned it, but I don't feel that any any changes to that design would be groundbreaking in any way. But another thing that I wanted to offer is that in case IETF, by the time it's an RFC, goes to to change the format to something else, we could come back to the committee and talk about it. Because I think there's general consensus around using whatever the IETF standard ends up being. +USA: I can answer this. The timeline that we have in mind for IETF puts the tentative date for putting this into an RFC around July, and I don't feel realistically that implementations can ship before that. Also, I think that this back pressure works both both directions, if Temporal goes to stage 3. That's more pressure for IETF to accept the syntax, as this just put in a good amount of design work into coming up with that format. But of course you know people could still have concerns with that. We mentioned it, but I don't feel that any any changes to that design would be groundbreaking in any way. But another thing that I wanted to offer is that in case IETF, by the time it's an RFC, goes to to change the format to something else, we could come back to the committee and talk about it. Because I think there's general consensus around using whatever the IETF standard ends up being. 
RGN: My concern is that advancing to stage 3 is a signal for implementations to ship, and we're then putting ourselves in a race condition where if they do ship and developers start counting on syntax that ultimately looks different when an RFC is published, we'll be stuck with it. You know, there will be the standard, and then there will be the stuff that was proposed as a standard by ECMAScript, and I'm not clear on how to prevent that, but I think that preventing that is critical. @@ -787,18 +814,18 @@ SYG: I wanted to ask— I'd like the context of the IETF process here and the sp USA: The timeline that we have in the charter right now is July 2021 for submitting this to the IESG. By submitting, I mean that at that point IETF would have a quick turnaround and decide: we will change it or we will not change it. -Bron Gondwana: I would probably say are we happy with the content of it. Our other problems are with the extensibility points and what it includes already, rather than the syntax, hopefully. But you can't guarantee anything with the IETF, it does not do deadlines. +Bron Gondwana: I would probably say are we happy with the content of it. Our other problems are with the extensibility points and what it includes already, rather than the syntax, hopefully. But you can't guarantee anything with the IETF, it does not do deadlines. -RGN: The resolution that made sense to me is that there is an explicit instruction to to not ship unflagged, and then at some future point things will advance far enough in IETF, and maybe that point is an actual RFC, is reached where the restriction on shipping unflagged is removed because at that point the format is stable. I'm not saying specifically July, I'm saying specifically just that there's instructions not to ship unflagged that will only be removed once the once the external process matured sufficiently.
+RGN: The resolution that made sense to me is that there is an explicit instruction to not ship unflagged, and then at some future point, when things have advanced far enough in IETF (and maybe that point is an actual RFC), the restriction on shipping unflagged is removed, because at that point the format is stable. I'm not saying specifically July, I'm saying just that there are instructions not to ship unflagged that will only be removed once the external process has matured sufficiently. SYG: I'm happy with that. I like that you and the other champions are taking the stage 3 signal seriously. I assume, of course, that all the places where you expect possible change due to upstream things like this are to be explicitly communicated. KG: I do want to talk about some more of the in-the-weeds details. I should preface this with, I'm in general very supportive of the Temporal proposal and almost all of the decisions that were made for the Temporal proposal. I apologize for not raising these more detailed issues at an earlier stage. There's just been a lot of stuff happening with the Temporal proposal and I couldn't keep up with it. So I didn't get a chance to do this very detailed review until when it was settled this past meeting or recently. Anyway, with that said, there was another issue that I raised on the issue tracker that didn't make it into the slides that I did want to discuss briefly, especially since JHD is here and I believe he took a different opinion in earlier discussions. The issue was that there is this `compare()` method on Temporal objects that is presumably intended to be used for sorting, as with `Array.prototype.sort()`.
And the current specification says that if you have two dates which represent the same point in time, so for example they are both January 5th or whatever, but they have different associated calendars, this means that they are not equal in the sense that the `equals()` method will not return `true` for them because they have this additional data, but they also don't compare as equal by the `compare()` method that’s on the screen now, because the calendar is taken into account as the last thing once you've looked at all of the other fields. If all of the other fields agree, then the calendar is used for ordering these things, and I think that's wrong. I think that two dates which represent the same day should not be arbitrarily ordered by the lexicographic ID of their calendar. Since `Array.prototype.sort()` is now stable, if you had two of these things in an array, the sort should leave them in the order that they were, as a stable sort will, if and only if the calendar is not taken into account. I think leaving them in the same order is the right thing. So, I think that the calendar ought not be taken into account and in particular, I think it's okay for the `compare()` method to return zero for two objects, even if the objects have an `equals()` method that says they are not equal, because they do represent the same point as far as sort order is concerned. These are points on the timeline and they represent the same point in time, so they shouldn't be treated as unequal for sorting purposes. It's okay for them to be unequal according to `equals()` because `equals()` is not concerned just for the position on the timeline. -PFC: I can respond quickly and then we could pick up the discussion here tomorrow. My personal preference would have also been for what you're saying, but as with many of these things they are the way they are because we had extensive discussions about them the champions group, and that's what we were able to reach consensus on. 
So I'm kind of hesitant to overturn that. But maybe there's some new information that you could bring to the discussion, which is what happened in the monkeypatching discussion. +PFC: I can respond quickly and then we could pick up the discussion here tomorrow. My personal preference would have also been for what you're saying, but as with many of these things they are the way they are because we had extensive discussions about them in the champions group, and that's what we were able to reach consensus on. So I'm kind of hesitant to overturn that. But maybe there's some new information that you could bring to the discussion, which is what happened in the monkeypatching discussion. KG: The reason I think it merits revisiting is because, from reading those issues — and I was not present for the discussions outside of the issues — but from reading the issues it looked like the discussion was based around a false premise, which is that `compare()` returning zero for two things must mean those things are conceptually equal, and that's just not true. That's not a contract in, for example, Java, which has built-ins that violate it for its own `Comparable` type. It's not a thing which really makes sense as a requirement specifically because we have made `Array.prototype.sort()` stable, which is a thing that is only sensible to do when you have two things that are unequal but compare as zero. That's why I wanted to revisit it. It's because I think there was a false premise in the discussion. PFC: Should we save the queue including this item and pick up here tomorrow? -**yes** +**yes.**
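KG's stable-sort argument above can be demonstrated with plain objects standing in for dates (toy shapes and a hypothetical calendar-ignoring comparator, not the real Temporal `compare()`):

```javascript
// Since ES2019, Array.prototype.sort is stable: a comparator may return 0 for
// items that are distinct (an equals() method could still say "not equal")
// while the sort preserves their original relative order.
const dates = [
  { day: 5, calendar: 'hebrew' },
  { day: 4, calendar: 'iso8601' },
  { day: 5, calendar: 'iso8601' },
];

// Calendar-ignoring comparator: only the position on the (toy) timeline counts.
const compare = (a, b) => a.day - b.day;

dates.sort(compare);
console.log(dates.map(d => `${d.day}:${d.calendar}`));
// [ '4:iso8601', '5:hebrew', '5:iso8601' ] (the two day-5 entries keep their input order)
```

If the comparator instead broke ties by calendar ID, the two day-5 entries would be reordered lexicographically, which is exactly the behavior KG argues against.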
Smith | BSH | Google | @@ -37,7 +37,6 @@ | Daniel Ehrenberg | DE | Igalia | | Jason Yu | JYU | Paypal | - ## Opening AKI: Has everyone had a chance to review last meeting's notes? [silence] That sounds like yes. One thing I really wanted to quickly mention—we had announced on the reflector that DRW will be helping us facilitate this meeting. I just want to make sure that we're all good with that. Does anybody have any concerns? @@ -47,6 +46,7 @@ AKI: Has everyone had a chance to review last meeting's notes? [silence] That so AKI: Ok let’s take that as a yes then. Alright, I think we're ready to move on to the secretariat's report. ## Secretary Report + Presenter: Istvan Sebestyen (IS) - [slides](https://github.com/tc39/agendas/blob/master/2021/tc39-2021-April_Secretariats_Report.pdf) @@ -61,7 +61,7 @@ IS: the next one and this is what I said that I will go over them very quickly b IS: So next one is, what is the situation regarding the approval schedule? The formal approval of ES2021 - as I said a couple of times - it will be in on the 22nd of June by the Ecma General Assembly. I am not sure whether it will be held as a face-to-face meeting. Probably not. I mean this would be only my guess. In Europe the corona figures are bad: in Switzerland very bad, in Germany very bad. So I am rather pessimistic, but the meeting will be on the 22nd, and we have done everything in terms of publication of the necessary documents. So the publication of the ES2021 specification. So that was done two months before the vote. So that is okay. Also, we have launched and I have already reported just at the last meeting that the so-called “opt-out” for the royalty-free IPR policy we have initiated that on the 10th of March. So that was the second day of the March meeting and it will end in two months on the 10th of May 2021. I have communicated with Patrick Charolais on that and so far we have not received anything back. 
So my expectation is that this is just a “formal opt out” and most likely nothing will come in. -IS: here are in the bottom of the slides the list of the appropriate documentation that have published announcing the out-out period and the draft standards on the Ecma file server. So this is the status. So, we are now quite well prepared for the June approval. +IS: Here, at the bottom of the slides, is the list of the appropriate documentation that has been published announcing the opt-out period and the draft standards on the Ecma file server. So this is the status. So, we are now quite well prepared for the June approval. IS: So now, this is regarding the feedback from the ExeCom that I got. As you may remember, we had a request to the ExeCom to produce a nice, good-quality version for the download versions of ECMA-262 and -402, and especially we were very interested in a good PDF version. Okay. So now what? The 15,000 Swiss francs was approved. The request was actually to get it on a periodical basis every year, but we have not received that, so we have only received that for the first year. Probably the ExeCom wanted to see how it works, etc., and if it works well, then I'm quite sure we will get it also in 2022. The ExeCom request was that the output should follow the Ecma template and the editorial guidelines. Then they would like to have first a Word document, because out of the Word document it is much easier to edit by the Secretariat and they can also easily convert it into a PDF version and even an HTML version, etc. So this is positive feedback from the ExeCom. It could have been even better, but I am already happy with that. Now we should proceed; there are a couple of people in TC39 who are already involved in that and they should give concrete advice on how to proceed. You know, what kind of typesetting services we should use, etc. They should consult this with the Ecma Secretariat.
The secretariat has to negotiate the contract with the typesetting service. But the TC39 editors should also be in the process, in the sense that they should review the finished product before the final blessing. They should have a look at the Word version to make sure that everything is okay. So this is regarding the typesetting support. @@ -69,7 +69,7 @@ IS: We also also have additional requests and this will be discussed maybe the t IS: And now the feedback regarding the Ecma recognition awards. So the ExeCom has agreed to JHD’s nomination. Regarding the other nomination (I don't want to mention the name here), the answer was basically that the Ecma Recognition Award is a recognition for Ecma members only. It was also a problem that Ecma-wide only 2-3 awards are given at every GA, and many awards are already “scheduled” for well-deserving non-TC39 people. So the number of Ecma Recognition Awards for a large group such as TC39 is not really optimal. -How to solve this problem, I am thinking about it for a long time…. So my suggestion is the following. Actually we are in TC39 a very large group and they are many excellent contributions. So probably it would be better if we ourselves invented and introduced a TC39 internal recognition program and that we would have more freedom also to include more people within TC39 but also for those who are not working for Ecma members, but do a useful work for TC39 projects. So we would have more flexibility. But concretely how to design a process for that and implement it - because this is really a very sensitive subject really to give those people the awards who have best deserved it - we need a good process, and the details are not clear yet. It is a very sensitive issue. I have honestly speaking not really a good idea how we could do this in TC39. But this is now a very rough idea and if the rough idea is acceptable then we should think about the implementation.
The easiest part of the implementation, which I am quite good at, is where and how to purchase the awards, how to order them, and how to distribute them etc. So on that part I certainly would offer my services, but for the other part - how to make the selection etc. - I would like to invite ideas to work out something in the TC39 community which makes sense. So this is about the recognition award.
+How to solve this problem? I have been thinking about it for a long time… So my suggestion is the following. We are actually a very large group in TC39 and there are many excellent contributions. So probably it would be better if we ourselves invented and introduced a TC39-internal recognition program, so that we would have more freedom to include more people within TC39, including those who are not working for Ecma members but do useful work for TC39 projects. So we would have more flexibility. But concretely how to design and implement a process for that - because this is a very sensitive subject, really to give the awards to those people who have best deserved them - we need a good process, and the details are not clear yet. It is a very sensitive issue. Honestly speaking, I don't really have a good idea how we could do this in TC39. But this is now a very rough idea, and if the rough idea is acceptable then we should think about the implementation. The easiest part of the implementation, which I am quite good at, is where and how to purchase the awards, how to order them, and how to distribute them etc. So on that part I certainly would offer my services, but for the other part - how to make the selection etc. - I would like to invite ideas to work out something in the TC39 community which makes sense. So this is about the recognition award.

IS: now the next meeting schedules. So the next GA is on the 22nd of June 2021. This is important for us because ES2021 will be approved there. The next ExeCom meeting will be on the 6th-7th of October in Geneva.
I don't know if that will be still remote.

@@ -79,7 +79,7 @@ IS: Oh, that's that's a good question. I don't know. I will ask back. Yeah, I me

YSV: Regarding the TC39 recognition award: I think it's very unfortunate that we won't be able to recognize the work of individuals, especially those who have been supporting TC39 and the specification with a great deal of work in their own free time as independent contributors. I would support the formation of a TC39 recognition award in particular, so we can recognize those contributions which have been very significant for the committee.

-IS: I fully agree with you. Yeah, I mean, unfortunately the situation is that for these Ecma recognition awards, to have two or three per semester is not much, and I know the June awards are already taken by others from other TCs who also deserve it (like people who have been working for Ecma on certain things, also in a TC chair position, for 20 years). So for us, with such a big committee, to share it with Ecma as a whole is not a good solution. So therefore I fully agree with you.
+IS: I fully agree with you. Yeah, I mean, unfortunately the situation is that for these Ecma recognition awards, to have two or three per semester is not much, and I know the June awards are already taken by others from other TCs who also deserve it (like people who have been working for Ecma on certain things, also in a TC chair position, for 20 years). So for us, with such a big committee, to share it with Ecma as a whole is not a good solution. So therefore I fully agree with you.

DE: I'm quite disappointed in what sounds like the results of the recent ExeCom meeting. I used to be on the ExeCom but, because there were more volunteers, I left it at this most recent meeting so the committee would not have a competitive election for the seats, and I hope that there's good TC39 representation on the ExeCom one way or the other.
It seems like the ExeCom is not respecting the TC39 consensus for the Ecma recognition award, where we achieved consensus to propose Feliene for that at the same level as JHD. Even if there were numerical limitations, I feel like we put them at the same level for this award. So I do support creating a TC39-level award, as Yulia said. I also think Ecma should be more active in communicating to us what they want from award recommendations, because the recommendation on the agenda was for both.

@@ -101,12 +101,12 @@ IS: I don't know, you know about this this sentence, you know that chairs are in

AI: We didn't get an email this time around. Yeah, I was a little bit surprised to see IS's comment, because I didn't realize I had missed an ExeCom meeting.

-DE: In the meeting I was in, there were decisions being discussed about things that pertain to TC39, and I think it's important that the ExeCom understands the needs of the committee, and they were very interested in understanding our needs. So in the future, I hope the ExeCom will be more careful about making sure that emails reach the chairs. I know Ecma has long had problems with its email system, and has resisted using an external email system and insisted on using its own consultants to fix its technical problems, which nevertheless persist. I don't think it's a good state for Ecma to stay like this - you know, losing emails, not inviting people to these meetings, even if by accident or technical mistake; these are things that need to be fixed.
+DE: In the meeting I was in, there were decisions being discussed about things that pertain to TC39, and I think it's important that the ExeCom understands the needs of the committee, and they were very interested in understanding our needs. So in the future, I hope the ExeCom will be more careful about making sure that emails reach the chairs.
I know Ecma has long had problems with its email system, and has resisted using an external email system and insisted on using its own consultants to fix its technical problems, which nevertheless persist. I don't think it's a good state for Ecma to stay like this - you know, losing emails, not inviting people to these meetings, even if by accident or technical mistake; these are things that need to be fixed.

AKI: We can talk more about this later.

-
## ECMA262 Editors Update
+
Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1KGiqTUvbHgwyHEWwc4gfS4yoti-07N3ULa6JunaeLB8/edit)

@@ -118,11 +118,12 @@ AKI: There's nothing on the queue

KG: Okay, that's it.

## ECMA402 Status Update
+
Presenter: Shane Carr (SFC)

- [slides](https://docs.google.com/presentation/d/1l23yd3GsczpHDcXaHmT_llaOyNaKRs-uMW0RmKzv96U/edit?usp=sharing)

-SFC: Hi. My name is Shane. I'm a convener of TC39 task group 2; we work on the ECMA-402 standard, also known as Intl. I'll be giving the status update.
+SFC: Hi. My name is Shane. I'm a convener of TC39 task group 2; we work on the ECMA-402 standard, also known as Intl. I'll be giving the status update.

SFC: What is Ecma 402? We're the ECMAScript built-in internationalization library. We own everything under the Intl namespace, as well as things related to it like toLocaleString on various ECMAScript primitives. We're developed as a separate specification by TC39 task group 2; however, our proposals move through the TC39 stage process. We have monthly phone meetings to discuss the details. We'd like to see you get involved. Here are some of the people involved with TC39 TG2: a list of people who've attended recent meetings, as well as the editors RGN and LEO - thank you for your work so far. I'm SFC, the convener, and USA is the apprentice; I'm calling USA the apprentice as a kind of jack of all trades who does basically everything that needs to be done.
We're also fortunate that we were able to sponsor Igalia to continue improving ECMA-402 this year. Google's been able to take on that sponsorship again.

@@ -138,31 +139,29 @@ SFC: intl number format V3. This is a proposal that I'm championing. We hope to

SFC: Intl.DurationFormat. This one is largely inspired by Temporal, and USA is now going to be the champion of this proposal and will help put this up for stage 3 very soon. This has been a challenging proposal for some reason. It doesn't seem like it should be that hard, but there are a lot of challenges with the API design on this proposal. So I hope to finally get those resolved and then propose it for stage 3 at the next meeting.

-SFC: Intl enumeration API. Also by FYT. We hope to put this one up for stage 3 at the next meeting. The good news is that we should hopefully have the privacy and fingerprinting concerns resolved. There's also an update later this meeting where FYT will share more details on that. This is a feature that many people, including from this committee, have been requesting.
+SFC: Intl enumeration API. Also by FYT. We hope to put this one up for stage 3 at the next meeting. The good news is that we should hopefully have the privacy and fingerprinting concerns resolved. There's also an update later this meeting where FYT will share more details on that. This is a feature that many people, including from this committee, have been requesting.

-SFC: we're not done yet.
+SFC: we're not done yet.

-SFC: Now, we're going to our stage 1 proposals. Extend TimeZoneName, by FYT: we've had several discussions about this; it is a small proposal but it's also an important one. So we have this one up for advancement to stage 2 this meeting.
+SFC: Now, we're going to our stage 1 proposals. Extend TimeZoneName, by FYT: we've had several discussions about this; it is a small proposal but it's also an important one. So we have this one up for advancement to stage 2 this meeting.
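The Intl surface that these TG2 proposals extend can be exercised with a short sketch. The outputs shown assume a CLDR-backed engine with full ICU data (the default in modern browsers and Node.js); locale data can vary between engines:

```javascript
// toLocaleString on a primitive routes through the same CLDR-backed
// machinery as Intl.NumberFormat -- both are ECMA-402 territory.
const n = 1234567.891;

// German locale: "." groups thousands, "," marks the decimal.
console.log(n.toLocaleString("de-DE")); // "1.234.567,891"

// An explicit formatter under the Intl namespace.
const usd = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
});
console.log(usd.format(n)); // "$1,234,567.89"
```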
SFC: As you can tell, FYT has been championing a lot of these proposals, so we can thank him for all the time that he's been putting into solving these problems. So, thank you FYT for your work there.

SFC: eraDisplay: I'm champion for this one. This one is still at stage 1, and it's been at stage 1 since January, but I think I'll start focusing more of my time on this one after I get number format to stage 3. This is also another important one for date time formatting. The implementation of this one is going to be a little interesting, to make sure that it's efficient. But yeah, it's an important feature.

-SFC: Intl Locale matcher. Long Ho (LHO) is the champion for this one. It's currently at stage 1. There are still several open design questions; you know, we're mostly pending on LHO to take action on advancing this proposal.
+SFC: Intl Locale matcher. Long Ho (LHO) is the champion for this one. It's currently at stage 1. There are still several open design questions; you know, we're mostly pending on LHO to take action on advancing this proposal.

SFC: We also still have smart unit preferences at stage 1. This is currently blocked. I expect that this will start getting unblocked soon, as we start thinking more about the design space for this proposal - for exactly what we should standardize in 402. The main questions here have been on scope: are we doing too much by trying to both convert the units and display the units? So there are some open scope questions.
-
SFC: As a reminder, we have proposal and PR progress tracking. I'll open the link to this after the presentation so you can see how we track our progress. Here's the slide again for how to get involved. The thing we need most help on right now is writing MDN documentation. We're a bit behind on that. Intl Segmenter is still the big one that we need help with writing MDN documentation.
We also need help on a lot of the smaller pull requests and other things, so if you have a knack for writing MDN documentation, that would be really helpful. We also need help with the 262 tests, and with implementing in JavaScript engines and so forth.

SFC: So yeah, that's the update. Let me go quickly over here to the PR and progress tracking, so you can see how we keep our work updated. This also includes older PRs. There's a lot of green at the top here, because those are things that have already been merged. Everything with red X's means there's still work that needs to be done. There's a lot of green because we don't delete things when they get done and finished, but most of the things that have red are things we're currently working on. Also, the things with hourglasses need work, so you can see that there's still a lot of work to do on our stage 2 proposals, and even on our stage 3 proposal, as well as on a lot of these pull requests - there are a lot of hourglasses and red X's here. So you can always check this for the latest updates. This is also a great place to look for browser compatibility, because if MDN is not yet updated, this chart is also a great way to see what version of each browser shipped each feature. So that's another thing you can get from this wiki page. Okay, so that's my update.

SFC: Just one last shout-out to get involved with our meetings. I think there's a lot of work that we have to do, and I think there's a lot of mentorship and support in our committee; it's something that I'm striving for. So if you want a way to get involved with spec work, I think this is hopefully a great way to do it, and please reach out to me or to USA or to the editors, you know, if you want some more support there, and we're really excited to see you get involved.
-
## ECMA 404 Status Update

-CM: Ecma 404 lies sleeping. As long as we do not disturb it, the foundations of reality will remain intact.
+CM: Ecma 404 lies sleeping. As long as we do not disturb it, the foundations of reality will remain intact.

## CoC Committee

@@ -172,21 +171,16 @@ JHD: We have no updates to report.

AKI: Okay, next up, real quick, an update from the Temporal champions. Normative changes upon which stage 3 was conditional have been resolved. Temporal is now formally stage 3.
-
## Security TG

Presenter: Yulia Startsev (YSV)

-- [proposal]()
+- proposal
- [slides](https://docs.google.com/presentation/d/13s8STWY1zVab3KRK62Q0mhWeKQ2aLKS1wTTKgyJe7iQ/edit#slide=id.g6e7d7a6a09_0_93)
-
-
-
YSV: So the context for this is that at the January meeting, as we were discussing the potential for chartering TG3, which is the security TG, the chair group was tasked with coming up with a chair or leadership selection process. Brian and I worked on that together, since I have a bit of experience with making sure that we've got fair processes in place for selecting people who do extra work for the committee, and I will quickly present what we came up with for discussion. So the management breakdown is as follows: the leadership of the security TG would be tasked with facilitating the meeting and managing the agenda for the security TG meetings. That means determining the time zone requirements for participating delegates and rotating the remote meetings accordingly, and also handling or delegating the note-taking and reporting of the results back to this committee. Finally, they will be tasked with coordinating with this committee, which is TG1 (or I believe we're a TC now).

-YSV: Okay, so I'm on the "selection" slide; if you can't see it, please refer to the slide deck. So what does the selection process for the chairs look like? We propose that TG3 decides its own chair group, very much like the Intl group is deciding its chairs. It does not need to go through the TC39 committee, and the results are presented to TC39. If only one group is presented and there are no objections.
This can just be an election through consensus; you don't have to go through any formalized process. In the case that an election does need to happen, one of the TC39 TG1 chairs will preside. We would expect that some presentation is done by the competing sides, and we propose that this will be a simple majority. Each member organization participating in TC39 is an elector. You can also abstain; abstaining does not count toward the majority. We're just following the election process that we have elsewhere. In the case that we do need to elect, the "do we want to vote" question is posed; in the case that you have only one group and it's not contentious, that's election by unanimous vote or by consensus. Alternatively, if there is discussion, we have a couple of guiding questions about how that discussion could be had. The ballots are performed asynchronously via email to the chair group, so that they can be counted without being shared with anybody else; they will be counted completely privately. This discussion should be time-boxed to about 30 minutes. The ballot looks exactly the same as in TC39, and our further recommendations for the selection of the chairs are to have at least two chairs from different member organizations, and to take a lightweight approach to selecting the chairs in order to facilitate getting started on the work, which has already been delayed by one quarter. This group was ready to go in January, so we want to make sure that they can get started right away.
-
+YSV: Okay, so I'm on the "selection" slide; if you can't see it, please refer to the slide deck. So what does the selection process for the chairs look like? We propose that TG3 decides its own chair group, very much like the Intl group is deciding its chairs. It does not need to go through the TC39 committee, and the results are presented to TC39. If only one group is presented and there are no objections, this can just be an election through consensus.
You don't have to go through any formalized process. In the case that an election does need to happen, one of the TC39 TG1 chairs will preside. We would expect that some presentation is done by the competing sides, and we propose that this will be a simple majority. Each member organization participating in TC39 is an elector. You can also abstain; abstaining does not count toward the majority. We're just following the election process that we have elsewhere. In the case that we do need to elect, the "do we want to vote" question is posed; in the case that you have only one group and it's not contentious, that's election by unanimous vote or by consensus. Alternatively, if there is discussion, we have a couple of guiding questions about how that discussion could be had. The ballots are performed asynchronously via email to the chair group, so that they can be counted without being shared with anybody else; they will be counted completely privately. This discussion should be time-boxed to about 30 minutes. The ballot looks exactly the same as in TC39, and our further recommendations for the selection of the chairs are to have at least two chairs from different member organizations, and to take a lightweight approach to selecting the chairs in order to facilitate getting started on the work, which has already been delayed by one quarter. This group was ready to go in January, so we want to make sure that they can get started right away.

YSV: I think in order for us to continue and to help TG3 get started and start moving, we do need to have candidates from that TG, and I don't know what the process is there. MF, if you're on the call: do you already have proposed candidates who you would like to get started, or is this something that still needs to be determined?

@@ -200,9 +194,9 @@ BT: We just mostly just copied what we've done before and wrote down in slides.

YSV: Terribly sorry about the presentation quality. I'll try to figure out what's wrong there.
-SYG: If we have time for this item: is there a way we can have a single election process for selecting management positions for all of our TGs? I'm not yet convinced - TGs have different subject expertise, but in terms of management and process, do they need to be tailored for each TG, or can we just have one?
+SYG: If we have time for this item: is there a way we can have a single election process for selecting management positions for all of our TGs? I'm not yet convinced - TGs have different subject expertise, but in terms of management and process, do they need to be tailored for each TG, or can we just have one?

-YSV: I like that idea a lot. I think having one for the entire committee that we follow consistently would be ideal. I think in general we want to have lightweight decisions on this, so we don't spend too much time on this stuff if it's not controversial, and we've done that in the past, so I would be totally up for that. I actually quite like what we have right now, because it allows us to very quickly make a decision if it's uncontroversial and we're sure the resolution is pretty fair. So I would see it being modeled on what we just proposed for TG3. I don't know what people think.
+YSV: I like that idea a lot. I think having one for the entire committee that we follow consistently would be ideal. I think in general we want to have lightweight decisions on this, so we don't spend too much time on this stuff if it's not controversial, and we've done that in the past, so I would be totally up for that. I actually quite like what we have right now, because it allows us to very quickly make a decision if it's uncontroversial and we're sure the resolution is pretty fair. So I would see it being modeled on what we just proposed for TG3. I don't know what people think.

YSV: Just before we move on.
I just want to make it clear that this is the last action item for making this TG a reality; we've agreed to all of the previous things, and this was the last thing. So we will now have a security TG - that's fantastic. And the other action item out of this is to unify our election process, which I think is a fantastic idea; if there are no objections, I will do that.

@@ -225,7 +219,6

Presenter: Shane Carr (SFC)

- [proposal](https://github.com/tc39/proposal-intl-numberformat-v3)
- [slides](https://docs.google.com/presentation/d/1i7VkN9T39eIuusFS-bucy_KoAwUcTF121NZk-1WiFlY/edit#slide=id.p)
-
SFC: My name is SFC, and I'm presenting a stage 2 update on the Intl number format V3 proposal. A little bit of history here: this was advanced to stage 2 last summer, and I'm hoping to advance it to stage 3 soon, but I thought I would give a stage 2 update because there have been a number of changes to the proposal since I presented it for stage 2. So I just wanted to give an update to the committee on this proposal, so that if there are any more issues I can resolve those, and then hopefully next meeting I can ask for stage 3 and you've already seen all the changes.

SFC: So first, what is NumberFormat V3? The way that this proposal was formed is, I went through the dozens and dozens of feature requests that we get filed against ECMA-402 every year and identified all the ones that are related to number formatting, and then among those applied a mechanism to try to figure out which ones to prioritize. These three criteria are also the same three criteria that we now apply to all ECMA-402 proposals - three bullet points that every ECMA-402 proposal needs to satisfy: one, it needs to have broad appeal; two, it needs to have prior art; and three, it needs to be expensive to implement in user land. So here's an example of how we applied this mechanism. Here are two features that were requested: one of them was additional scientific notation styles.
This was requested by Google, but it doesn't have very high quality CLDR support and is also fairly cheap to implement in user land, so the verdict was to not include it. However, number range has a lot of stakeholders, it has good CLDR support, and it's not easy to implement in user land, so we decided to include this one.

@@ -236,9 +229,9 @@ SFC: The next section is the grouping enum. The grouping enum now has four optio

SFC: Next, new rounding/precision options. In my last update last summer I said this was still a work in progress, with details to be ironed out. All these details are now ironed out, and this is what we're currently proposing. We're proposing three new options for Intl.NumberFormat. One is roundingPriority, which I will discuss on the next slide. The second is roundingIncrement: rounding increments allow the number to be rounded not only to the nearest digit but also, say, to the nearest 5, 10, or 50. The rule here is that you can specify any integer value which is either a 1 or a 5 followed by any number of zeros. So for example, in order to achieve nickel rounding you would write this line here, where you specify minimum and maximum fraction digits, which tells the number format that you want to round at the hundredths place (ten to the minus two is essentially what that option says with minimum/maximum fraction digits), and then at that position you round to the nearest five. So you round to the nearest five hundredths, also known as a nickel. That's how we support nickel rounding. The third is trailingZeroDisplay. This is another feature that's been often requested. It allows you to strip trailing zeros only when the number is a whole number. This is popular in currency formatting, when you want to display for example $3 instead of $3.00.

-SFC: So, let me talk a little bit more about rounding priority. This is a puzzle. I've spent a lot of time with Richard Gibson among others working through the rounding priority.
I'll try my best to explain this to you in a clear way. So what happens when you specify { maximumFractionDigits: 2, maximumSignificantDigits: 2 }? When you do this, you specify two conflicting strategies for how you want to round the number. Maximum fraction digits 2 means that you want to round the number to the hundredths place, in which case you'd get 4.32 as the output from the input number. If you gave maximum significant digits 2, it means you want to round after the second significant digit, which gives 4.3. So those strategies are conflicting, and the way we resolve that conflict is by using the new roundingPriority option. Currently the "significant digits wins" behavior is the default, and that will cause the significant digits to always win, but the two new options are morePrecision and lessPrecision. morePrecision means more nonzero digits, and lessPrecision means the result with fewer nonzero digits. morePrecision is useful for compact notation: compact notation will essentially be { maximumFractionDigits: 0, maximumSignificantDigits: 2 } with morePrecision. And this solves this puzzle, I think, in a very elegant way.
+SFC: So, let me talk a little bit more about rounding priority. This is a puzzle. I've spent a lot of time with Richard Gibson among others working through the rounding priority. I'll try my best to explain this to you in a clear way. So what happens when you specify { maximumFractionDigits: 2, maximumSignificantDigits: 2 }? When you do this, you specify two conflicting strategies for how you want to round the number. Maximum fraction digits 2 means that you want to round the number to the hundredths place, in which case you'd get 4.32 as the output from the input number. If you gave maximum significant digits 2, it means you want to round after the second significant digit, which gives 4.3. So those strategies are conflicting, and the way we resolve that conflict is by using the new roundingPriority option.
Currently the "significant digits wins" behavior is the default, and that will cause the significant digits to always win, but the two new options are morePrecision and lessPrecision. morePrecision means more nonzero digits, and lessPrecision means the result with fewer nonzero digits. morePrecision is useful for compact notation: compact notation will essentially be { maximumFractionDigits: 0, maximumSignificantDigits: 2 } with morePrecision. And this solves this puzzle, I think, in a very elegant way.

-SFC: Okay, the next section hasn't changed since the last presentation: "interpret strings as decimals". If you pass a string into the number format function, instead of trying to parse it as a Number, it will interpret it as a decimal. This hasn't changed since the update.
+SFC: Okay, the next section hasn't changed since the last presentation: "interpret strings as decimals". If you pass a string into the number format function, instead of trying to parse it as a Number, it will interpret it as a decimal. This hasn't changed since the update.

SFC: Okay, rounding modes. This is new; this is updated. I've talked a lot with DE, CLA and RGN among others, and we've arrived at the following list of rounding modes. There are 9, which are described here. This set encompasses all the rounding modes of ICU, CSS, and ECMA-262 Math, and this is the naming scheme we agreed on. There are four directions in which you can round the number - ceiling, floor, expand and truncate - and we have modes for both the regular version and the tiebreaking version, which we call halfCeil, halfFloor, halfExpand, halfTrunc. And in addition to that we have halfEven, which is the ICU mode that's used to reduce bias when rounding numbers; it's useful in scientific and financial applications. So these are the rounding modes which we are currently proposing. As for the precedent that we set forward with these rounding modes on this slide,
I expect it to be followed by the Decimal proposal that DE is championing, as well as by Temporal: Temporal is planning to follow NumberFormat V3 by adopting these modes. So yeah, a lot of discussion went into this - there was discussion about whether we should have a larger set or a smaller set, and if we have a smaller set, what should the smaller set be? And then we decided just to implement these nine and have them laid out clearly like this. So that's the proposal for rounding modes.

@@ -260,7 +253,7 @@ SFC: and the proposal is to expand it, because use grouping true and false is no

SYG: Okay. Thanks.

-WH: SLIDE 11 - If we're rounding 1.225 to nickel, what would you expect the result to be?
+WH: SLIDE 11 - If we're rounding 1.225 to nickel, what would you expect the result to be?

SFC: Yeah, so the way that I described the nickel rounding is rounding to even cardinality, and even cardinality does not necessarily mean an even number. It does when you're rounding to the nearest single digit, but if you're rounding to the nearest nickel - well, I guess nickel actually ends up being even cardinality as well, because it alternates between odds and evens. But you could, for example, be rounding to some interval that doesn't alternate between odd and even. 1.225 rounded to the nearest nickel would round to 1.20, I believe, because 1.20 is the option with even cardinality.

@@ -278,10 +271,9 @@ SFC: Yeah. The way that the rounding currently is packed to work in Ecma 402 is

SYG: This may be a naive question. What I'm struck by with these updates is that there are just many knobs to turn when using NumberFormat V3. Maybe there are no correct defaults in internationalization. What is the general take on discoverability of all these knobs, on deciding if all these knobs are needed - I know you're responding to feature requests and things like that - and on the general usability of the API?
-
SFC: Yeah, so most of these options that I'm proposing - many of them - are extensions of the existing options. For example, the reason that we decided to reuse the useGrouping option is that we already have that option; it just wasn't expressive enough, and we have clear users and use cases for this option in particular. This grouping enum is one of the top two feature requests that we get on ECMA-402: we get bug reports very frequently asking for this particular feature. So this is just an extension. And I can walk through all of these. formatRange: there are two central motivations here. One is that we have a lot of users requesting it, and the second is that we support format range in date formats; Intl number format range is the natural extension of that. So there are already users who are familiar with Intl and with date-time format range, and number format range is a natural extension of that, which should reduce the cognitive burden. That's how we hope to deal with the cognitive burden there. On rounding options: these options likely will incur cognitive burden. The options are here for users who want or need them; again, if you don't use these options you can ignore them - they're just options in the options bag. We hope that the names of these options will make them more or less self-describing. Rounding priority is a bit challenging to explain; I hope that my explanation on slide nine mostly answers those questions. The other ones should be fairly straightforward. Nickel rounding is important, and it is used by several currencies around the world; rounding increment is a slightly more general version of nickel rounding.
We could have called the option nickel rounding, but then that doesn't scale to dime rounding for example, so we chose to use rounding increments which is slightly more general, but it's mostly focused on the use case of nickel rounding which is an actual clear use case with internationalisation applications. Interpret strings as decimals. This doesn't incur cognitage burden because users don't actually see this one. This is just that they pass in a string, we currently have the wrong behavior and we're changing it to the right Behavior. Rounding modes, this one incurs a bit of cognitive burden, but it's also something that already exists elsewhere in 262 and CSS. So users already know what rounding modes are; we're just expressing them now. And we just didn't support them in Intl.NumberFormat. Sign display negative again is an extension to the existing sign display enum. So this enum already exists and we're simply adding a new option to the enum. So we don't expect this one to increase cognitive burden. So yeah, the question of cognitive burden is definitely an important one and I mean, I really hope with this proposal that we've been able to add these features that clients have requested and that we've established there's broad appeal without significantly increasing the already existing cognitive burden of intl number format. Now one could argue that intl NumberFormat already has a lot of cognitive burden. Maybe that was a mistake in design of intl number format, but this proposal does not in my opinion have a significant net increase to that cognitive burden. -SYG: Yes, part of your answer that I really did like was that it seems like there are other parts of Intl that have similar if not the same options that are accepted in these option bags that you now extend to number format. And that kind of reuse I think does go a long way in getting the cognitive burden down. That's it for me. 
+SYG: Yes, part of your answer that I really did like was that it seems like there are other parts of Intl that have similar if not the same options that are accepted in these option bags that you now extend to number format. And that kind of reuse I think does go a long way in getting the cognitive burden down. That's it for me.

DE: I think, like SFC explained, this proposal is quite well motivated. I mean, when I talk to JavaScript developers about Intl.NumberFormat, it seems pretty widely used, much more than a few years ago. And a lot of people are at the point where they run into cases where they want to use Intl number format, but it's missing this kind of rounding feature somewhere or other. This happens in a lot of financial applications, which often use JavaScript for front-end things that sometimes need to do these kinds of operations. So I support this proposal. About intelligibility, an important part of that is documentation, and you know, one advantage that Intl has over random npm packages for number formatting is that it's documented on MDN, it's supported with types everywhere, and different things like that. MDN documentation for Intl is not currently perfect. There's like a little paragraph about each one, but not currently good code samples and other things, so I think there's a lot of room for improvement, and I hope that we and Igalia will be able to work on this based on our partnership with Google in the coming year, in terms of improving the documentation of both existing and new NumberFormat and DateFormat options. I think it's really good to add these as options to the existing formatter also because it works well with graceful degradation: if you have an older browser and it runs the code with the options, it will just ignore the new options but get things mostly semantically right, hopefully. So this is all very good. Thanks SFC.
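DE's graceful-degradation point works because engines read only the option keys they recognize and leave the rest of the bag unread. A small sketch (the V3 option names `roundingIncrement` and `roundingMode` are from the proposal; on a pre-V3 engine they are simply ignored):

```javascript
// On an engine without NumberFormat V3, the unknown keys in this options
// bag are never read, so construction still succeeds and formatting
// degrades to plain currency formatting instead of throwing.
const nf = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
  roundingIncrement: 5,      // ignored by pre-V3 engines
  roundingMode: 'halfEven',  // ignored by pre-V3 engines
});
nf.format(1234.5); // "$1,234.50" on both old and new engines here
```

Contrast this with passing a new *value* for an existing enum option (like `signDisplay: "negative"`), where an older engine validates the value and throws a RangeError, so degradation is not as graceful.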
@@ -291,7 +283,7 @@ SFC: Yeah, good question that stage three reviewers discussed at the previous ti

YSV: There is an issue that has come up since I volunteered with Jeff.

-WH: `(Note: clarification to the claim that I worked at Mozilla)` I never worked at Mozilla. 
+WH: `(Note: clarification to the claim that I worked at Mozilla)` I never worked at Mozilla.

YSV: Since we had some movement of people, I don't have the expertise on my direct team in the Intl spec to properly review, and we lost the person we had at Mozilla who I was going to do that work with. I can see if I can get in touch with him, but I think that my review can't be guaranteed anymore.

SFC: Okay. Thanks for the update YSV. You may also be able to work with USA and 

MM: I thought I heard the phrase "implementation defined" go by in your presentation. Could you clarify what it is that is implemented and why there is anything implementation defined: we are trying to make things as determined by the spec as possible.

-SFC: Ecma 402 is is filled with implementation-defined behavior. The reason it's filled with implementation-defined is because there's is because there's no right answer for Locale data and we rely on browsers to bring Locale data that solves the needs of their users. 
+SFC: Ecma 402 is filled with implementation-defined behavior. The reason is that there's no right answer for Locale data, and we rely on browsers to bring Locale data that solves the needs of their users.
So for example, the list of locales supported is implementation defined, because different browsers may have different needs for the locales they support. The exact display and format of these numbers is implementation defined. We try to strike a balance between putting things in the specification and allowing browsers to basically ship Locale data; that's always a challenging balance to strike. Okay.

MM: So with regard to locales, I understand and accept that. But you separately mentioned number formats: was that still just a locale issue? Or is there any other source of implementation-defined variation other than locale?

-SFC: It's a good question. So like the symbols obviously are Locale dependent. We understand that. 
+SFC: It's a good question. So like the symbols obviously are Locale dependent. We understand that.

DE: You were talking about the float to decimal conversion algorithm.

@@ -321,13 +313,13 @@ SFC: All the implementations currently use ICU and ICU implements this determini

DE: These algorithms are complex and I don't think it would be appropriate for us to just transcribe it into spec text.

-MM: Is there some other standard or spec that defines that that we can reference? 
+MM: Is there some other standard or spec that defines that, which we can reference?

WH: The algorithm is the implementation, but what it implements has a very simple spec. We want the least number of digits such that if you convert that number back to an IEEE double then you get the same value, with an additional caveat for what happens if there are two such numbers. The actual spec is very simple.

MM: In that one case where there is more than one number that translates back, you mentioned a caveat. Is that a deterministic caveat?

-WH: You can make it deterministic. 
+WH: You can make it deterministic.

MM: That sounds good. Okay, okay, I'm done.

@@ -346,6 +338,7 @@ DE: There is a considerable amount of work needed from here. 
I think people who

AKI: Thank you for the update SFC. I look forward to the stage advancement in the future and finding that volunteer champion.

### Conclusion/Resolution
+
Proposal was not seeking advancement, but will likely come back for advancement with the changes presented here

## Class fields, Private methods, and Static class features for Stage 4

@@ -358,26 +351,25 @@ CLA: Okay, so how do they look in code? What is the syntax of those features? so

CLA: Let's see a little bit of code with this new syntax mixed into the JavaScript code. So here we have an example of a class Pokémon that has two fields: one public field name and one private field #hp. That's the way to declare fields, and then after that we have a declaration of a constructor and also a damage public instance method. We can see here that it can access both the public field and the private field. It's pretty close to what we have for ordinary properties right now; actually, public fields are ordinary properties. There are some limitations and there are some syntactic differences between data operations with a private field and a public field. So well, I can detail those kinds of things later if you would like to know which things I'm talking about. But yeah, this is how we can use public and private fields with the proposal.

-CLA: Here we have the examples of using private methods as well. 
+CLA: Here we have the examples of using private methods as well.
So private methods, as I mentioned before, are declared with hash-started identifiers. We can see that the same is valid for a private accessor. So, following the same thing we have with public accessors: if it's a setter, then it's important to have a parameter for the setter. We also follow the same for a private getter, and we can create private getters in the same way that we create accessors right now. Starting with the hash in the beginning is going to turn them into private members.

CLA: Okay, so the static version we can see here is the class Pokémon. It has a couple of static members. So the first one is a public static field initialized with an array of strings, and then a private field that is also initialized as an array of strings. Then we have a private method.

-
CLA: So here are links for all the three proposals that we have. So the idea of this slide is to give information and be the point of information for people to access the whole story, and also some MDN documentation that is already done for public and private instance fields and static features; private methods are a work in progress and we will have these soon, so people are actively working on this right now. And of course here are some links to the explainer and motivation for each proposal and each new feature. I would like to highlight here a blog post made by SYG a couple of years ago explaining each class feature with a lot of code samples, and also discussing a little bit some edge cases and how you can use them. So thank you very much Shu for working on this. This is a very nice source of information if you would like to learn and understand more about those features in general.

CLA: Well, since the presentation tries to move the proposal to stage 4, of course we need to have met the requirements to achieve that, and one of the requirements is to have test262 coverage. So let's just take a look at how the test262 status currently is for this proposal.
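The slide code itself is not reproduced in these notes, so here is an illustrative sketch pulling together the features just described: public and private instance fields, a private method and private accessors, and static members. It follows the presentation's Pokémon naming; member names beyond `name`, `#hp`, and `damage` are invented for illustration.

```javascript
class Pokemon {
  name;                                       // public instance field
  #hp = 100;                                  // private instance field
  static types = ['grass', 'fire', 'water'];  // public static field
  static #registry = [];                      // private static field

  constructor(name) {
    this.name = name;
    Pokemon.#registry.push(this);             // private statics usable inside the class
  }

  damage(amount) {                            // public method touching private state
    this.#hp -= amount;
    if (this.#isFainted) this.#faint();
  }

  #faint() {                                  // private method
    this.#hp = 0;
  }

  get #isFainted() {                          // private getter
    return this.#hp <= 0;
  }

  set #healthPoints(value) {                  // private setter (takes one parameter)
    this.#hp = value;
  }

  get hp() { return this.#hp; }               // public accessor exposing #hp
}

const bulbasaur = new Pokemon('Bulbasaur');
bulbasaur.damage(30);
bulbasaur.hp; // → 70
```

All of these members use the same `#`-prefixed identifier form; only where they appear (instance vs `static`, method vs accessor) differs.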
CLA: So yeah, if we consider both public instance fields and private instance fields, we can see that we have about 6,000 tests covering them, and we have a nice percentage of coverage from implementers as well. And yeah, even though we can see some implementations like SpiderMonkey don't have ??. The reason here is that they are shipping with this feature disabled, and this situation can actually change pretty soon, as I will discuss a little bit later about implementations. Also, the numbers for private methods are quite close to the class fields in general: considering also static private methods and getters and setters, we have around six thousand tests as well. We have an interesting amount of coverage on the implementation side. And yeah, last but not least, the static fields coverage, both public and private static fields: combined between them, we have around a thousand tests as well.

-CLA: Regarding specification text. this specification right now didn't have any open question or semantic changes since the stage 3. So since the proposals which is stage 3 there were no semantic changes and also ??. There were some editorial improvements that happened mainly coming from review and we have already a unified PR request open that got already a lot of attention from the editors. So thank you very much for everyone like investing time on _____ etc. And yeah, those things are going to be addressed as soon as I can. 
+CLA: Regarding specification text: this specification hasn't had any open questions or semantic changes since stage 3. So since the proposals reached stage 3 there were no semantic changes, and also ??. There were some editorial improvements that happened, mainly coming from review, and we already have a unified pull request open that has gotten a lot of attention from the editors. So thank you very much to everyone investing time on _____ etc. And yeah, those things are going to be addressed as soon as I can.
-CLA: Implementation status, I'm happy to show to you this table here so we can see a lot of green markers. And yeah, we have like a huge implementation support already for those features. So if we consider a babel 7.6 we are already able to use all the class features implemented. esbuild, is already supporting all the class features. So including all, you can have static and public fields and static and public private methods. Typescript, if we consider the version 4.3, that is a Beta release and also change in a couple of flags as well. It's possible to use the semantic that is supposed by the proposals right now. But yeah, it's possible to use that by setting the right Flags to experiment the features we are proposing introduced into class fields from the fields and private methods in static members as well accessors . XS and QuickJS have support for class features for a long time already. So yeah, they shipped the whole set of those features I think more than a couple of years ago. If we consider V8 version in before we are able to use all the features and are shipped since version 84. You can use them already and if you have a Google Chrome 84 plus version are able to try out those features there as well. So spider monkey if we consider version 75 it's possible to use both instance fields and static Fields public Fields enabled by default and there is Support for all other remaining features behind runtime flag, but I got some information that they are shipping and those features enabled by default also quite soon. So, yeah, we could see this table be a little bit greener than we can see right now. So yeah nice work of nice on the bottom of people and most of those things were behind a flag because they were doing work on improvements to perform some optimizations on both private methods and static / methods. And do you see Safari 14 shipped public instance fields, I think last year. 
If we consider Safari Tech Preview of version 122, it's possible to use all the class features. So they are shipping all the class features enabled by default. And hopefully we can see Major version of safari coming with the support for these features as well. +CLA: Implementation status, I'm happy to show to you this table here so we can see a lot of green markers. And yeah, we have like a huge implementation support already for those features. So if we consider a babel 7.6 we are already able to use all the class features implemented. esbuild, is already supporting all the class features. So including all, you can have static and public fields and static and public private methods. Typescript, if we consider the version 4.3, that is a Beta release and also change in a couple of flags as well. It's possible to use the semantic that is supposed by the proposals right now. But yeah, it's possible to use that by setting the right Flags to experiment the features we are proposing introduced into class fields from the fields and private methods in static members as well accessors . XS and QuickJS have support for class features for a long time already. So yeah, they shipped the whole set of those features I think more than a couple of years ago. If we consider V8 version in before we are able to use all the features and are shipped since version 84. You can use them already and if you have a Google Chrome 84 plus version are able to try out those features there as well. So spider monkey if we consider version 75 it's possible to use both instance fields and static Fields public Fields enabled by default and there is Support for all other remaining features behind runtime flag, but I got some information that they are shipping and those features enabled by default also quite soon. So, yeah, we could see this table be a little bit greener than we can see right now. 
So yeah, nice work on the part of the SpiderMonkey people; most of those things were behind a flag because they were doing work on improvements and optimizations on both private methods and static private methods. And JSC: Safari 14 shipped public instance fields, I think last year. If we consider Safari Tech Preview version 122, it's possible to use all the class features. So they are shipping all the class features enabled by default, and hopefully we can see a major version of Safari coming with support for these features as well.

CLA: I would like to highlight here that Igalia worked on implementations for both V8 and JSC: working on the implementation of private methods and also some optimizations for class fields, etc., and on the JSC implementation. ?? lot of feedback from every reviewer as well to make sure we are implementing the right thing. I was personally involved in this, and I would say that it's implemented with acceptable performance so far. This work was sponsored by Bloomberg as well.

CLA: Also, Babel 7 defaulted to `[[Define]]` semantics in August 2018, Node 12 LTS shipped with private fields in April 2019, and Node 14 shipped with private methods in July 2020.

-CLA: So why ask for Stage 4 now? So just to give a little bit of background here. So all the three out of those three proposals moved to stage three almost three years ago. So class Fields reached Stage 3 on November 2017. Private methods and accessors reached stage three in September 2017 and Static class features reached stage three in May 28. First of all is to wait until at least two browsers implementation ships the features, and if we consider V8 and Safari Tech Preview, we're shipping and like Firefox coming also quite soon in the next release. They potentially could enable this feature so we would have instead of all actually three browsers in the ? 
and Safari Tech preview version 122 to shipped like Three or four weeks ago, so that's why we think it's on a nice time to ask for stage 4. Also, given this amount of time that was given to to collect some feedback from implementers and we had a couple of feedback regarding some parts of these specifications. Well when we did some minor changes, so it was quite the kind of ?? and also implementers are shipping the current spec as is and we believe that we have achieved all the stage four requirements, to ask for stage 4. So yeah, I think here comes like the official thing. Should we move class features to stage 4? 
+CLA: So why ask for Stage 4 now? So, just to give a little bit of background here: all three of those proposals moved to stage 3 almost three years ago. Class fields reached stage 3 in November 2017, private methods and accessors reached stage 3 in September 2017, and static class features reached stage 3 in May 2018. The first thing was to wait until at least two browser implementations ship the features, and if we consider V8 and Safari Tech Preview, they're shipping, with Firefox also coming quite soon in the next release. They potentially could enable this feature, so we would actually have three browsers in the ? and Safari Tech Preview version 122 shipped like three or four weeks ago, so that's why we think it's a nice time to ask for stage 4. Also, this amount of time was given to collect feedback from implementers, and we had a couple of pieces of feedback regarding some parts of these specifications, where we did some minor changes, so it was quite the kind of ??. Also, implementers are shipping the current spec as is, and we believe that we have achieved all the stage 4 requirements to ask for stage 4. So yeah, I think here comes the official thing: should we move class features to stage 4?

MM: Thank you for all the work that you've all done on this. 
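The `[[Define]]` semantics mentioned above, which Babel 7 made its default in 2018, are observable when a base class declares an accessor with the same name as a subclass field. A small sketch with invented class names:

```javascript
class Base {
  set x(value) {
    throw new Error('not reached: fields use [[Define]], not [[Set]]');
  }
}

class Derived extends Base {
  // Field initialization uses CreateDataProperty ([[Define]]) directly on
  // the instance, so Base's inherited setter is NOT invoked.
  x = 1;
}

new Derived().x; // → 1 (an own data property shadowing the accessor)
```

Under the rejected `[[Set]]` semantics, constructing `Derived` would instead have triggered `Base`'s setter and thrown.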
@@ -395,7 +387,7 @@ CLA: Yeah, personally think that it's good to have these because even though eve MLS: We support of course for stage 4 as well. Thank you to Igalia and Caio for the work on this. We reviewed it, you know if we were to go back probably 7 or 10 years some design class features things would be a little bit different but nothing major. This is the best we think we can do given the constraints of what was already in life. -AKI: Approximately 20 minutes before the end of lunch an issue was posted to the class fields repo (https://github.com/tc39/proposal-class-fields/issues/329). I'm going to let JHX speak. +AKI: Approximately 20 minutes before the end of lunch an issue was posted to the class fields repo (https://github.com/tc39/proposal-class-fields/issues/329). I'm going to let JHX speak. CLA: Okay, give us a sec just to take a look at this statement okay. I think I got it right here. @@ -409,27 +401,27 @@ JHD: There are two angles I want to talk about. One is, as you mentioned AKI, is AKI: That's what I was referring to when I said, I don't believe it's a formal block because it specifically says _“which makes our objection invalid”_ in the issue. -CLA: So I'll go ahead and then I was just saying the same thing over, I have bonus slides going over most of the comments on this. I mean I didn't have enough time to read the entire thing. Sorry this didn’t come in good time for me, but I read I the issue and I think I have bonus slides that covers a lot of things and like since we have enough time and I think it's fine to like go over there in like at least you document was the reason even though we discuss this like no other place in a lot of previous meetings and Etc at least to document the reasons why we decided to take some design decisions. And so what do you think about like me presenting those bonus? Like I think we have enough time to do so. 
+CLA: So I'll go ahead then. I was just saying the same thing over: I have bonus slides going over most of the comments on this. I mean, I didn't have enough time to read the entire thing. Sorry, this didn't come in good time for me, but I read the issue and I think I have bonus slides that cover a lot of these things. Since we have enough time, I think it's fine to go over them, at least to document the reasons why we decided to take some design decisions, even though we discussed this in a lot of previous meetings, etc. So what do you think about me presenting those bonus slides? I think we have enough time to do so.

AKI: We absolutely have enough time, but I would just like to mention that I have a great concern with this issue because its timing is inappropriate, and I'm not saying this to change things happening right now, but I want people to think about it in the future and about anything that they are bringing to the committee. This was 20 minutes before the agenda item, after it had been on the agenda for like 19 days. So let's just think about this going forward and make sure that if you have something to say, the presenter has a chance to read and comprehend it.

CLA: Oh, yeah. I think we could have used the schedule constraints if time zones were a problem as well. Okay, so we have two items, but some of those items actually are process-related, so I'm not sure if we would like to keep discussing those. There is a specific item that calls out some of the syntax that we decided on for fields, which I addressed in the presentation. So I will go over this and we can resume the queue quite fast. Okay, I promise that I won't be so long on this.

-CLA: So yeah, the first thing actually it's kind of a coincidence. I would say it's regarding the way that we have the Syntax for private members. 
So the way we have the syntax for private members is starting with the hash character so the feedback that we can see in the repo and also outside the repo is that using the hash is ugly. However, like most of the Champions and also I put myself on this as well. It's kind of easy to get used to the new hash and so after writing a lot of code for private fields and tests, so I see hash as most part of JavaScript right now. Now and it's not that strange and there is like not something that came with JavaScript itself. So the language already does use this punctuation to keep privacy of members and they use `@` instead of `#`. They use the `@` character and yeah, we didn't decide to use the `@` because of decorators proposal etc. So well, the situation for privacy is pretty much the same. So we just change the character and also the ability to use hash here is mainly because we need to differentiate when we would like to access the public and private entities. So after talking with those developers generally and pointing them to the FAQ and also the reasons why we decided to use hash instead of a private modifier or something like that in the class member, they actually get convinced on this. Yeah. She's pretty much what we have a solution at least like the one that is less syntax. Well the the event that creates less restriction on access of public and private members and Etc. So yeah the idea to keep public and private access different have is that we would like to these strong encapsulation as design. So there is also like an order this equation why +CLA: So yeah, the first thing actually it's kind of a coincidence. I would say it's regarding the way that we have the Syntax for private members. So the way we have the syntax for private members is starting with the hash character so the feedback that we can see in the repo and also outside the repo is that using the hash is ugly. However, like most of the Champions and also I put myself on this as well. 
It's kind of easy to get used to the hash, and after writing a lot of code for private fields and tests, I see the hash as part of JavaScript now. It's not that strange, and it's not something that originated with JavaScript itself: there are languages that already use this kind of punctuation to keep members private, except they use the `@` character instead of `#`. And yeah, we didn't decide to use the `@` because of the decorators proposal, etc. So well, the situation for privacy is pretty much the same; we just changed the character. Also, the need to use the hash here is mainly because we need to differentiate when we would like to access public versus private entities. So after talking with those developers and pointing them to the FAQ and the reasons why we decided to use the hash instead of a private modifier or something like that on the class member, they actually got convinced of this. Yeah, this is pretty much the solution we have, at least the one with the least syntax and the least restriction on access of public and private members, etc. So yeah, the idea of keeping public and private access different is that we would like this strong encapsulation as a design. So there is also another design question why
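As a small sketch of the strong encapsulation CLA describes, using an invented `Counter` class: a `#` name is lexically scoped to its class body, so outside access is a SyntaxError rather than a failed property lookup, and the private state is invisible to reflection.

```javascript
class Counter {
  #count = 0;                        // private: only reachable inside Counter
  increment() { return ++this.#count; }
}

const c = new Counter();
c.increment();                       // → 1
// c.#count;                         // SyntaxError outside the class body
Object.keys(c);                      // → [] (#count is not a property at all)
JSON.stringify(c);                   // → "{}" (invisible to reflection too)
```

This is the distinction from "soft private" conventions like `_count`, which remain ordinary, fully accessible properties.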
If the objection is to, for example, block the proposal from being merged into the main specification, and therefore having the main specification not reflect web reality, I will consider that a major failure of not only the committee process, but also our jobs as specification authors. So the web reality point is the most salient one to me. -WH: I agree with JHD. This is way too late to bring up such objections, inappropriately posting long documents 20 minutes before the presentation, repeating issues we'd gone over and resolved many times. This is completely inappropriate from a process point of view. I’m also unclear on who's objecting since it seems like JHX is no longer employed by an Ecma member. +WH: I agree with JHD. This is way too late to bring up such objections, inappropriately posting long documents 20 minutes before the presentation, repeating issues we'd gone over and resolved many times. This is completely inappropriate from a process point of view. I’m also unclear on who's objecting since it seems like JHX is no longer employed by an Ecma member. -AKI: The issue was posted by 360 employees if I understand correctly. +AKI: The issue was posted by 360 employees if I understand correctly. YSV: I want to highlight in particular what JHD and WH and SYG have said and in particular I want to spend a little time reflecting on what SYG said about the divergence between the specification and the language that exists on the web. We have in the past had web reality features that were far from ideal to have in the language and they were merged into the spec because if the spec is a document that has no reflection of the web and implementations, it doesn't have any power. -MM: So, is there anybody here who represents the objections, who can speak for the objections, and is there anybody who feels like they could summarize what the strongest remaining technical objection actually is? 
+MM: So, is there anybody here who represents the objections, who can speak for the objections, and is there anybody who feels like they could summarize what the strongest remaining technical objection actually is?

-AKI: I'm just going to quickly interject and just let you know that those are listed on the issue and they are the same things that have come up in the past. 
+AKI: I'm just going to quickly interject and just let you know that those are listed on the issue and they are the same things that have come up in the past.

MM: Which of those objections are currently considered blocking? You mention that for some of the objections there was text that says that they're no longer relevant or they're no longer blocking or something.

@@ -457,9 +449,9 @@ AKI: All right. Thank you. Thank you for that clarification of the point about t

DE: So on the DevTools point, I think the challenge is figuring out a UI to show which class a particular private name belongs to, if you used a private name in multiple classes. There are multiple ways to solve it. I mean, I don't have experience implementing this, but I think the problem exists at that level, not at the possibility level. There was also concern raised about sharing private fields and methods outside of the class, and the stage 3 static class block initializer allows a quite convenient way to do exactly that kind of sharing. During the development of the proposal, I was especially hopeful for decorators. I remain very hopeful for decorators, but they're taking a little bit longer and it seems like we'll have static class initialization blocks first. So I'm happy to talk through more issues, and CLA is beginning to present good arguments for these, but I agree with the points that others are making in general.

-AKI: Okay. Thank you. 
+AKI: Okay. Thank you.

-JYU: I'm just curious about, you know, do we have such cannot be a process? 
You know, he's such a it's such a kind of condition that we can maybe just postpone the conclusion of the discussion to the end of the TC39 meeting so can give them a lot more time discussing and raising questions when the Champions are able to get online and involved in discussion. So I think, you know, at least for this proposal which is causing a lot of arguments it would be better to be cautious because it's rather critical about moving a proposal to Stage 4 even if it has been implemented by almost all the browsers. 
+JYU: I'm just curious, you know: do we have a process for such a case? It's the kind of situation where we could maybe just postpone the conclusion of the discussion to the end of the TC39 meeting, so we can give them a lot more time for discussing and raising questions when the Champions are able to get online and involved in the discussion. So I think, you know, at least for this proposal, which is causing a lot of arguments, it would be better to be cautious, because it's rather critical to move a proposal to Stage 4 even if it has been implemented by almost all the browsers.

BT: We really have been cautious. If it weren't for hearing these objections, this proposal would have advanced a year ago, maybe even longer. So I think we need to take a decision this meeting, and to that end the chairs tend to agree with those calling for advancing this proposal. Having read the GitHub issue, none of that stuff was particularly new. It's good points, good things to bring up that have been brought up before. We have discussed them before repeatedly both in Committee and in the GitHub issues and in side channel conversations with the Champions. Really, as a practical matter, this proposal is web reality. There's nothing that really changes if we delay to another meeting, and our job as a committee, one of the most important things our job is, in my opinion, documenting web reality.
It is super important that we have a document that people can follow that will get them an implementation that can run code that exists on the web now. If we don't do that, we have major problems. Just as evidence for that, we have this quick merge-to-master process for things that are documenting web reality, and it's to the point where private fields could even just skip the stage process and just be merged as a PR because it is web reality. So we understand that there are challenges with this proposal. There were challenges with this proposal and we regret, I think, that some members can’t agree with this proposal as it is. But I think it's time to advance this proposal regardless, and that's because as I mentioned the process document is pretty clear about what is required for stage 4 feedback, and none of this is new feedback. It's all been discussed and we have this web reality concern and process that we need to respect. But that said, we look forward to working with the committee on any process or other changes that we need to make this sort of work go through more smoothly in the future. So if anyone has any feedback about what we can do as chairs, or what we can do as a committee, to ensure that going forward, proposals like this, everyone can be happy with them. We'd love to hear that. Okay, that's my spiel. @@ -487,7 +479,7 @@ BT: It's in [“how-we-work”](https://github.com/tc39/how-we-work/blob/master/ AKI: We have a lot of precedent going back to at least 2012. We don't have as good of notes before that, but I did a little research on our web reality decision-making processes. And yeah, it's not new. -YSV: I would like to bring something up regarding introducing a process to accommodate this discussion. I was on the queue and I think it's an important point. Specifically, this was posted 20 minutes before this item was being discussed. 
Secondly the people who posted it have been members of the committee for over a year and are very familiar with our practice of requesting time boxes that will work for difficult time zones for specific topics, and working asynchronously. If someone can't make a specific time, they can request that the item be moved. They didn't do that. They also posted it 20 minutes before the meeting, not giving the champion even time to read it prior to the presentation. It was a very long topic. This points to something that I hope we don't see in the committee again, which is that rather than allowing discussion to take place, they put down a trump card to stop discussion. They made themselves, and their position, unavailable, forcing a final word onto the committee so they would not be questioned. This is not how we work and it should not be allowed to set a precedent. +YSV: I would like to bring something up regarding introducing a process to accommodate this discussion. I was on the queue and I think it's an important point. Specifically, this was posted 20 minutes before this item was being discussed. Secondly the people who posted it have been members of the committee for over a year and are very familiar with our practice of requesting time boxes that will work for difficult time zones for specific topics, and working asynchronously. If someone can't make a specific time, they can request that the item be moved. They didn't do that. They also posted it 20 minutes before the meeting, not giving the champion even time to read it prior to the presentation. It was a very long topic. This points to something that I hope we don't see in the committee again, which is that rather than allowing discussion to take place, they put down a trump card to stop discussion. They made themselves, and their position, unavailable, forcing a final word onto the committee so they would not be questioned. This is not how we work and it should not be allowed to set a precedent. 
AKI: Yeah, like I said, they had 19 days, and also four years, but really 19 days since this was added to the agenda and I think everyone who's ever been accommodated knows I work really hard to get people's schedule constraints in so that they can attend and they can be part of the discussion of things that are important to them. So this comes at a time when I believe that this could have been approached better. And therefore it feels like it was a strategy instead of a mistake. @@ -497,15 +489,15 @@ BT: Okay, I think it is worth noting to be clear that this is in no way attempti AKI: Okay, I think we talked about process enough. We're over time though, and I wanted to give JHX an opportunity to say a few words as you've been patiently waiting in the queue and then we need to move. -JHX: Thank you. So I'm not a 360 representative anymore, but I try to speak for them. Actually, it's very hard for their guys to participate in meetings because of the limitation of the time zone and the language problems. I hope everyone could read the issue posted in the repo carefully and the many points. The technical issues and the points about the process have been written in the issue include some arguments about the process limiting in which situation one can block a proposal for stage four. I think it's, I think I cannot say any more on that, but my last question is about the web reality, as if it's a good reason why we eventually maybe need to allow that to stage four. But I think the issue has discussed that process problem when the class fields and related proposals achieved stage 3 in 2017, actually, there is no chance to fix anything. So that it eventually will become web reality in that time. So the 360 or any other members who were not members at that time, they can do nothing about that. So I think this is really a problem in that it seems that if a proposal makes it to stage 3, there will be no way to to fix anything. 
I think the issue posted on the repo also points out that the quality of the proposal actually is not good enough for stage 3 in that time. So this is what I can say. Thank you. 
+JHX: Thank you. So I'm not a 360 representative anymore, but I will try to speak for them. Actually, it's very hard for them to participate in meetings because of the limitations of the time zone and the language problems. I hope everyone could read the issue posted in the repo carefully, and its many points. The technical issues and the points about the process have been written up in the issue, including some arguments about the process limiting the situations in which one can block a proposal for stage 4. I think I cannot say any more on that, but my last question is about web reality, as if it's a good reason why we eventually maybe need to allow this to stage 4. But I think the issue has discussed that process problem: when the class fields and related proposals achieved stage 3 in 2017, actually, there was no chance to fix anything. So it would eventually become web reality from that time. So 360, or any other members who were not members at that time, could do nothing about that. So I think this is really a problem, in that it seems that if a proposal makes it to stage 3, there will be no way to fix anything. I think the issue posted on the repo also points out that the quality of the proposal actually was not good enough for stage 3 at that time. So this is what I can say. Thank you.

BT: Yeah again, as I mentioned, any updates to the proposal, sorry to the process, that any member has, the chairs would love to hear that. So, you know, feel free to offer those suggestions to us.

-JHD: Yeah, just pointing out that because all this stuff is shipped, we can't change or fix it regardless. It is on the web. It is permanent. So anything that can be changed can still be changed in stage 4: we've changed web-compatible things that are already in the spec already. 
This is sort of underscoring the practical points that have been made already, but if there's anything that can be changed, it can be changed at any time. And anything that can't be changed, can never be changed regardless of the stage. So it doesn't seem to me to be any justification for delaying stage 4 here. +JHD: Yeah, just pointing out that because all this stuff is shipped, we can't change or fix it regardless. It is on the web. It is permanent. So anything that can be changed can still be changed in stage 4: we've changed web-compatible things that are already in the spec already. This is sort of underscoring the practical points that have been made already, but if there's anything that can be changed, it can be changed at any time. And anything that can't be changed, can never be changed regardless of the stage. So it doesn't seem to me to be any justification for delaying stage 4 here. TLY: Sorry, just a very brief thing, is if the process does allow for sending things back stage 2 if there was truly a mistake made when going to stage 3, correct? -AKI: Yes. It's happened in the last couple of years. I can't remember which proposal but it certainly has happened in the last couple of years that something has gone from stage 3 back to stage 2 to resolve some issues. +AKI: Yes. It's happened in the last couple of years. I can't remember which proposal but it certainly has happened in the last couple of years that something has gone from stage 3 back to stage 2 to resolve some issues. TLY: I'm not proposing that it should be done in this case. I just wanted to address the suggestion that mistakes in a stage 3 spec cannot be addressed. @@ -515,13 +507,14 @@ DE: Yeah, so the process document addresses this. The process doc says that the JHX: Yeah, I think theoretically it could revert a stage but it needs a new consensus. But also if the topic has some essential disagreements, it's actually impossible to have any new consensus. So is it real consensus or not? 
I am confused, especially in the class fields proposal. So good consensus is still… only the old consensus… Is this a real consensus on very thought about it, especially in confused, especially in the Class Fields history? -AKI: I'm sorry. I lost your last sentence again. Could you just repeat that last sentence? +AKI: I'm sorry. I lost your last sentence again. Could you just repeat that last sentence? JHX: Okay, I mean, to revert a stage it needs consensus, but in the case of this proposal there are many essential disagreements. So it's impossible, in my opinion, to achieve any new consensus. So after stage 3 I think the 360 delegates are saying that the stage 3 consensus of this class fields actually is weak, but it's impossible to achieve any new consensus. So that's the problem. BT: All right, so we're well over time for this item so we have to move on I think. JHX, thank you for bringing process concerns, you know, we should work to address those going forward. Now is not time to litigate what stage 3 means or which proposals were properly stage 3 or not. We're very much past that point, so I think at this time I'd like to congratulate this proposal for getting to stage 4. It's been quite a journey. And we can work on addressing all of these process concerns going forward. That's something that I am personally very interested in doing and so, you know, anyone can feel free to talk to me about that, to talk to the chairs generally about that. This is, I know, something that the chairs are watching out for a lot, ways that we can improve our process. To help everyone work better together. So please do bring up those concerns. Are there any final non-process related questions before we move on? Sounds like no. Congratulations to everyone who worked on this project - it was a lot of work. ### Conclusion/Resolution + - All three proposals advance to Stage 4 - Any general process concerns should be brought up to the chairs @@ -558,21 +551,21 @@ BFS: I'll respond. 
I do think it's virtualizable I don't think it's virtualizabl

SYG: A clarification question for BFS. I didn't quite understand the plan for it comment that you said from what I understood. You said I thought it sounded more impossible than we needed to plan for it if we standardize on the Behavior.

-BFS: No, I don't think it's impossible in particular my example with yaml versus JSON is kind of the example that I would point to, it's where you have, Let's say a consumer it says type yaml, because they expect to consuming yaml or type CSS because I expect to consume CSS in reality. There's some kind of transform that happens in your host. Not not a build step in your host. There's some virtualization going on that returns for example a JavaScript module back or a JSON module back instead of the specific type that existed before it was transformed through the virtualization layer. That’s really all you need to need to plan for. 
+BFS: No, I don't think it's impossible. In particular, my example with YAML versus JSON is kind of the example that I would point to. It's where you have, let's say, a consumer that says type yaml, because they expect to consume YAML, or type css, because they expect to consume CSS. In reality, there's some kind of transform that happens in your host. Not a build step: in your host there's some virtualization going on that returns, for example, a JavaScript module back, or a JSON module back, instead of the specific type that existed before it was transformed through the virtualization layer. That’s really all you need to plan for.

SYG: And the point is that that JavaScript module would be stamped somehow as blessed by this host virtualization hook and if any code execution were to happen. That's just like you have a buggy host. That's your fault because you try to virtualize it. Is that the point?
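The virtualization scenario BFS describes might be sketched roughly as follows. This is a hypothetical illustration, not any real host API: the names `transforms` and `loadModule` are invented here, and a real host would build an actual module record rather than a plain object.

```javascript
// Hypothetical sketch of a virtualizing host loader. A real host would
// synthesize a genuine module record; this toy version wraps the raw text.
const transforms = {
  // The importer asserts `type: "yaml"`; the virtualization layer turns
  // the YAML source into something that behaves like a JavaScript module.
  yaml: (source) => ({ kind: "javascript", exports: { default: source } }),
};

function loadModule(specifier, source, assertedType) {
  const transform = transforms[assertedType];
  if (transform === undefined) {
    throw new TypeError(`unsupported module type: ${assertedType}`);
  }
  const record = transform(source);
  // Although the record is now effectively a JavaScript module, the host
  // keeps advertising it under the type the importer asserted, so the
  // type assertion still holds after virtualization.
  return { ...record, assertedType };
}

const mod = loadModule("./config.yaml", "a: 1", "yaml");
console.log(mod.assertedType); // "yaml"
console.log(mod.kind); // "javascript"
```

On this reading, the "stamping" SYG asks about is entirely the virtualizing host's responsibility: if the transform misbehaves, that is a host bug, not a spec violation.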
BFS: Yes, it requires your virtualization be done properly, but it would basically state that although my virtualization returned a JavaScript module it is acting as if it were a yaml module, right, right. Okay, I understand.

-DDC: So I think the thank you for Bradley for raising the point like I think yeah, I think the plan is taken that like to the extent. I understand this that we would have to think about how to accommodate that scenario better. Yes, I think where I would come away with this is that we don't - we're not ready to do this yet. It still may be worth pursuing to drive alignment here, but I need to think about think about this case furher just to see if there's a way that we can we can make that work. 
+DDC: Thank you, Bradley, for raising the point. I think the plan, to the extent I understand this, is that we would have to think about how to accommodate that scenario better. I think where I would come away from this is that we're not ready to do this yet. It still may be worth pursuing to drive alignment here, but I need to think about this case further, just to see if there's a way that we can make that work.

-?: Just to be clear. I think you have to decide it for you. Go to stage 4. Yeah, like yep, that makes sense. Yep. 
+?: Just to be clear, I think you have to decide it for it to go to stage 4. Yeah, yep, that makes sense. Yep.

MM: Yeah suggest that you come to one of the SES meetings that where we prearranged that this s this s meetings on the top.

DDC: Sounds good to me. Okay, so I can just move to the next issue. I guess if there's there's no more on that. This one was this one is kind of related. But I think we could probably consider it separately.

-DDC: so some host might decide to allow a like a lot of time to be specified for just JavaScript modules, right? 
Like right now you obviously don't have to specify the type of those currently hosts are within their rights to like a lot of something like asserting type JS JavaScript ecmascript to whatever and I could Envision a future in which different hosts do different things for this and some of these different strings work on some hosts and not others and things just it kind of confusing because developers don't know like what to expect for these things working or not. I think it would be interesting to see if we could state that like these sorts of like that kind of the thing that more straightforward way to like try to avoid that potential future is just say that like none of these are allowed to like potentially saying something in the spec with prose like a string value provided to well. I guess it wouldn't be host supported extra modules group type since we are not adding that but we can still add language that is like if a type of search is present like then it must not be done. The specifier must not be loaded as a source text module record. And I think that I think this sidesteps the problems with transforms that Bradley had raised previously, although maybe not I need to think on that but I think it's worth seeing if we can not allow any of these type values to be In order to load a JavaScript module just to avoid ecosystem Divergence there among different between different hosts a couple different ways. This could be done in prose. Is that kind of suggested the idea of like a registry to standardize these among hosted been kind of mentioned in the past and there's also a like we could do it like the hard way and just like come up with a list of strings that the suspect could good and bad, but I think really the most likely way is just to like do this is somehow I think the biggest question is whether well one question is like is this doing second question would be like is this also falling afoul of the issues with Transformations discussed previously? 
I'll go to the queue Kia cuz I'm still thinking about that. 
+DDC: So some host might decide to allow a type to be specified for just JavaScript modules, right? Right now you obviously don't have to specify the type of those; currently hosts are within their rights to allow something like asserting type "js", "javascript", "ecmascript", whatever, and I could envision a future in which different hosts do different things for this, and some of these different strings work on some hosts and not others, and it gets kind of confusing because developers don't know what to expect about these things working or not. I think the more straightforward way to try to avoid that potential future is to just say that none of these are allowed, potentially by saying something in the spec with prose. I guess it wouldn't go in the host-supported assertions hook, since we are not adding that, but we can still add language that says: if a type assertion is present, then the specifier must not be loaded as a source text module record. I think this sidesteps the problems with transforms that Bradley had raised previously, although maybe not; I need to think on that. But I think it's worth seeing if we can disallow any of these type values being used to load a JavaScript module, just to avoid ecosystem divergence between different hosts. There are a couple of different ways this could be done. It could be done in prose; there's the idea of a registry to standardize these among hosts, which has been mentioned in the past; and we could also do it the hard way and just come up with a list of strings that the spec calls good and bad, but I think really the most likely way is to just do this in prose somehow. One question is whether this is worth doing; a second question would be whether this is also falling afoul of the issues with transformations discussed previously. I'll go to the queue because I'm still thinking about that.

BFS: Something I don't think it's a problem for Transformations. I think there's some serious confusion on my part on what you're asking to do here. It seems like you're stating if somebody specifies that a dependency is expected to be JavaScript it always fails.

@@ -598,14 +591,13 @@

DDC: I think it might have been this.

MM: It doesn't look specific enough. I'm sorry. I don't remember but I think if there is a precedent regarding options object specifically that that says more than just use is object assert that it's is object. It would be nice to take a look at that because you'd be great to have more regularity understood regularity among options.

-AKI: All right. Thank you. Mark. Next topic is from Gus, just a note that there's also an issue for import argument evaluation order and they say they would PR open for that. Next up is BFS. 
+AKI: All right. Thank you, Mark. Next topic is from Gus, just a note that there's also an issue for import argument evaluation order and they say they will have a PR open for that. Next up is BFS.

BFS: Sure, So this is an aside we can do this later if we pull off the Prototype that Ihave varied opinions on if it should or shouldn't reject depends upon if we pull off the Prototype all … I’ll follow up offline.

SYG: I want to hear from the 402 folks because reading 402 there is there are two abstract operations that deal with option bags. 
One of them is called CoerceOptionsToObject, which does the ToObject, and then there's a sentence in that AO that says its use is discouraged for new functionality in favor of the separate AO called GetOptionsObject which raises a TypeError if the input is not an object, so it seems like the Intl folks learned perhaps the hard way that they should not be coercive. -SFC: The new version of the GetOption construct came from Temporal, where previously in -Temporal we were using the legacy Ecma 402 GetOption abstract operation. Then some of the Temporal champions basically raised some salient concerns that we should not be autoconverting the argument to an object. We had this discussion and decided that for new Ecma 402 proposals, including those that are shipped in ES2021, we will use the new version of GetOption, but we're keeping the old operation around for backwards compatibility. I'm glad that we're having this discussion now with the greater group because we were seeking feedback from the greater group on this and we didn't get much feedback at the time on this subject. That's the background. This is relatively new. This just happened a few months ago when we added the new constructions. PFC was one of the people behind that as well. +SFC: The new version of the GetOption construct came from Temporal, where previously in Temporal we were using the legacy Ecma 402 GetOption abstract operation. Then some of the Temporal champions basically raised some salient concerns that we should not be autoconverting the argument to an object. We had this discussion and decided that for new Ecma 402 proposals, including those that are shipped in ES2021, we will use the new version of GetOption, but we're keeping the old operation around for backwards compatibility. I'm glad that we're having this discussion now with the greater group because we were seeking feedback from the greater group on this and we didn't get much feedback at the time on this subject. 
That's the background. This is relatively new. This just happened a few months ago when we added the new constructions. PFC was one of the people behind that as well. SYG: So given SFC's explanation. I am happy to also support reject and also happy to support it as a precedent for new features that use options bags. I would like for both 262 and 402 to treat options bags uniformly, legacy stuff notwithstanding. @@ -617,7 +609,7 @@ AKI: Thank you. ## Requests for services from Ecma for TC39 -DE: Okay, so request for services from ecma for TC39, so I want to talk about how we can work with Ahmed to make sure that Community needs are met. So each one talked about how this was done recently for the typesetting request which is going to require continued collaboration. I don't want to come to a conclusion about what services to request that's a much more detailed discussion and it will be for future presentations but more about what kind of process we could use. So why should we request services from Ecma? Ecma is leaders would like to meet TC39s needs we're clearly the biggest and most successful TC and Ecma, you know, most downloads. most people joining these days and paying membership fees and Ecma does want to see us succeed and does want to provide what we need to succeed. At least they have in the past said that when I talked to them about it . that are nine is recognized to have different needs from other TCs in ECMA So, you know, we have three co-editors who are working very hard and Ecma levels So, you know TC39 recognized have different needs from other TCS and works differently other TCS are largely TCS are largely run by Ecma staff and contractors. Actors just to get to get Services being provided by the Secretariat and we're you know, we need different services so So it came over once explicit signals from the committee about what need. 
and in writing would be the best way to get these kinds of requests for example, they mentioned conclusions in the TC39 meeting minutes. And in particular we talked previously like in the November 20 meeting about different ways that TC39 could get budget like maybe we could get our own budget to allocate ourselves. And so far what I've heard from ecma and I don't know if he's trying is there but, you know, correct me if I'm ? didn't think anything is that the ecma secretary would prefer if instead of asking for a budget we ask instead for services. So that's why this presentation is to bill services that are planned and we're encouraged not to think about the overall economic budget because that's the domain of the Ecma Secretariat the equity exact calm in the I could General Assembly. All TC39 member organizations can go to the Ecma General Assembly meetings in order. Your members vote on the budget. So I want to propose a kind of flow for making these requests. So, you know, someone will propose a request in DC that are 9 mm?. Plenary somehow the committee will come to conclusion. I'll talk more about that in a minute. So then the chairs or a chair appointed liaison would explain this request to to ECMA; ECMA can consider these requests and make a final judgment. And then you know from there it may take three to six months for the Ecma Secretariat or exact calm or GA or whichever decision-making bodies equities appropriate to decide on it in this type of setting case. I guess it was the Secretariat of the execom who needed to be engaged. 
+DE: Okay, so: requests for services from Ecma for TC39. I want to talk about how we can work with Ecma to make sure that community needs are met. Istvan talked about how this was done recently for the typesetting request, which is going to require continued collaboration. I don't want to come to a conclusion about what services to request; that's a much more detailed discussion and will be for future presentations. This is more about what kind of process we could use. So why should we request services from Ecma? Ecma's leaders would like to meet TC39's needs. We're clearly the biggest and most successful TC in Ecma: most downloads, most people joining these days and paying membership fees, and Ecma does want to see us succeed and does want to provide what we need to succeed. At least they have said that in the past when I talked to them about it. TC39 is recognized to have different needs from other TCs in Ecma; you know, we have three co-editors who are working very hard, and TC39 works differently: other TCs are largely run by Ecma staff and contractors and just get services provided by the Secretariat, and we need different services. So Ecma wants explicit signals from the committee about what we need, and in writing would be the best way to get these kinds of requests; for example, they mentioned conclusions in the TC39 meeting minutes. In particular we talked previously, like in the November 2020 meeting, about different ways that TC39 could get budget, like maybe we could get our own budget to allocate ourselves. And so far what I've heard from Ecma (I don't know if Istvan is there, but, you know, correct me if I misstate anything) is that the Ecma Secretariat would prefer that instead of asking for a budget we ask for services. So that's why this presentation is about services that are planned, and we're encouraged not to think about the overall Ecma budget, because that's the domain of the Ecma Secretariat, the Ecma ExeCom, and the Ecma General Assembly. All TC39 member organizations can go to the Ecma General Assembly meetings, where your members vote on the budget. So I want to propose a kind of flow for making these requests: someone will propose a request in TC39 plenary, somehow the committee will come to a conclusion (I'll talk more about that in a minute), and then the chairs or a chair-appointed liaison would explain this request to Ecma; Ecma can consider these requests and make a final judgment. From there it may take three to six months for the Ecma Secretariat or ExeCom or GA, or whichever decision-making body is appropriate, to decide on it. In the typesetting case, I guess it was the Secretariat and the ExeCom who needed to be engaged.

DE: So we previously sent this typesetting request to ECMA, we discussed it in November 2020, chairs and editors agree this is necessary. And so we raised the topic to them in December and these slides are outdated because now know that they apparently approved it in the April 2021 execom meeting. So there's some ideas for what we could ask for in the future. But later in meeting. We will see a presentation about the nonviolent communication funding request for service request. So what I would propose for the process here is that it's about gathering feedback on the committee's needs rather than voting or consensus oriented around blocking things. So delegate presents the need for the service to the plenary, the committee gives feedback and then at the end either synchronously or asynchronously the chair group can consider all feedback and judges whether to forward the request to ECMA. So the rationale for this process is that the goal is to share to expose real needs. It's not as core to the decision making function of the committee as when we change the language or the process. ECMA processes tend to discourage voting, but they don't they also don't require this hundred percent consensus, which is each one has told us is pretty unique to TC39 and maybe related groups that came off of it.
And finally, we don't have the authority to come to consensus on spending Ecma's money; that authority lies with the Ecma Secretariat and GA. Ecma has already told us that even if we do come to consensus on something, like the need for video conferencing software, they'll probably reject that request. So it doesn't seem right to think of us as reaching consensus and making these firm decisions, and I don't see that it even makes sense. So I want to ask if people have concerns about this process for requesting services from Ecma, and if people are interested in collaborating on formulating these requests.

@@ -625,7 +617,7 @@ SYG: The linchpin to me seems to be in that liaison part in relaying the request

DE: Oh, short answer: the chair group. In the case of typesetting, and I've done a bad job of this so far, the chair group communicated to Ecma that they authorized me to help with this communication. You know, ?’s presentation was the first time that I heard from the chair group, I mean from Ecma, that they were making these requirements, so I clearly did quite a poor job of liaising and I'll try to improve in the future. I think there are more steps that I could take: going forward, now that we have this result from the Execom, I'll start an email thread with the relevant parties and we can try to talk this over. And I certainly don't think that this deserves another six-month delay until the next Execom meeting.

-WH: Can you go to slide 8? Process for considering requests for services. I am not comfortable with delegating decisions on whether to request services or not to the chair group. I think this should be done by TC39. 
+WH: Can you go to slide 8? Process for considering requests for services. I am not comfortable with delegating decisions on whether to request services or not to the chair group. I think this should be done by TC39.

DE: Can you say more?
@@ -637,9 +629,9 @@ WH: I'd rather not get into the reasons.

DE: Okay, it's hard for me to respond in that case. So we can just leave it at that.

-WH: Can you say why this committee should not decide whether to forward these things? 
+WH: Can you say why this committee should not decide whether to forward these things?

-DE: I have rationale written right there. I mean, I think the committee should be should be making these proposals and the decision should be based on the feedback from the committee, but I think the standard that we use for consensus for changing the language in the process is just is just different from what we need to make organizational decisions like that. The chair group doesn't ask for consensus from the committee when organizing the agenda of the meeting. And there is a strong decision making process for the budget. It happens in the exec com and the secretariat and GAl. So I don't think we need to be that very strong filter ourselves. I think we need to just you know, these are administrative things. 
+DE: I have the rationale written right there. I mean, I think the committee should be making these proposals, and the decision should be based on the feedback from the committee, but the standard of consensus that we use for changing the language and the process is just different from what we need for making organizational decisions like this. The chair group doesn't ask for consensus from the committee when organizing the agenda of the meeting. And there is already a strong decision-making process for the budget: it happens in the Execom, the Secretariat, and the GA. So I don't think we need to be that very strong filter ourselves; these are administrative things.

WH: Some of what you said is just factually incorrect. The GA is not a strong filter.
diff --git a/meetings/2021-04/apr-20.md b/meetings/2021-04/apr-20.md
index 2a53e774..8d6d027f 100644
--- a/meetings/2021-04/apr-20.md
+++ b/meetings/2021-04/apr-20.md
@@ -1,6 +1,6 @@
# 20 April, 2021 Meeting Notes

-**Remote attendees:** 
+**Remote attendees:**
| Name | Abbreviation | Organization |
| -------------------- | -------------- | ------------------ |
| Robin Ricard | RRD | Bloomberg |
@@ -26,15 +26,13 @@
| Istvan Sebestyen | IS | Ecma |
| Frank Tang | FYT | Google |

-
## Intl Locale Info for Stage 3
-Presenter: Frank Tang (FYT)
+
+Presenter: Frank Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-locale-info)
- [slides](https://docs.google.com/presentation/d/1h-iaDM5RiD5rpb0aYr1GMRLRRBh72zVEKtMyMJkCkfE)
-
FYT: My name's Frank Tang, coming from Google and working on V8. Today I have several proposals to bring to you. I think this morning I have three, and the first one to talk about is all about ECMA-402. The first one I'll talk about is the Intl locale info API. This is seeking to advance to stage 3.

FYT: So what's the motivation? The motivation is that we have had a helpful Intl.Locale object in ECMA-402 for a while. It basically represents a locale, and what we are looking to do is expose a little bit more information about a locale: we have a locale that the user gives us, but there are certain properties that the system may know about it. So what kind of information? For example, the week data. If you need to render a calendar-like thing in different applications, each locale's conventions may identify a different first day of the week: in the U.S. the first day of the week is usually Sunday, but in Europe it is Monday. And which day does the weekend start on? In a lot of Western cultures it's usually Saturday, but in a lot of Middle East cultures it's actually Thursday. Whether the hour cycle is a 24-hour or a 12-hour cycle, and so on and so forth.
Actually, the measurement system we decided to take out.

@@ -57,7 +55,7 @@ YSV: Yep. Just wanted to chime in and say we're happy to see this go to stage 3

RBR: Thank you. Yeah. We liked explicit statements of support.

-FYT: Yes, and it's from Mozilla, right? 
+FYT: Yes, and it's from Mozilla, right?

YSV: Yeah, that was from Mozilla.

@@ -67,33 +65,32 @@ LEO: +1 for stage 3

SFC: +1

-RPR: So we've only heard positives. Let's just explicitly ask now, any objections to stage 3? [silence] Congratulations, you have stage 3 .
-
+RPR: So we've only heard positives. Let's just explicitly ask now, any objections to stage 3? [silence] Congratulations, you have stage 3. Okay, thank you.

### Conclusion/Resolution

-Stage 3 
+Stage 3

## Intl Display Names v2 for Stage 3
+
Presenter: Frank Tang (FYT)

- [proposal](https://github.com/tc39/intl-displaynames-v2)
- [slides](https://docs.google.com/presentation/d/1_BR2bq6gi_i9QjDDluv683cuO2AXNwZl-3hXC4gLl3M)
-
FYT: So this is the second proposal I'd like to talk about: Intl.DisplayNames. We're aiming for stage 3. To give a little bit of background history: this V2, called version two, was created as a stage 0 proposal in August 2020. The reason was that around that time we had advanced Intl.DisplayNames (v1) to stage 3, I believe, and after that I think V8 shipped it. In September this proposal advanced to stage 1, and in January this year it advanced to stage 2. So here we propose to advance to stage 3 in this timeframe.

-FYT: So what is entailed in Intl.DisplayNames?
then is there a lot of people in different application localized and translate string and we're not going to touch those things, but there are common items that most of the applications have to deal with - of course not all - in we see they're having a lot of time and it's better to have be able to just, you know, reduce the payload that application has to carry that in the browser therefore, we have a straightforward way API to provide that to to provide this API for display names for commonly used items.
+FYT: So what is entailed in Intl.DisplayNames? There are a lot of people in different applications localizing and translating strings, and we're not going to touch those things. But there are common items that most applications (of course not all) have to deal with, and we see them spending a lot of time on this. It's better to be able to reduce the payload that the application has to carry by having the browser provide it; therefore, we have a straightforward API to provide display names for commonly used items.

FYT: So, as we talked about this in January, what changes happened after the January meeting? First of all, there is draft text here; you can take a look at it. Some scope changes happened during the January meeting, I believe, but currently what we have is this: I think TG2 decided to drop the time zone, because it's a little bit more complicated to support time zone display; you need to figure out things like daylight saving time. So we think this should be a simple model and shouldn't include those. In the stage 2 timeframe we believe one thing that should be supported is the calendar name. That means: there are different calendars, the Japanese calendar, the Coptic calendar, and when you localize to a language, how do you display the name of that particular calendar. And also unit.
We have number formatting for a number with a unit, but here we have to provide a standalone - [audio glitch]. So we have to return a label for that unit; for example, certain user interfaces may need it. This means the label is not associated with a number; it's just a standalone name for that particular unit. Also, in version 1 we had something like this, but we decided to remove it at that time. We had a lot of different things together, like weekday names, month names, and also the date/time field names. Because of the complications of calendars, we all agreed in the V1 timeframe to remove the weekday and month names, but at that time we somehow also removed the date/time field names. A date/time field name is the label for a particular date/time field, not a particular value: for example, the name of "year" or "month" or "week", localized to different languages. So we decided to add it back, because that's not something you can get from any other API. One of the reasons we dropped things like the names of the individual weekdays or months is that you can get those elsewhere, but the date/time field names didn't fit that criterion. So after discussion we decided it is a good idea to add them back here, and it shouldn't be controversial; all the different locales should have values for them.
-FYT: Another one is very commonly required. There are several other people asking is to spec out, the dialects support for the language and where see what's that mean. So here's a change. We need to make I say new again. This does not mean the new to the proposed all the thing new to proposal is highlight as green text and the deleted text. So you drove the part. I label "new" mean is new since stage 2. Okay, so this is the change since January. We add this if car if there's a language we have (?)
and can step dialect name or dialect standard in and the default dialect in and we do something different and we'll also add a calendar date time field and unit as a new value for the type option. So here for example is time whole what does die like this handling mean and here is showing that there was showing differently to different thing is you see for example en-GB in dialect then it will just show British English, but in standalone it will show English and in parenthesis United Kingdom, so and so forth. So I show a couple other examples here.
+FYT: Another one that is very commonly requested, which several other people have asked for, is to spec out dialect support for the language display; let's see what that means. So here's the change we need to make. When I say "new" again: things new to the proposal are highlighted as green text, along with the deleted text struck through. I label "new" to mean new since stage 2. Okay, so this is the change since January. We add this: if there's a language, we have (?) and can set the language display to `dialectName` or `standardName`, with dialect as the default, and we do something different for each. We'll also add calendar, date/time field, and unit as new values for the type option. So here, for example, is what the dialect handling means; it shows the same input differently. You see, for example, that en-GB in dialect mode will just show "British English", but in standalone mode it will show "English" and, in parentheses, "United Kingdom", and so forth. So I show a couple of other examples here.
By the way, this data is in CLDR. I show in particular here the three new green-highlighted types, and also the one added back to the draft after stage 2: the date/time field names. Here is an example: for type calendar, it may return a different kind of name for each different calendar, and so on. For unit, here I'm showing Chinese and French, for example.

FYT: Yeah, this is the one I'm looking for. We also need a lookup table to decide whether a given field name is valid, so this is the only set of fields we're going to support, including year, era, quarter, timeZoneName, and second.

-FYT: Here's an example of data field here. Oh, sorry, I didn't include English should be pretty easy to figure out what to call? I've included Spanish and Chinese examples here. 
+FYT: Here's an example of the date/time field data. Oh, sorry, I didn't include English; it should be pretty easy to figure out what it's called. I've included Spanish and Chinese examples here.

FYT: Of course, we need to change the internal slots. There are some changes in the internal slots to represent the data. resolvedOptions has a change, but not in the algorithm itself, just an additional table entry. And of course an instance has to keep that information in an internal variable.

@@ -145,7 +142,6 @@ FYT: Good question. I don't know. Are these plurals?

RRD: Yes, this is the plural form in French.

-
FYT: This is, I think, a standalone form. I don't think we simplify. I believe, looking at how CLDR does it, this is used for just showing the particular item. I don't know whether there is a plural form or a singular form in the data. That's an interesting question.

WH: Okay, and in languages with multiple plurals, which plural do you get and how do you get the other ones?

@@ -198,7 +194,7 @@ FYT: I believe we talked about that. I mean the three criteria we have talked ab

SFC: I've talked with ZB a lot about this offline.
There are a lot of parts of this proposal that we've actually removed. We've removed more parts of this proposal than we've added because of various issues within TG2, and the latest proposal reflects the TG2 consensus which we've discussed multiple times in the TG2 meetings, including with ZB and others. The motivation for calendar display names is largely driven by Temporal. Both calendar and datetime fields are very important for calendaring applications when displaying dates in different calendars: this is how you can label what calendar they are in, and the datetime field is very important for filling out date forms and date pickers, for example. To be clear, when we say datetime fields we mean the translation of the word "months" or "hours" into other languages, like "Stunde" or "Hora", for example. They also require a fairly small amount of data for a high impact, which helps make their case for broad appeal, because one of the conditions for stage advancement is that the larger the amount of data, the bigger the use case required. And both of these require a fairly small amount of data. The use case for units is, for example, the unit converters and unit pickers that you see, so we've generally been happy with the use case for unit. That one has the weakest use case of the three, but it's also a fairly small amount of data that we already include, so it's basically just exposing that. But I'd say calendar and datetime fields are most strongly motivated, and unit also has some motivation.
-SFC: But also, process-wise, I don't think it's the job of this committee to rule on whether these are motivated. That's the job of the TG2 committee, and we have discussed this extensively in the TG2 meetings. So if you have concerns about the use cases, I would encourage you to file issues in the repository and come to the TG2 meetings when we have these on the agenda. 
+SFC: But also, process-wise, I don't think it's the job of this committee to rule on whether these are motivated. That's the job of the TG2 committee, and we have discussed this extensively in the TG2 meetings. So if you have concerns about the use cases, I would encourage you to file issues in the repository and come to the TG2 meetings when we have these on the agenda.

DE: Yeah, I apologize for missing the TG2 meeting. It's just that in your presentation on number format V3 you identified which parties were interested in things, and I'm wondering if you had parties interested in the unit display one, the unit picker?

@@ -216,7 +212,7 @@ DE: You're right. I'm raising this too late. Sorry.

RPR: DE, are you withdrawing that as a concern?

-DE: Yes. 
+DE: Yes.

RPR: Okay, Frank, so we still have the outstanding issue of whether we're okay with conditional advancement

@@ -238,13 +234,13 @@ YSV: Yes.

FYT: Dialect is the current value in the spec text. So my understanding is that the only change is to rename the strings from dialectName to dialect and standardName to standard, and that's the only change. Anba also raised the issue about additional support for menu, and my understanding is that that's not required for (?) and we can support it later, right? If we want, we can have another proposal to support that; it's an additional feature request that basically came in 24 hours ago. There's no reason to block on that, right?

-YSV: Yeah, so it's not a block on the menu thing. That is an additional request. It's primarily the unnecessary repetition of "name" thing and also our agreement that we can standardize around dialect. 
+YSV: Yeah, so it's not a block on the menu thing. That is an additional request. It's primarily the unnecessary repetition of "name" thing and also our agreement that we can standardize around dialect.

-FYT: as the author of this proposal.
I am happy to make that change to remove the name I do. And I personally don't see any issue to make a name change of that. So I'm glad that anba brought that up. If he'd brought that a little earlier I'd've already changed it. So I'm pretty happy to comply with that. I don't know what other people think about this. And would that be okay too
+FYT: As the author of this proposal, I am happy to make that change to remove the "Name" suffix. And I personally don't see any issue with making that name change. So I'm glad that Anba brought that up; if he'd brought it up a little earlier, I'd have already changed it. So I'm pretty happy to comply with that. I don't know what other people think about this. Would that be okay too?

SFC: Yeah, so there's a lot of new information in the last 24 hours on this proposal, on multiple different fronts. I'm really glad that we're getting all this extra information. I think that reasonable conclusions here for the committee are: (1) what YSV just stated, and with that change we can do conditional advancement to stage 3; or (2) say that, given the extra information from Daniel and Anba, among others, we'll continue to work on this proposal. It is a bit late for the feedback, but it's not too late, because this proposal is still at stage 2, and it's not uncommon for stage 3 advancement presentations to result in going back to the drawing board and coming back next meeting, which is in just six more weeks; it's still in the same quarter. So I think that both of those are reasonable conclusions, and I don't mean to play a process card and say that we can't take this feedback because it's too late. I want to make sure that we have a high-quality proposal. So I think that it's ready for stage 3, but I also think that maybe it could improve even more if we continue to work on this new feedback that we just got in the last 24 hours.

-YSV: That actually echoes also my feeling here.
Fundamentally, Mozilla does support this proposal but given the feedback that's come from multiple directions. It feels like the appropriate thing here to do is to defer to next meeting. Although I gave the requirements from our side, because the issue was posted from us, I think it'd be better and we would be more certain of what we're doing if we wait until next meeting.
+YSV: That actually echoes my feeling here. Fundamentally, Mozilla does support this proposal, but given the feedback that's come from multiple directions, it feels like the appropriate thing to do here is to defer to the next meeting. Although I gave the requirements from our side, because the issue was posted by us, I think it'd be better, and we would be more certain of what we're doing, if we wait until next meeting.

DE: I also support this proposal. I think it provides useful things, and I apologize for not being sufficiently engaged in the past; I'm looking forward to working together on it over the next month and a half.

@@ -252,35 +248,34 @@ RPR: Okay, so it seems like there's a lot of positive intent to do the steer tow

FYT: Would the committee like to decide the next step? I mean, if people feel that we need to move this to the next meeting, then I'd prefer to have some concrete action items. The only action item so far is to remove "Name", which we could just do by approving it here. I can come back in six weeks, but I'm not quite sure what the action items would be. I would rather not go away for six weeks, do nothing but remove the "Name" suffix, and come back again. So if the committee can give me some action items, I'm happy to do that.

-
SFC: Let me summarize what I think the action items are.
I think one is to work and verify with Anba and ZB on all the rest of the points in the issue that Anba opened, to make sure we're in alignment on all three of the points, including whether or not we want to handle the menu display, which sounds like it's currently inconclusive. And two is to work with DE to verify whether to continue to include the unit names, because of the issues that DE and WH raised here.

FYT: DE already dropped that issue, right?

-DE: I'm happy to keep discussing it over the next month and a half. I don't want to be this so objector blocking the whole proposal. So it's in that context that I dropped it. 
+DE: I'm happy to keep discussing it over the next month and a half. I don't want to be a sole objector blocking the whole proposal. So it's in that context that I dropped it.

-WH: A correction: I did not raise any issues today. All I was doing was asking clarifying questions. I did not raise issues about the proposal. 
+WH: A correction: I did not raise any issues today. All I was doing was asking clarifying questions. I did not raise issues about the proposal.

RPR: Okay, so I think then that Shane has listed the action items. Okay, happy for that to conclude this topic, FYT?

FYT: Okay sure. We can move on to the next topic.

### Conclusion/Resolution

-proposal does not advance, will discuss further with DE, Anba, and ZB and return next likely at meeting 
+Proposal does not advance; will discuss further with DE, Anba, and ZB and likely return at the next meeting.

## RegExp unicode set notation + properties of strings update
+
Presenter: Mathias Bynens (MB) & Markus W. Scherer (MWS)

- [proposal](https://github.com/tc39/proposal-regexp-set-notation)
- [slides](https://docs.google.com/presentation/d/1nV0NHUG5bd201rUSfJinLl8NTmnnyL5gTIhD0llsW1c/edit)
-
MB: So yeah, we're here to present an update about the regular expression set notation proposal.
We've had a lot of meetings, including a TC39 incubator call, since last time we presented at TC39, and all those meetings were really good and active, I think. We have a new coherent proposal that also includes the Unicode sequence properties proposal, which is now called the Unicode properties of strings proposal. So here's the general direction that we're going in right now. We're not asking for stage advancement today; this is just an update, but we hope to start working on drafting actual spec text. Drafting spec text is, of course, our main to-do on the way to the next meeting, and then potentially next time we'll ask for stage 2. But for now, here's an overview.

MB: The proposal is about adding syntax and semantics for difference, intersection, and nested character classes. Union is already supported in a limited form in current regular expressions, but it only works within a single character class. And since last time we brought this up at TC39, here are some of the issues we've been discussing. We'll go into detail; we have a slide for each of these with some code examples, which I think are easier to follow when we're talking about this stuff. Anyway, the slides are linked in the agenda, and there are links for everything, so you can find the background on the discussions and the alternatives we considered in the presented materials.

-MWS: Sure, so we had started the discussion with the suggestion of using a prefix like a `\u` with curly braces and we were using that for a while but in discussion, especially with WH, we tossed around pros and cons of that and we realize that could be misleading and lead to frustration, in particular because even though that sort of syntax looks like it should work on its own and have the desired effect of the new semantics. It actually would just be matching a literal uppercase U followed by the various characters if the `u` flag is not specified.
So we came back to considering other ways of saying that the character classes have different syntax and semantics and we are now proposing a flag that implies the Unicode mode and builds on top of it. So basically the `u` semantics would apply when this new flag currently is specified but in addition we would also have the modified syntax in the character classes and also use the same flag to enable the properties of strings that also came out of discussions that we have with WH and other people that it's it's cleaner to have the new semantics go together and enable both the set notation and the properties of strings together with the new flag. We also talked about a modifier, which some Regex engines use but because ecmascript hasn't had any modifiers inside the pattern like `(?x`, we are not proposing that at this point. We are not totally set on V but V is kind of the next "U" in the alphabet and that makes sense. There is also a limited set of letters available. But we are not totally set in stone on the particular letter.
+MWS: Sure, so we had started the discussion with the suggestion of using a prefix like `\u` with curly braces, and we used that for a while. But in discussion, especially with WH, we tossed around pros and cons of that and realized it could be misleading and lead to frustration, in particular because, even though that sort of syntax looks like it should work on its own and have the desired effect of the new semantics, it would actually just match a literal uppercase "U" followed by the various characters if the `u` flag is not specified.
So we came back to considering other ways of saying that the character classes have different syntax and semantics, and we are now proposing a flag that implies Unicode mode and builds on top of it. So basically the `u` semantics would apply when this new flag is specified, but in addition we would also have the modified syntax in the character classes. We would also use the same flag to enable the properties of strings; it came out of discussions with WH and other people that it's cleaner to have the new semantics go together, enabling both the set notation and the properties of strings together with the new flag. We also talked about a modifier, which some regex engines use, but because ECMAScript hasn't had any modifiers inside the pattern, like `(?x`, we are not proposing that at this point. We are not totally set on `v`, but `v` is kind of the next "u" in the alphabet, and that makes sense; there is also a limited set of letters available. But we are not totally set in stone on the particular letter.

MB: We actually have a slide with some more details on the flag, because we're not just talking about a new letter for a new flag; we also need the corresponding getter on RegExp.prototype. We've linked to the bikeshedding issue, it's number 14, so if people have any ideas or opinions there, please post them on GitHub. Here's an overview of all the current flags that we have in ECMAScript, including the latest addition, the `d` flag, which has the `hasIndices` prototype getter. So this is one example where, just because the flag name is `d`, it doesn't mean the getter name has to start with "d" necessarily, so we can kind of choose whatever we want. But if people have a better idea than something like "unicodeSets", then please let us know in the thread; other ideas there are "extended character class" or "Unicode character class". You can do whatever you want, basically.

@@ -357,9 +352,11 @@ RPR: Okay. Thank you. so we're basically the end of the time box Mathias or Mark

MB: This was just a status update. So if anyone has any feedback or ideas about any of what we've shown today, please participate on the github.
There are links in the slides to all the specific issues. So yeah, we look forward to hearing what you have to say. We'll start drafting spec text soon. Thank you.

### Conclusion/Resolution
+
Was not seeking advancement, but no expressed objections to the described syntax

## Extend TimeZoneName Option Proposal for Stage 2
+
Presenter: Frank Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-extend-timezonename/)

FYT: This is a proposal to extend an option, the time zone name, in Intl.DateTimeFormat. It is currently at stage 1, and we're asking for stage 2 advancement.

-FYT: so the motivation is to try to extend `timeZoneName`, which is already existing in Intl.DateTimeFormat.
+FYT: So the motivation is to extend `timeZoneName`, which already exists in Intl.DateTimeFormat.
And currently we have two valid values. We are trying to add additional values to support more formatting options; really, the intended change to ecma402 is to add additional valid options for the time zone name. This is a change from stage one: we changed the names. The pre-existing values are `short` and `long`, and we would like to add `shortOffset` and `longOffset` — we used to call these `shortGMT` and `longGMT`, and someone pointed out that's not a good name, so at ecma402, after consensus, we decided to change to `shortOffset` and `longOffset` — and also `shortWall` and `longWall`. I'll explain what these mean. So here's one of the code examples; let's go through all six options and what each displays in English. You will see the `shortOffset` and `longOffset` examples reference the relation to GMT, and the `shortWall` and `longWall` examples refer to PT or Pacific Time without identifying whether that's standard time or not standard time, which is what is actually identified by `short` and `long`.

FYT: There's another example in Chinese. I didn't include one other example, which you can find in the repo: I think in Russia or in France the offset may not use the token GMT. It could be, say, UTC; it is localized based on that particular locale, but it's always referring to the GMT offset, there are just different representations of it.

@@ -375,7 +372,7 @@ FYT: so again talking about the stage 2 or 3 requirements within 402 that we hav

FYT: We also considered the data size increase. As I mentioned, this was one of the concerns early on; there were actually several other options proposed for inclusion, and one of them (I forget which particular one) had some issue.
So we actually decided to remove it; I think it was the one related to the time zone city. Because, after we calculated the cost (as I mentioned, size is one of the things we care about), we filtered out two of the proposed items. The current four are what we believe strikes a balance, and we're showing here that some of them actually only have slight differences from the others. For example, for Pacific time, places that observe daylight saving will differ, because you will get either Pacific Time, Pacific Daylight Time, or Pacific Standard Time; but many places have no such difference. For example, Japan does not use daylight saving time, so either option will return the same value. Therefore the size increase is not dramatic, partially because of that; and for the short and long offset, each locale basically has about two patterns, and that's all; the other values are formatted according to those patterns. So here is the size increase that we calculated; this is purely based on looking at the source tags, compressed. We didn't consider any additional data-structure issues.

-FYT: as I mentioned broad peers also very important. So for example additional value that someone suggests to support for ISO 8601 time zone style, which we actually decline even though it would only use a small amount of data. The reason that ISO 8601 formatting is really coming through formatting the datetime form itself, which is not a localized formatting. Right? So there are no use case to just format kind zone for that particular format and mix it with other date-time format. We believe that request is legitimate, but it's not part of ECMA 402 so we suggest the one who requests for such a feature, go back to TG1 to talk about that may be supporting that (?) to do with machine-readable data and format, but not for human readable data format.
+FYT: As I mentioned, broad appeal is also very important. For example, someone suggested an additional value to support the ISO 8601 time zone style, which we actually declined even though it would only use a small amount of data. The reason is that ISO 8601 formatting really comes from formatting the datetime form itself, which is not localized formatting. Right? So there is no use case to format just the time zone in that particular style and mix it with other localized date-time formatting. We believe that request is legitimate, but it's not part of ECMA 402, so we suggest that whoever requests such a feature go back to TG1 to talk about maybe supporting that (?) for machine-readable data formats, but not for human-readable data formats.

FYT: The history is that we advanced to stage one in January, and in the April monthly meeting we went over it; I think we received a suggestion to change the names, changed them, and got support for coming here to ask for advancement to stage two. So here I am. If people here agree to advance this to stage two, we would also like to ask for two stage three reviewers. So here are the criteria for stage two; I believe we met them. With that, I'll open for questions.

@@ -400,24 +397,24 @@ USA: I'd be happy to help somebody if they want to review.

RBU: I have never reviewed something before but I'll be glad to help.
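The six option values under discussion can be exercised like this (a sketch: `shortOffset`/`longOffset` only exist in engines that have picked up the proposal, and `shortWall`/`longWall` were the names as proposed at this meeting, so unsupported values are guarded rather than assumed):

```javascript
// Formats one instant in each proposed `timeZoneName` style for en-US,
// America/Los_Angeles. `shortWall`/`longWall` are the names proposed at
// this meeting and may be absent from any given engine, hence try/catch.
const when = new Date(Date.UTC(2021, 0, 25, 12)); // 25 Jan 2021, 12:00 UTC
for (const timeZoneName of ['short', 'long', 'shortOffset', 'longOffset', 'shortWall', 'longWall']) {
  try {
    const fmt = new Intl.DateTimeFormat('en-US', {
      timeZone: 'America/Los_Angeles',
      hour: 'numeric',
      timeZoneName,
    });
    console.log(timeZoneName, '->', fmt.format(when)); // e.g. short -> 4 AM PST
  } catch (e) {
    console.log(timeZoneName, '-> (unsupported in this engine)');
  }
}
```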
### Conclusion/Resolution
+
- Stage 2
- Reviewers:
  - Philip Chimento
  - Rick Button (assisted by Ujjwal)
-
## Resizable Buffers
+
Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-resizablearraybuffer)
- [slides](https://docs.google.com/presentation/d/1K7t8lphY45yOfvsTOHxF4wZiMFCsVZZ_Bf_Wc7S3I_g/edit?usp=sharing)
-
SYG: So this is resizable array buffers and growable shared array buffers, again asking for stage 3. Since last time, the three action items identified from the last meeting were: one, to decide on a fixed page size for the implementation-defined rounding of max byte length. This was pushback, well taken, from the Mozilla folks and others, as to interop concerns: if we let the implementation define what page size to use for this optional rounding of the max byte length you pass into the constructor, that could exacerbate interop issues, so we should fix a page size. And I'll get into that first. There was another item, to make some progress on the webassembly integration, given that it is one of the main motivations for this proposal: to actually go to the wasm folks and sketch out a technical solution there for having this integrate into the webassembly JS API. And finally, to do a little bit more work and make a decision on having the constructor names be global, like resizable array buffer, or namespaced, like array buffer.shared. And I'll go through each of these in turn.

SYG: So to address the fixed page size issue: the decision is to just remove the implementation-defined rounding, and I'll walk through why we waffled here. The original motivation was that when you are making this new buffer and reserving the memory for it, implementations may want to round up to a page-size multiple anyway. [bot dropped for a second]

-SYG: tape size differences observable. You have more fingerprinting. You have possible interop issues.
now if we fix the if we address the concern that we do not want to expose implementation defined a page sizes implementation still might want to do the rounding with is on page size, right the original motivation still exists. Now if they do that you now have to still track another length. So now you just have the original problem and more complexity because you allowed this implementation to find Behavior. So the concurrent conclusion is just to remove the implementation defined rounding again, so this goes back to the original proposal where the max fight length is is never rounded whatever you pass in is what you get observably if the implementation chooses to reserve more memory than needed under the hood. The JavaScript programmer will never see that. I reached out to to the Apple folks and the Mozilla folks and think I got positive feedback from both that they are fine with removing the implementation-defined rounding. So that's the first one. The second one, for webassembly integration, I wrote up a integration PR into the webassembly spec; link there, WebAssembly/spec#1300. And the key thing for this specification draft is that the web assembly integration requires. Is that webassembly vended array buffers currently So currently there is a webassembly API called webassembly.memory where you can either from JavaScript make a new webassembly memory or if you already have a web assembly module you get a wrapper around the webassembly memory reflected via an array buffer. So there's currently already a webassembly.Memory API and array buffers that are vended by the webassembly memory API have more restricted behavior than JavaScript user-created buffers. Among other things, for example, they cannot be detached normally. They can only be detached by webassembly APIs. So for example if I transfer webassembly that vended array buffer that actually cannot be detached by web APIs it can only be detached by webassembly APIs. 
to integrate resizable buffer into the webassembly similarly, the resizable buffers are also more restricted than JavaScript-made buffers. Webassembly memories have more restrictions, such as that they can only be sized in page size of multiples of the webassembly page size, which I think is 64k. They cannot shrink, and if they are resized, if they grow, they can only grow in page size multiples. So to kind of handle that the current specification draft has a kind of HostResizeArrayBuffer host hook where an implementation can provide overriding behavior for how it should handle the resize. Of course there are restrictions like if you successfully resize the requested byte length must be reflected on the buffer and there's this short circuiting return value. So if you return handled and default resizing doesn't happen because the host already Took care of it with the additional restrictions. DE has recommended, or maybe just throwing out an idea, that maybe this just shouldn't be a host hook because this is not a thing that hosts do uniformly across like all implementations if I understood his concern correctly, and perhaps it could be a new pattern like the implementation just writes a custom resize method into some internal slot or something that can be reused. But again, this basically only exists for the webassembly use case. Ostensibly it could be useful for future lower level web APIs that may vend their own kind of buffers with additional size restrictions as well. But currently there are no other use cases except webassembly. +SYG: tape size differences observable. You have more fingerprinting. You have possible interop issues. now if we fix the if we address the concern that we do not want to expose implementation defined a page sizes implementation still might want to do the rounding with is on page size, right the original motivation still exists. Now if they do that you now have to still track another length. 
So now you just have the original problem and more complexity, because you allowed this implementation-defined behavior. So the current conclusion is just to remove the implementation-defined rounding. This goes back to the original proposal, where the max byte length is never rounded: whatever you pass in is what you get observably; if the implementation chooses to reserve more memory than needed under the hood, the JavaScript programmer will never see that. I reached out to the Apple folks and the Mozilla folks, and I think I got positive feedback from both that they are fine with removing the implementation-defined rounding. So that's the first one. The second one, webassembly integration: I wrote up an integration PR into the webassembly spec; link there, WebAssembly/spec#1300. The key thing this specification draft addresses is webassembly-vended array buffers. Currently there is a webassembly API called WebAssembly.Memory where, from JavaScript, you can either make a new webassembly memory or, if you already have a webassembly module, get a wrapper around the webassembly memory reflected via an array buffer. Array buffers that are vended by the WebAssembly.Memory API have more restricted behavior than JavaScript user-created buffers. Among other things, for example, they cannot be detached normally; they can only be detached by webassembly APIs. So for example, if I transfer a webassembly-vended array buffer, that actually cannot be detached by web APIs; it can only be detached by webassembly APIs. To integrate resizable buffers into webassembly, similarly, the resizable buffers are also more restricted than JavaScript-made buffers. Webassembly memories have more restrictions, such as that they can only be sized in multiples of the webassembly page size, which I think is 64k.
They cannot shrink, and if they are resized — if they grow — they can only grow in page-size multiples. To handle that, the current specification draft has a HostResizeArrayBuffer host hook, where an implementation can provide overriding behavior for how it should handle the resize. Of course there are restrictions: if you successfully resize, the requested byte length must be reflected on the buffer, and there's a short-circuiting return value, so if you return "handled", the default resizing doesn't happen because the host already took care of it, with the additional restrictions. DE has recommended, or maybe just thrown out an idea, that maybe this shouldn't be a host hook, because this is not a thing that hosts do uniformly across all implementations, if I understood his concern correctly; perhaps it could be a new pattern, like the implementation writes a custom resize method into some internal slot or something that can be reused. But again, this basically only exists for the webassembly use case. Ostensibly it could be useful for future lower-level web APIs that may vend their own kinds of buffers with additional size restrictions as well. But currently there are no other use cases except webassembly.

SYG: I presented the integration PR to the webassembly community group and it reached phase one. They have a phase system, for those unfamiliar, that is inspired by our stage system. The difference is that when they advance phases, they take a formal vote between five options: strongly favor, favor, neutral, against, and strongly against. Everyone present on the call was either strongly favorable or favorable for this (??) PR, and the idea is that they are now waiting for us to advance this proposal to stage 3, after which I will go back to the WASM CG and they are happy to fast-track the PR to phase 3, which is also their implementation phase.
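For context, the buffer API being discussed looks like this (a sketch of the proposal's surface only; it is feature-detected because engines without the proposal will lack `resize` or reject the options bag):

```javascript
// Resizable ArrayBuffer as proposed: the max byte length is fixed at
// construction, and in-bounds growth happens in place, observably.
// Feature-detected so the sketch degrades on engines without the proposal.
function demoResizable() {
  if (typeof ArrayBuffer.prototype.resize !== 'function') return null;
  const buf = new ArrayBuffer(8, { maxByteLength: 64 });
  const view = new Uint8Array(buf);  // no explicit length: length-tracking view
  const before = view.length;        // 8
  buf.resize(32);                    // grow; must stay <= maxByteLength
  const after = view.length;         // 32: the view tracks the buffer
  return { before, after, max: buf.maxByteLength };
}
console.log(demoResizable());
```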
So the conclusion here is: I think there has been good progress made on the webassembly side for us to advance this proposal to stage 3.

@@ -427,7 +424,7 @@ SYG: And finally, I want to highlight this question that CZW asked recently

SYG: All right, with that I'll take any queue questions. There haven't been significant changes. I pointed out the spec changes since last time, which are basically reverting the max size rounding and adding this host hook, which only affects webassembly. And that host hook might be changed to another mechanism, but I would judge that to be an editorial concern.

-CZW: Yeah. Okay, great. And you just mentioned the size of the new place real locations from the perspective of the language. However, that impresses itself is that ability isn't reflected in the spec text. so the behavior is not observable from JavaScript and we have discussed. So that makes the prominent feature not significantly different from in-place or copying reallocations. That is to say, in the spec, there is no significant difference between what ResizableArrayBuffer and ArrayBuffer transfer can provides.. That makes assumptions array buffer prototype transfer provides a generic resizing feature that is not a very different thing from the resizable array buffer we can provide a generic reallocation feature not orthogonal to resizing purely regarding the spec.
+CZW: Yeah. Okay, great. You just mentioned in-place reallocations from the perspective of the language; however, that ability itself isn't reflected in the spec text, so the behavior is not observable from JavaScript, as we have discussed. So that makes the prominent feature not significantly different from in-place or copying reallocations. That is to say, in the spec, there is no significant difference between what ResizableArrayBuffer and `ArrayBuffer.prototype.transfer` can provide.
That makes the assumption that `ArrayBuffer.prototype.transfer` provides a generic resizing feature, which is not a very different thing from resizable array buffer; we could provide a generic reallocation feature, not orthogonal to resizing, purely in terms of the spec.

CZW: You mentioned that resizing in place is not enforced in the spec; there is no such assumption. So the question about orthogonality is still there.

@@ -520,17 +517,17 @@ PHD: it's I mean Keith happy to get into that. There's an aliasing mechanism, wh

YSV: I just wanted to say that I do think what moddable is requesting is appropriate, which is to delay the decision until they have had time to validate that this is solvable within their domain. We don't have any other engine representing Internet of Things, and they bring a unique perspective to the committee. As far as I understand, the course of action here would be to take Shu's preferred approach, which is a new global, and see if they can integrate it in their engine without significant costs or significantly undermining any future efforts that they may be planning. I think that this is actually pretty reasonable. It doesn't put a lot of strain onto the champion, beyond requesting more time to validate something, and I would like to see that honored, so that if we move this to stage three and it is found to be a significant problem, we don't end up with three implementations that have already implemented it while the one engine that happens to have a unique setting, a unique host, is unable to implement it, even though this would be needed, essential functionality; we'd suddenly be throwing one use case under the bus. If we can avoid that I think it would be great. I also wouldn't see this necessarily as setting a precedent.
I think this is highlighting a larger problem, and a very specific case of it that we may want to reference later when finding a true solution; but for now we can solve it just for this case and then see what we do about it in the future.

### Conclusion/Resolution

-will overflow to discuss remaining queue items
+will overflow to discuss remaining queue items

## Change Array By Copy

-Presenter: Robin Ricard (RRD)
-- [proposal]()
-- [slides]()
+Presenter: Robin Ricard (RRD)
+- proposal
+- slides

-RRD: Hi, I'm Robin. I'm the delegate for Bloomberg and today I would like to introduce change array by copy for stage 1. So first a little bit of History. Change array by copy is actually derived from the tuples proposal. We wanted to introduce use all of the methods that we added to other prototypes inside of arra dot prototype, and we asked what would be the best way to do so, and we agreed and understood that the solution would be to do a separate proposal. So both proposals could have their own value in the prototype.
+RRD: Hi, I'm Robin. I'm the delegate for Bloomberg, and today I would like to introduce change array by copy for stage 1. So first, a little bit of history. Change array by copy is actually derived from the tuples proposal. We wanted to introduce all of the methods that we added to `Tuple.prototype` inside of `Array.prototype`, and we asked what would be the best way to do so; we agreed that the solution would be a separate proposal, so both proposals could have their own value in the prototype.
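The derive-a-new-copy idea RRD describes can be sketched in userland; `pushed`, `sorted`, and `withAt` here are illustrative stand-ins for the proposed methods, not the API itself:

```javascript
// Userland sketch of non-mutating "change by copy" semantics: each helper
// returns a fresh array and leaves the input untouched. The names are
// illustrative only; the proposal's method names were still being bikeshed.
const pushed = (arr, ...items) => [...arr, ...items];
const sorted = (arr, compareFn) => [...arr].sort(compareFn);
const withAt = (arr, index, value) => {
  const copy = [...arr];
  copy[index] = value;
  return copy;
};

const xs = [3, 1, 2];
console.log(sorted(xs));       // [ 1, 2, 3 ]
console.log(withAt(xs, 0, 9)); // [ 9, 1, 2 ]
console.log(pushed(xs, 4));    // [ 3, 1, 2, 4 ]
console.log(xs);               // [ 3, 1, 2 ] (original untouched)
```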
And just as a quick reminder regarding Tuple: it is an immutable data structure in JavaScript and a primitive type, so tuples don't have identity; and being immutable, to derive a new version of a tuple you need to use methods that return a new version of the tuple.

RRD: And so the methods that we actually added to `Tuple.prototype` are the following: popped, pushed, reversed, shifted, sorted, spliced, unshifted, as well as `with`, and we'll come back to `with`. We would like to add them to `Array.prototype` and `TypedArray.prototype`; they correspond to the mutator methods. So essentially, if you have a mutator method like push, you're going to want a non-mutating method like pushed, which will represent the state of the array after performing the operation on it. So pushed will not return the element but will return the array with the element; hence the naming, and hence the past tense. We already established these on Tuple, and we want to bring them to Array and TypedArray because we found it would be very practical if we could write functions that are able to manipulate both tuples and arrays. `with` is the odd one out. Here is `with`: `with` is a way to change a value at an index, essentially replicating an assignment operation but in a non-mutating way. And we have a few others that we forgot to put onto the Tuple prototype, which are filled and copiedWithin, the equivalents of fill and copyWithin; those two methods we plan to put back into `Tuple.prototype` for consistency.

@@ -554,7 +551,7 @@ YSV: All right, so one comment that came up for us is, for some of the methods t

RRD: I see the point, and honestly we wanted to do all of the methods mostly for coherence reasons, having an ecosystem that is coherent here, but that is potentially a footgun.
That being said, maybe we could explore the avenue of naming for highlighting the footgun: namely, if the methods were named "copySomething", that would point out that there is, I wouldn't say danger, but an expensive operation going on. So yes, I agree this is something we need to think about, and I am probably going to open an issue on this, because we need to talk about it.

-RBU: I think that this is a valid concern in that introducing functions that people would like to use more but isn't doesn't actually satisfy the use case and does more work for no reason it would is not necessarily a good thing. What I will say is that I think it'll be worth investigating during stage one usage patterns of existing mutator methods that are constant time because I imagine that Most cases will- specifically with like push or pop to return values are different between the past tense in the present tense versions.
+RBU: I think this is a valid concern, in that introducing functions that people would like to use more, but which don't actually satisfy the use case and do more work for no reason, is not necessarily a good thing. What I will say is that I think it'll be worth investigating during stage one the usage patterns of existing mutator methods that are constant time, because I imagine that in most cases, specifically with push or pop, the return values are different between the past-tense and present-tense versions.
So it's not quite simply a swap; it's a different interaction style, and the way you think about using them is different. So I could envision a world in which people start to use pushed because that's what they see and that's what they like to use, but I don't really think it's going to cause a drastic shift in the ecosystem of people suddenly calling pushed instead of push, because push is still convenient if you need to mutate an array. But I think it's definitely worth investigating as part of stage one.

DE: Yeah. I think it's already quite common to do linear-time operations on arrays in JavaScript, where we have things like the spread operator. While this concern is legitimate, I don't think it really creates a new concern. I really like Robin's idea of naming them to emphasize their linear-time quality.

@@ -572,13 +569,14 @@ SYG: Okay, sounds good. I would encourage I guess that the for determining the w

RRD: I will reach out to implementers in general, so I will probably get in touch with you and YSV soon and see what we can do.

-DRW: Robin it looks like the queue is about to be empty and I don't see anyone clattering to add themselves at this moment.
+DRW: Robin, it looks like the queue is about to be empty, and I don't see anyone clamoring to add themselves at this moment.

RRD: Asking for stage 1.

DRW: Any objections? I do not see any objections, so it sounds like you have stage 1. Thank you. Thank you, Robin.

## Object.has for Stage 1
+
Presenter: Tierney Cyren (TCN)

- [proposal](https://github.com/jamiebuilds/proposal-object-has)

@@ -590,7 +588,7 @@ TCN: Some frequently asked questions: why not object.hasOwnProperty with the obj

JHD: I am strongly in support of this proposal. There's a bunch of things in the language that care about own-ness, so `in` is just insufficient (object spread and rest syntax, for example) - so I'm really excited for it. I'm not particularly attached to the name.
So I'm comfortable with `has` or `hasOwn`; either one is fine with me. I think that it would be really useful to consolidate the entire JavaScript ecosystem of `has`-like packages or `{}.hasOwnProperty.call` or `Object.prototype.hasOwnProperty.call` patterns - to push towards one nice clean pattern.

-JRL: So just want to voice support for the fact that the hasOwnProperty being on Prototype requires beginners to `Object.prototype.hasOwnProperty.call`. All of this is a mouthful and beginners either aren't comfortable with doing it or don't know how to do it. And so this leads a lot to the buggy code where they `unknownObject.hasOwnProperty('foo')`, even though they don't know it has `hasOwnProperty` method. We're users of the eslint rule that prevents you from doing `.hasOwnProperty` because of the bugs its caused. It is much much easier if we have a static method on an object or anywhere else that allows people to do it without the full property prototype.hasOwnproperty.call invocation Style.
+JRL: So I just want to voice support, because hasOwnProperty being on the prototype requires beginners to write `Object.prototype.hasOwnProperty.call`. All of this is a mouthful, and beginners either aren't comfortable with doing it or don't know how to do it. And so this leads to a lot of buggy code where they call `unknownObject.hasOwnProperty('foo')` even though they don't know whether it has a `hasOwnProperty` method. We're users of the eslint rule that prevents you from calling `.hasOwnProperty` because of the bugs it has caused. It is much, much easier if we have a static method on Object, or anywhere else, that allows people to do this without the full `Object.prototype.hasOwnProperty.call` invocation style.

YSV: To start, I just want to say that I really support this proposal; hasOwnProperty can be very tricky to use, and I love the idea of simplifying it. The rest is largely bikeshedding, so before I get into the bikeshedding naming stuff:
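The hazard JRL describes, and the workaround the proposed static would replace, can be sketched as:

```javascript
// Calling .hasOwnProperty directly is fragile: an object can lack it
// (null prototype) or shadow it. The common workaround borrows the method
// from Object.prototype; the proposal turns that pattern into a static.
const hasOwn = (obj, key) => Object.prototype.hasOwnProperty.call(obj, key);

const dict = Object.create(null); // no prototype, so no .hasOwnProperty at all
dict.foo = 1;
// dict.hasOwnProperty('foo')    // would throw: not a function
console.log(hasOwn(dict, 'foo'));      // true

const obj = {};
console.log('toString' in obj);        // true: `in` walks the prototype chain
console.log(hasOwn(obj, 'toString'));  // false: own-ness only
```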
I support stage 1. So, the bikeshedding bit: we do have a method on the Reflect object called `has`, which does a prototype-chain lookup, and this would be inconsistent and possibly very confusing. The other place where we use "has" in the spec is a proxy trap called `has`, so this points towards maybe choosing something like hasOwn. Beyond that I don't have a strong opinion about what the name should be, other than that we probably want to choose something else.

@@ -598,25 +596,25 @@ TCN: okay, I do just want to say to that in the in the repo there is a relativel

JHD: [via queue] most people don’t know about Reflect or Proxy at all.

-CLA: I just would like to like second what YSV just said about Reflect.has. I am very excited about this proposal. I was just wondering if you ever take a look into compatibility issues Object.has and the environment of user land implementations.
+CLA: I would just like to second what YSV said about Reflect.has. I am very excited about this proposal. I was just wondering if you ever took a look into compatibility issues between Object.has and userland implementations.

TCN: We have not. I am happy to figure out how to do that and to go do that. But yeah, I don't believe we have so far.

MM: Yeah, so once YSV pointed out the conflict with Reflect.has, I want to register that I feel strongly about that.
There are some methods in common between Reflect and the static methods on Object, and for each of the existing ones the semantics are very, very close, and users understandably expect them to relate to each other. The other analogy that seemed like a strong argument for `has` was keys, values, and entries; when you first put that up it seemed compelling to me, until you mentioned the thing about enumerability and symbols. Since the proposed operation includes non-enumerable properties and includes symbol-named properties, it would be misleading to name it in such a way as to suggest that it's part of a group with keys, values, and entries, in parallel to the way those relate to each other on collections. So for all of these reasons I think `has` is disqualified as far as I'm concerned, but `hasOwn` looks great.

-MF: I wasn't actually planning on saying much but on the table there when introducing saved one proposals that's phrase them in terms of the problem. We're trying to solve and I see that we're already getting to a point where the current title of the proposal is not looking valid anymore. So lets the both at the time we agree to move it to the stage 1 and at the end in the proposal document let's phrase it in terms of the problem.
+MF: I wasn't actually planning on saying much, but we said when introducing stage 1 proposals that we'd phrase them in terms of the problem we're trying to solve, and I see that we're already getting to a point where the current title of the proposal is not looking valid anymore. So both at the time we agree to move it to stage 1, and afterwards in the proposal document, let's phrase it in terms of the problem.

-TCN: Yeah. Okay. Yeah, I appreciate that input. I will it seems like there's pretty overwhelming support for has owned and quite a few - ones for has so we will We will go ahead and reflect that make sure that's good and then reflect that. Cool. Looks like there's 2 new speaker things.
things so go ahead.
+TCN: Yeah. Okay. Yeah, I appreciate that input. It seems like there's pretty overwhelming support for `hasOwn` and quite a few −1s for `has`, so we will go ahead and reflect that and make sure that's good. Cool. Looks like there are two new speaker queue items, so go ahead.

LEO: My question is not blocking stage 1 at all. I'm fully supportive of Stage 1 for `has` as proposed right now, or `hasOwn`; it's just a side question whether we should have both `Object.hasOwn` and `Object.has`, but we don't have time to answer that in this meeting.

-JHD (copied from TCQ): there's spec text; is there a reason we can't go right to stage 2 with `hasOwn`?
+JHD (copied from TCQ): there's spec text; is there a reason we can't go right to stage 2 with `hasOwn`?

TCN: Interesting. Okay. Yeah, I see your point. I had to take a second to process. Yeah, that makes sense; I think that would probably be good. On JHD’s question: I do kind of want to go back and look, because of the gap, and make sure this is something that we are okay with skipping, and make sure there aren't other areas that are going to conflict with hasOwn, or that it isn't going to introduce anything weird. But go ahead.

JHD: figuring out names is a legitimate concern to figure out during stage 2, before stage 3. Given that everyone seems to be pretty on board with `hasOwn` for now, and given that the shape of the API seems pretty straightforward, and given that there's already spec text that is three lines (if I were a reviewer, I would call it reviewed already): is there a reason we can't just jump to stage 2, and then figure out the name at that point? We can still take as much time as we need before going to stage 3, and before the time people ship it.
If that bothers people we can just call it stage 1, obviously, but I just wanted to throw that out there.

-TCN: I mean, I'm not going to say no. All right. 
+TCN: I mean, I'm not going to say no. All right.

AKI: So, any objections to Stage 2? All right, cool. I think I'm going to declare it, and congratulations to `Object.has` / `Object.hasOwn`; I look forward to the new title.

@@ -631,9 +629,9 @@

Presenter: Mark Cohen (MPC)

- [proposal](https://github.com/tc39/proposal-pattern-matching)
- [slides](https://hackmd.io/@mpcsh/HkZ712ig_)

-MPC: Let's Jump Right In so this is pattern matching. This proposal was formerly being authored and championed by Kat Marchán before they left TC39 it achieved stage 1 I believe in 2018. A new group of Champions has taken the proposal back up with a new direction so that group of Champions: myself, TAB, JHD, YSV, DRW, JWK, and RKG. thanks to all of you for your hard work and thoughtful contributions so far and also thank you to Kat for all the hard work. They did before this group took up the proposal before we get too far into this. I just want to state very clearly. This is an update and not a request for stage advancement. So as a Champions group, we're presenting what we think is the best version of this construct it like at the current moment, but we're not married to any particular syntax, spelling, etc. We're not seeking advancement of the examples we present we're just showing you where we are. There is going to a lot of byte shedding. So I'd like to as best we can avoid tons of bikeshedding here in plenary that can take place on GitHub and we can also address those kinds of questions when we do come back for stage advancement in the future.
+MPC: Let's jump right in: this is pattern matching. This proposal was formerly authored and championed by Kat Marchán before they left TC39; it achieved stage 1, I believe, in 2018. 
A new group of Champions has taken the proposal back up with a new direction. That group is myself, TAB, JHD, YSV, DRW, JWK, and RKG; thanks to all of you for your hard work and thoughtful contributions so far, and thank you also to Kat for all the hard work they did before this group took up the proposal. Before we get too far into this, I want to state very clearly that this is an update, not a request for stage advancement. As a Champions group, we're presenting what we think is the best version of this construct at the current moment, but we're not married to any particular syntax, spelling, etc. We're not seeking advancement of the examples we present; we're just showing you where we are. There is going to be a lot of bikeshedding, so I'd like to avoid, as best we can, tons of bikeshedding here in plenary; that can take place on GitHub, and we can also address those kinds of questions when we come back for stage advancement in the future.

-MPC: All right, so let's jump into priorities. Priority number one is this is pattern matching. So this might seem obvious, but we thought it was worth stating explicitly. This proposal is a entire conditional logic construct. It's more than just patterns. And so we have had to and we'll have to in the future make trade-off decisions involving ergonomics of like different use cases. And so this priority is us saying we want to prioritize the use cases of patterns because the biggest hole that we're filling. Another priority is we want to subsume switch. So first of all, we want there to be zero syntactic overlap with switch to make it more easily google-able. We feel like any overlap with switch will produce confusion and hinder the discoverability and Google-ability of pattern matching. We also want to reduce the reasons to reach for switch. A lot of us feel like switch is pretty confusing, a frequent source of bugs, and generally not the best design. 
So we'd like there to be really no more reason to reach for switch after pattern matching is in the language. However, switch is pretty ergonomic for working with tagged unions, for example, so we'd like to ensure pattern matching is equally or more ergonomic for those use cases where switch is good. We'd also like to be better than switch. So switch has a lot of footguns, The big one is that fall through is opt out. So if you forget a break statement that's really easy to do but potentially really hard to debug. Additionally omitting curly braces in your case statements hoists declarations to the top which is usually surprising. It's also difficult to work with things like untagged unions in switch. So we'd like that to be ergonomic and pattern matching as ergonomic as we can possibly make it.
+MPC: All right, let's jump into priorities. Priority number one: this is pattern matching. That might seem obvious, but we thought it was worth stating explicitly. This proposal is an entire conditional-logic construct; it's more than just patterns, and so we have had to, and will have to in the future, make trade-off decisions involving the ergonomics of different use cases. This priority is us saying we want to prioritize the use cases of patterns, because that's the biggest hole we're filling. Another priority is that we want to subsume `switch`. First of all, we want zero syntactic overlap with `switch`, to make pattern matching more easily googleable; we feel any overlap with `switch` would produce confusion and hinder its discoverability. We also want to reduce the reasons to reach for `switch`: a lot of us feel `switch` is pretty confusing, a frequent source of bugs, and generally not the best design, so we'd like there to be no remaining reason to reach for `switch` once pattern matching is in the language. 
However, `switch` is pretty ergonomic for working with tagged unions, for example, so we'd like to ensure pattern matching is equally or more ergonomic for those use cases where `switch` is good. We'd also like to be better than `switch`: it has a lot of footguns. The big one is that fall-through is opt-out, so forgetting a `break` statement is really easy to do but potentially really hard to debug. Additionally, omitting curly braces in your `case` statements hoists declarations to the top, which is usually surprising. It's also difficult to work with things like untagged unions in `switch`. So we'd like pattern matching to be as ergonomic as we can possibly make it for those cases.

MPC: Another priority is expression semantics. This draws on the prior art of pattern matching in other languages: in general, if you run into this construct in another language, you can use it as an expression, so you can write `return match ...` or `let foo = match ...`. We feel this is intuitive and concise, and we think it can be achieved in JavaScript.

@@ -643,7 +641,7 @@ MPC: For ordering, matches should always be checked in order. They're written fr

MPC: And then our last priority is user extensibility. In a nutshell, userland objects and classes should be able to encapsulate their own matching semantics. This grew out of thinking about regular expressions, which went basically as the following chain. First, surely it would make sense to use a regex as a pattern: if you're matching against a string, you should be able to put a regex in a match clause (we'll see syntax examples shortly), and if the regex matches then the string matches and you go into the right-hand side. And surely, if you can use a regex as a pattern and the regex has named capture groups, you would want those named capture groups available as bindings.
So if you match on a regex that has named capture groups, then whatever code you're writing on the right-hand side should have access to the contents of those groups. Now, we basically thought we had two options: as TC39, we can treat this as a magic special case and make that functionality available only to regexes, or we can provide a generic standard by which developers can integrate userland objects with the language construct. We believe the generic standard would be a boon to developer ergonomics, especially for libraries and SDKs: if you're providing some API object, response object, errors, or something like that, you can provide matching semantics along with it. All right, I'm briefly going to go to the queue now before we talk syntax examples.

-WH: The most important priority is absent, which is avoiding footguns. 
+WH: The most important priority is absent, which is avoiding footguns.

MPC: I think that's basically covered by our "be better than switch" priority; we were explicitly thinking of that as avoiding footguns. Many of us feel that `switch` has tons of footguns, and we want to avoid all of them.

@@ -651,23 +649,23 @@ WH: Well, unfortunately you added some worse ones in the examples. Let's resum

MPC: All right, this first example is just a basic one that doesn't use a ton of new features: some code that makes an HTTP request and then matches on the response. We're going to use it to go through each individual part of this construct and put names to the particular pieces, so that we have some vocabulary. First of all, the whole thing is the match construct; I've already used that term a few times. Within it we have four match clauses, and each of those clauses contains a pattern. 
So you see `when`, then parentheses, and the thing inside the parentheses is the pattern; the exception is the `else` clause, which doesn't contain a pattern and always matches anything. Patterns can use object or array destructuring: we see `status: 200`, `body`, and `...rest`, and those yield bindings just as they would if you were using ordinary destructuring. There are other ways to get bindings; we'll discuss that in a bit. The first clause matches 200, and the second one matches 301 or 304; we'll talk about that as we step through this.

-MPC: so at the top of the statement the thing being matched on we’re calling the matchable. So in this case the response that you're matching on. That's the (??)now a clause like I said consists of the when keyword a pattern inside parentheses, and then the right hand side. We're considering that to be sugar for a do expression. So you have curly braces and then a list of statements inside of it and we're just going to say that that is exactly a do expression, like you, you know, we originally considered having you write the new keyword, but we ultimately thought it would be nicer if you just had curly braces and then it’s sugar this pattern right here status 200 body rest that uses object restructuring syntax, which we feel pretty strongly should just work. Any object destructuring expression you write should be valid as a pattern. on top of the existing object restructuring syntax. We also can have patterns on the right hand side of a: so in this case “status: 200” is a pattern. Specifically, it's a leaf pattern or a literal pattern. I guess that just matches exactly the number 200 we can talk about semantics of that later. And then yeah patterns like I said can introduce bindings. This one introduces `body`, and `rest` to the right hand side. So those two things are available inside the new expression.
+MPC: So at the top of the statement, the thing being matched on is what we're calling the matchable. 
So in this case, it's the response that you're matching on; that's the matchable. Now, a clause, like I said, consists of the `when` keyword, a pattern inside parentheses, and then the right-hand side. We're considering the right-hand side to be sugar for a do expression: you have curly braces with a list of statements inside, and we're just going to say that is exactly a do expression. We originally considered having you write the `do` keyword, but we ultimately thought it would be nicer if you just had curly braces and the sugar. This pattern right here, `{ status: 200, body, ...rest }`, uses object destructuring syntax, which we feel pretty strongly should just work: any object destructuring expression you write should be valid as a pattern. On top of the existing object destructuring syntax, we can also have patterns on the right-hand side of a colon, so in this case `status: 200` is a pattern. Specifically, it's a leaf pattern, or literal pattern, that just matches exactly the number 200; we can talk about the semantics of that later. And then, like I said, patterns can introduce bindings: this one introduces `body` and `rest` to the right-hand side, so those two things are available inside the do expression.

MPC: Then this next one, WH, is the one you were asking about. This pattern contains `|`, the logical 'or' combinator; it just tests patterns until one of them succeeds, so this one matches if status is 301 or 304. This also means that patterns can be nested, which we'll see more examples of later on. One thing worth pointing out: `destination: url` is effectively a rename, but it's not actually a direct rename. What's happening there is that `url` is an irrefutable match, which means it matches whatever value `destination` is set to and binds that value to the name `url`, effectively performing a rename as a byproduct of how irrefutable matches work. 
So, in general, bare variable names are irrefutable matches: they'll match anything and just bind whatever is being matched to that name.

-MPC: And then lastly we have an else clause. This is a special fallback clause matches anything this is basically `default` in switch statements. Now one thing to note is a top-level irrefutable match. For example when Foo that's also a fallback clause and so we argue that it should be an early error to have multiple fallback clause or to have any Clauses after the fall back clause. All right going to keep going here. So write a top-level irrefutable match is also a fallback clause and right so if you're you're going to have a fallback clause, it has to either an else or a top-level irrefutable match like `when(foo)` but not both and then you can't have anything after it. This is you can basically think of this as unreachable code. We're like preventing unreachable clauses.
+MPC: And then lastly we have an `else` clause. This is a special fallback clause that matches anything; it's basically `default` in switch statements. One thing to note: a top-level irrefutable match, for example `when (foo)`, is also a fallback clause, and so we argue it should be an early error to have multiple fallback clauses, or to have any clauses after the fallback clause. To restate: if you're going to have a fallback clause, it has to be either an `else` or a top-level irrefutable match like `when (foo)`, but not both, and you can't have anything after it. You can basically think of this as preventing unreachable clauses.

-MPC: Another example here. This is like a bad very simple text Adventure game that's taking in commands as like an array of parameters. So here we see array destructuring. We saw object restructuring earlier. 
This is how array destructuring works, basically as you'd expect it, and we also see the `as` keyword here, which we can use that to introduce intermediary bindings. So specifically what's going on here that first clause will match Like go north go east go west or go south and it gives you access to the specific direction that the player chose as the direction binding.
+MPC: Another example here: a very simple (and bad) text adventure game that takes in commands as an array of parameters. Here we see array destructuring, where we saw object destructuring earlier; it works basically as you'd expect. We also see the `as` keyword, which we can use to introduce intermediary bindings. Specifically, the first clause will match `go north`, `go east`, `go west`, or `go south`, and it gives you access to the specific direction the player chose, as the `direction` binding.

-MPC: Next up this is introducing guards so we can have additional conditional logic patterns aren't expressive enough. For example, I guess until number not range lands. There's no way to like in you know, right a pattern that expresses that a number is within a range. So you have to use conditional logic like this to express that and so this is just like you Agents fetching from some page needed and point that first Clause matches if you receive more than one second Clause if you receive exactly one and the third one if you maybe don't receive a page at all, that's just the generic fall back. Another way to write the previous code without a guard and without checking the page count uses nested patterns, which we talked about before. So you can further kind of drill down into that data property and match on it inside of the bigger response object. So the first clause matches if data has exactly one element. 
The second Clause matches when data has at least one element and gives the first page as a binding you can imagine for like presentational purposes, you display the first page and then you have an array that might be empty and might contain more values for a carousel underneath or something and and yet like I said, this is nesting you can kind of infinitely recursively nest patterns within themselves where appropriate of course.
+MPC: Next up, this introduces guards, so we can have additional conditional logic where patterns aren't expressive enough. For example, until something like a number-range pattern lands, there's no way to write a pattern expressing that a number is within a range, so you have to use conditional logic like this. This example fetches from some paginated endpoint: the first clause matches if you receive more than one page, the second if you receive exactly one, and the third, the generic fallback, if you perhaps don't receive a page at all. Another way to write the previous code, without a guard and without checking the page count, uses nested patterns, which we talked about before: you can drill further down into that `data` property and match on it inside the bigger response object. The first clause matches if `data` has exactly one element. The second clause matches when `data` has at least one element, and gives the first page as a binding; you can imagine that, for presentational purposes, you display the first page, and then you have an array, which might be empty or might contain more values, for a carousel underneath or something. And, like I said, this is nesting: you can infinitely and recursively nest patterns within themselves, where appropriate of course.

MPC: Here's an example of using regular expressions in patterns. 
So this is a very terrible arithmetic expression parser. If you stick a regex inside a pattern, it works basically as you'd expect: it calls match on whatever you passed in (presumably we could say it's stringified; we're not married to any particular semantics around that). If the regex matches the matchable, then you go into the right-hand side; it's considered a match. This one has named capture groups, and we want those to be able to introduce bindings to the right-hand side. Likely, bare regexes will still be a bit of a special magic case, in that they're able to introduce bindings just from their named capture groups, whereas with the expression form you'll see later on, which lets you pass in a regular expression declared somewhere else, you'll probably have to say explicitly which bindings you're introducing, with an additional keyword. But in this case we feel it'd be nice to be able to introduce bindings just from the named capture groups.

MPC: Now this leads us, like I said earlier, to the matcher protocol, which is the user-extensibility piece: unless we want this to be a completely special case that's not replicable, we have to provide some sort of protocol. We get to that by way of another example. This code sample is a lexer of some kind, and it introduces the pin operator, which is that little caret (`^`). This is what enables the protocol, which we'll see in the next sample. You can think of the pin operator as an escape hatch from irrefutable matches. If I had just written `when (LF)` and `when (CR)` without the caret, then `LF` and `CR` would be irrefutable matches that introduce a binding, shadowing the constants declared above. 
With the pin operator, `LF` and `CR` are evaluated as expressions, and since they evaluate to the primitive constants declared at the top, matching is performed against those constants. This clause will succeed if the token is either 0x0A or 0x0D.

-MPC: Here's the protocol. So this code is a declaration at the start. It's a declaration of the matter protocol on some imaginary class. It's a really terrible name parser that just tries to split a string in two and return those two pieces. So then the match statement down below when we have when caret name so name is evaluated and since it turns into or sensitive valuates to a class with this special like symbol.matter method on it that method is then invoked to see if the clause matches now, we were not married to any semantics about like how that matcher symbol that matter method is supposed to be written. But basically it should just tell you if you know if the matchable is matching that particular class. So in this case, it's just going to check if the matchable has exactly two components like it's a string with exactly two space separated substrings. Basically, also see the “with” keyword which is used to pattern match the value returned by the matcher protocol. So in this case the protocol is returning an array of length 2 and so where destructuring that and pulling out the first name and the last name as first and last there's a additional guard on the first one. So like the two clauses are matching hyphenated last names followed by non hyphenated last names this operator. I just want to note this is probably the thing we are least happy with as a Champions group. This turns out to be a pretty hard problem to solve the prior art for, you know matching with a protocol is a bit of a mixed bag. This is Elixir’s approach that was brought into the proposal by Kat. We like it, but it's very much like open to other spellings or other ideas on how to do this, but we think the functionality is very valuable. 
+MPC: Here's the protocol. At the start, this code is a declaration of the matcher protocol on some imaginary class: a really terrible name parser that just tries to split a string in two and return those two pieces. Then, in the match statement down below, we have `when (^Name)`: `Name` is evaluated, and since it evaluates to a class with this special `Symbol.matcher` method on it, that method is invoked to see whether the clause matches. We're not married to any particular semantics about how that matcher method is supposed to be written, but basically it should just tell you whether the matchable matches that particular class. In this case it's just going to check whether the matchable has exactly two components, i.e. it's a string with exactly two space-separated substrings. We also see the `with` keyword, which is used to pattern-match the value returned by the matcher protocol. In this case the protocol returns an array of length 2, so we're destructuring that and pulling out the first name and the last name as `first` and `last`; there's an additional guard on the first clause, so the two clauses match hyphenated last names followed by non-hyphenated last names. I just want to note that this operator is probably the thing we are least happy with as a Champions group. This turns out to be a pretty hard problem to solve, and the prior art for matching with a protocol is a bit of a mixed bag. This is Elixir's approach, which was brought into the proposal by Kat. We like it, but we're very much open to other spellings or other ideas on how to do this; we think the functionality is very valuable.

-MM: Can I jump in with clarifying question while you're on the slide? Yeah, so you wrote this with the method being an instance method, but what creates an instance of name such that you're invoking this on an instance had I only see a mention of the class. 
+MM: Can I jump in with a clarifying question while you're on this slide? You wrote this with the method being an instance method, but what creates an instance of `Name` such that you're invoking this on an instance? I only see a mention of the class.

JHD: It's just an error in the code example, the method should be static. There's no magic construction or anything.

@@ -675,7 +673,7 @@ MPC: Yeah, sorry about that typo. I'll revise that before we update the repo lat

MPC: Next up, the nil matcher. This is from the prior art: most languages with pattern matching have the concept of a nil matcher, which just fills a hole in a data structure without creating a binding. In JavaScript, the primary use case would be skipping elements in arrays. Fortunately for us, this is already covered by destructuring syntax, by just omitting any identifier between commas. So, given how contentious the `_` identifier is, we would probably only pursue this if we saw strong support for it, but I'm just throwing it out there.

-MPC: Last up catch guards. This is also hopefully pretty simple. It's just sugar for `catch (error) { match(error) { ...` with an extra curly brace and level of indentation there would also be a slight change on the semantics, which is that on a non-exhaustive match we would rethrow the error that's in the catch clause. Rather than generating a new error so that you still have access to it. So there's that kind of default else throw error. 
+MPC: Last up, catch guards. This is also hopefully pretty simple: it's just sugar for `catch (error) { match (error) { ...`, saving an extra curly brace and a level of indentation. There would also be a slight change in semantics: on a non-exhaustive match we would rethrow the error bound in the catch clause, rather than generating a new error, so that you still have access to it. So there's a kind of default `else { throw error }`.
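As a rough sketch of the sugar just described, all of this being proposed syntax under active discussion, with `catch match` only one hypothetical spelling that the champions have not settled on:

```js
// Hypothetical catch-guard spelling (proposed syntax, subject to change):
try {
  riskyOperation();
} catch match (err) {
  when ({ code: 404 }) { console.log('not found'); }
  when (^TypeError) { console.log('bad input'); }
  // no `else` needed: a non-exhaustive match rethrows `err`
}

// ...intended as sugar for roughly:
try {
  riskyOperation();
} catch (err) {
  match (err) {
    when ({ code: 404 }) { console.log('not found'); }
    when (^TypeError) { console.log('bad input'); }
    else { throw err; } // the implicit rethrow described above
  }
}
```

The only semantic difference from plain `catch` + `match` is that implicit rethrow, which preserves the original error rather than surfacing a fresh "no clause matched" error.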
[To the queue]

@@ -687,17 +685,17 @@ WH: I'm opposed to the confusion that adding or removing a `^` will cause. We ha

TLY: Yeah, my question was about how exhaustive the matches are for array patterns and object patterns. On slide 19 there is something that confused me: it looks like that first one is exhaustive, otherwise you would never reach the next one. So it means everything in the array must be bound in order for that to match, right? If the data had two entries, that would not match.

-MPC: Yeah, it would match the second one clause in this one. If you don't specify a ...rest parameter or something like that, if you just specify a comma-separated list of bindings with no ellipses anywhere, It'll match arrays with exactly that number of items. 
+MPC: Yeah, it would match the second clause in this example. If you don't specify a `...rest` element or something like that, that is, if you just specify a comma-separated list of bindings with no ellipses anywhere, it'll match arrays with exactly that number of items.

-TLY: Okay, and the object one has different semantics where it only cares about whether or not the ones you list exists. I just want to point out the discrepancy. Maybe it's kind of intuitive because it's all done in other languages, but it's a weird kind of mismatch between the semantics of array match bindings and object match bindings. 
+TLY: Okay, and the object one has different semantics, where it only cares about whether the properties you list exist. I just want to point out the discrepancy. Maybe it's intuitive because it's done this way in other languages, but it's a weird mismatch between the semantics of array match bindings and object match bindings.

-MPC: Yeah. 
I can't speak for the entire Champions group, but we might be swayed to kind of make array match up with object in that way. Like I said, this is just a progress update like we're not married to that.

TAB: I would be very loath to do that, because there are very obvious patterns you could write that would surprisingly match; I suspect that would be a very bad footgun for people.

-TLY: I was just kind of wondering if maybe you would consider reaching parity by having an object have ...rest there, or ..._ something like that to say and I don't care about the rest. So it would by default refuse to match partial object shapes. 
+TLY: I was just wondering if maybe you would consider reaching parity by having an object pattern take `...rest` there, or `..._`, or something like that, to say "I don't care about the rest", so that it would by default refuse to match partial object shapes.

-MPC: We could also consider that yeah. 
+MPC: We could also consider that, yeah.

JHD: Okay, that would be an issue, since object destructuring checks the prototype chain. So, yeah, it's just not feasible to exhaustively check objects in that way, I think.

diff --git a/meetings/2021-04/apr-21.md b/meetings/2021-04/apr-21.md
index b780d680..84cdb076 100644
--- a/meetings/2021-04/apr-21.md
+++ b/meetings/2021-04/apr-21.md
@@ -1,6 +1,6 @@
# 21 April, 2021 Meeting Notes

-**Remote attendees:** 
+**Remote attendees:**
| Name | Abbreviation | Organization |
| -------------------- | -------------- | ------------------ |
| Robin Ricard | RRD | Bloomberg |

@@ -19,14 +19,13 @@
| Istvan Sebestyen | IS | Ecma |
| Frank Yung-Fong Tang | FYT | Google |

-
## Move test262 requirement to Stage 3

-Presenter: Gus Caplan (GCL) 
+Presenter: Gus Caplan (GCL)

GCL: Yeah, so I don't really have any slides. 
This is just a discussion I wanted to have with the various people who are involved in this. Something that has been brought up a few times in the past is moving the requirement for tests, which is currently a stage 4 requirement, earlier in the process. I should stress that I'm not looking for any consensus at this point; I just want to get a feel for how people feel about changes here, and see what ideas people have. The main idea that has been floated, that I've heard, is moving the test262 requirement to stage 3 in its entirety, and there are some pros and cons to that. The main argument I've seen against it is that, during the lead-up to stage 3, things can change a lot, so having tests in the test262 repo could mean a lot of churn and effort for people. So it doesn't have to mean merged tests: it could be having a PR open, or having tests in the proposal repo, or something like that; it's an open topic. So, yeah, I'm interested to hear how people feel about this. Do you agree with this, disagree, have other ideas?

-FYT: I think for any changes if there's no reason to change there's a reason to not change. I would like to ask you to summarize at least (I don't know what happened before) what harm the status quo in other words why the thing now needs to be changed. I mean, could you at least present that I mean if they're no harm their dull no reason to make any improvement. 
+FYT: I think that, for any change, if there's no reason to change, that's a reason not to change. I would like to ask you to summarize at least (I don't know what happened before) what harm the status quo does; in other words, why this now needs to be changed. Could you at least present that? If there's no harm, there's no reason to make a change.

GCL: Yeah, sure. 
So just from an implementation perspective: it can be difficult to start making an implementation, especially for some of these more complex proposals, when there aren't tests. It's not like we can't write the code that does something, but often there will be bugs in it, or even worse, it doesn't crash or anything, but it behaves differently from how the champions intended it to behave, and then those quirks become web reality, because stage 3 is about shipping things. So we get these little quirks; sometimes things aren't caught, sometimes they are. And it's also just more difficult to implement, because you have to sit there and read spec text, which is not a very enjoyable thing to read most of the time. So this is mostly a thing for people who are in the implementation phase.

@@ -36,31 +35,32 @@ PFC: From my experience writing tests for Temporal we did have some test262 test

KG: Yeah, thanks.

-LEO: Just a quick note. I understand it would save you time, but the complexity maintaining these tests usually when you have tests fo the complexities will be much lower than maintaining tests for syntax. I think I would probably have one of the overviews as well like people who worked on the tests for the class fields family. Even if they were done in stage 3 like some of the proposed especially for syntax there are lots of proposals for protest movements. I have more to say in like the time. 
+LEO: Just a quick note. I understand it would save you time, but the complexity of maintaining these tests, usually when you have tests for APIs, will be much lower than maintaining tests for syntax. I think I would probably ask for an overview as well from people who worked on the tests for the class fields family, even though those were done in stage 3, because some proposals, especially syntax proposals, require a lot of test maintenance. I have more to say when there's time.
JHD: I just want to reply that, especially in a large proposal like Temporal, there may still be, and there certainly have been, discrepancies between the proposed spec text and the polyfills. So despite a polyfill being easier to maintain and test against than test262 tests, I don't think it's necessarily any more of a guarantee of correctness. If it's easier, that's great, and that's useful; I just want to point out that there are still differences there.

JHD: The other one is: the requirement we decided a few years ago was to make one of the stage 4 requirements be an editor-approved pull request into the main spec. The effect of that, from my perspective, has been that the time gap between “stage 4” and “merged” is much smaller, which is useful. But more importantly, the quality of specification text written by proposal champions has, from my viewpoint, gone up dramatically in the years since we put that requirement in place, and it has helped to surface issues slightly sooner. Since it's only a requirement within stage 3, that doesn't necessarily mean it will surface issues before implementation - I'd have to dig up some implementations, but I feel pretty confident I would find a few things that we found from the PR and thereby avoided an issue in implementations. There's been a lot of usefulness and I think a lot of benefit. I think before that requirement, the number of delegates who paid attention to spec text who were not implementers themselves was a lot smaller, and now I think a lot more people are aware of it.
So I would then expect that if we had a requirement - not to merge tests into test262 before stage 3, but to get them mergeable before stage 3 and only merge them at stage 3 - the effect I expect is that the gap between getting stage 3 and getting implemented will be much smaller, because when test262 tests are merged, my understanding is that it's much easier for implementers to import those tests and implement. But I also expect that the quality of test262 tests, and the general understanding of how to write them, will go up over time, which may in fact help with a lot of the legitimate problems that have been reported by test262 maintainers and contributors about the difficulty of contributing, and the difficulty of improving the complexity around that. There was discussion on IRC yesterday about some of LEO's concerns, and he's next on the queue; I'll defer most of that to the next item. One of the concerns that was very compelling was that there'd potentially be a lot of changes during stage 2. It would almost be like we need an interim stage, a conditional stage 3 or something, where there will be no more changes, but it's not actually stage 3 until the tests are ready to be merged, and then it becomes stage 3, something like that. I'm wondering if that might mitigate some of the concerns, but I'll let LEO speak to those; maybe we can see what his thoughts are.

-LEO: Yeah, just after some analysis. I also had some discussions with RW but I'm not sure if he's in this meeting. I am in support of having tests available before stage 3 because that actually helps with the concept of what we want for stage 3 is like in implementation ready proposal like the idea of stage 3 in my brain is like that we want implementers to try out a proposal with some estimation of things becoming reality. There is a lot of other of my concerns of what it means to have that during stage 2 because stage 2 is too unstable. 
We expect too many changes and for some proposals this might kill like all availability time for anyone working in the proposed system. Like my not require too much as like a some API documentation etc.., There are a lot of tests for Queens(?) one of the examples of community that I seem really would be decorators. The greatest has been changing like sinful since forever. He even calls burn out on people trying to Champion it. And it's been a proposal that we actually requires a lot of tests food great, but currently if you get the current model of the creator's it's not hard to show the API and make us API surface documentation test would bring is another thing. It's actually a bad amount and any other new changes to the decorators proposal if they also require test that's also going to require a lot of things. So going on track of with what we'll see it would probably be good to see something like a stage 2.5 where yes, we actually like this proposal and it seems like it's finally ready for tests because someone working on stage 2 if they work a lot on the proposal, but they also work a lot of the providing tests and they come to TC39 anticipate that TC39 decides that's not the way they wanted that proposal. That means way more work costing time and maintenance of that proposal changing everything including the test. So what do we signal to Champions when they reach stage 2. It's mostly like I do not advise anyone who could be writing test right after you reach stage 2, which is actually what we signal for stage 3, but it's a different. perspective there because on stage 3 we expect stability. That's why we do have test262 as a requirement during stage 3 as well, because we expect this is stability. We expect implementations to follow the very great advantage of having tests ready when you reach stage 3, and I believe this would help accelerating implementations and they also like when things are far more consistent. 
I think that's would be a good guidance and also helpful to get champions and engaged I would sing chorus, but then in quarters of all I can gauge to the to having tests ready one of the other advantages of this was talking to Daniel on Monday and one of the things that might be even more useful is actually having some requirement. If you go transpiler related to that proposal and show that as a proof concept, I think that is actually a little bit more valuable in this process. He doesn't like change if we want or not but having a policy or transpiler ready before stage 3works as a good proof of concept and we can extract a task that we use for that. There are many more things that I could say here. I don't think I'm going to be discussing like the complexity of adding things to test262 because I think that it's worth of a lot of separate discussion. I'm also in support of that, but I just wanted to share this support because that's what I discussed with RW, someone who also important that statistics too cold for a long time, and this is my perspective. I'm going to share [a link](https://gist.github.com/leobalter/16364bb167633cb3cb31e0f95e160a2a) with some summarization of these points. 
+LEO: Yeah, just after some analysis. I also had some discussions with RW, but I'm not sure if he's in this meeting. I am in support of having tests available before stage 3, because that actually helps with the concept of what we want for stage 3: an implementation-ready proposal. The idea of stage 3 in my brain is that we want implementers to try out a proposal with some expectation of things becoming reality. I have a lot of concerns about what it means to have that during stage 2, because stage 2 is too unstable. We expect too many changes, and for some proposals this might kill all available time for anyone working on the proposal. It might not require too much, like some API documentation, etc. There are a lot of tests for Queens(?) 
One of the examples that comes to mind would be decorators. Decorators has been changing since forever; it has even caused burnout in people trying to champion it. And it's a proposal that actually requires a lot of tests, which would be great, but with the current model of decorators it's not hard to show the API and make API surface documentation; what tests would bring is another matter. It's actually a big amount of work, and any new changes to the decorators proposal, if they also require tests, are also going to require a lot of work. So, going along with this, it would probably be good to see something like a stage 2.5, where: yes, we actually like this proposal, and it seems like it's finally ready for tests. Because if someone working on a stage 2 proposal works a lot on the proposal, but also works a lot on providing tests, and they come to TC39 and TC39 decides that's not the way they wanted the proposal, that means way more work: the time and maintenance cost of changing everything in that proposal, including the tests. So what do we signal to champions when they reach stage 2? Mostly, I do not advise anyone to be writing tests right after reaching stage 2, which is actually what we signal for stage 3. But it's a different perspective there, because at stage 3 we expect stability. That's why we have test262 as a requirement during stage 3 as well: because we expect stability, and we expect implementations to follow. There is a very great advantage in having tests ready when you reach stage 3, and I believe this would help accelerate implementations, and things also become far more consistent. 
I think that would be good guidance, and also helpful to get champions engaged in having tests ready. One of the other advantages: I was talking to Daniel on Monday, and one of the things that might be even more useful is actually having some requirement for a polyfill or transpiler implementation related to the proposal, shown as a proof of concept; I think that is actually a little bit more valuable in this process. Whether we want it or not, having a polyfill or transpiler ready before stage 3 works as a good proof of concept, and we can extract tests that we use from that. There are many more things that I could say here. I don't think I'm going to discuss the complexity of adding things to test262, because I think that's worth a lot of separate discussion. I'm also in support of that, but I just wanted to share this support, because that's what I discussed with RW, someone who was also important to test262 for a long time, and this is my perspective. I'm going to share [a link](https://gist.github.com/leobalter/16364bb167633cb3cb31e0f95e160a2a) with some summarization of these points.

SYG: One of the motivations that GCL mentioned earlier was that they don't want buggy implementations to ship if the implementations were to misread the spec; that would have been caught by tests. I would like to respond and say that historically, in my experience, the difficulty cuts both ways. It's not easy to write tests out of thin air without an executable implementation to iterate on. So if the burden is on the champions to write the tests during stage 2, or whatever new stage this is, I would imagine the rate of buggy tests would also go up. 
I mean, even today under the status quo, it is not a rare occurrence that during the course of implementation in V8 or SpiderMonkey or whatever, we find bugs in some of the already committed test262 tests, if there are any from stage 2, and we have to fix them. Sometimes they're really just silly bugs, because there was no executable implementation, there was just a syntax error or something; sometimes they are actual bugs. But yeah, it cuts both ways. What I would really like to see is the barrier to test262 contributions significantly lowered, and I would like to offer a comparison here to web platform tests, which is another cross-vendor suite of tests, except they test web platform features for interop. There's really no barrier to entry: there is no complex directory structure, there is no frontmatter to learn, there is no metadata to learn. It's rather unstructured: if you can write a test, with some comments or whatever, you can get it in. And I would like us to move in that direction. I frankly have not gotten any value out of the complex frontmatter of test262. I have gotten some value out of the directory structure; that's fine. But if we want to change test262 to encourage folks to write tests earlier, we really need to do something about how easy it is to get a test written and accepted.

-FYT: Plus one for whatever the issue SYG mentioned. There are many times buggy test. 
+FYT: Plus one for the issue SYG mentioned. There have been many buggy tests.

LEO: So I wanted to mention something that's actually getting better: engine262 works really, really nicely for authoring test262 tests and checking them, and it's worth a lot of exploration there from all the delegates. 
Not only by people who are maintaining engine262 (and thanks, GCL), because one of the main reasons is that engine262 is basically a literal copy of the spec; that helps a lot with getting coverage and verification of tests. It's too difficult to write tests when you don't have anything to run your tests with that would give you some indication of passing or not. And yes, I think it's a totally worthwhile separate discussion if we want to talk about how to change test262 to improve contributions; both this current discussion and the contributions discussion are worth a lot of extra time. I historically tried changing a lot of things, and one of the main reasons these changes cannot happen right now is the way test262 is used among several projects; it's still hard to make any changes consistently across all the projects. And keeping test262 compatible is, in my opinion, harder than keeping ECMAScript compatible with web reality. Some of the changes I tried, like frontmatter changes, a lot of people complained about. Another pet peeve of mine is removing the copyright headers in test262. I went through legal; I got legal positions saying I could actually remove them, and I could just bring that to TC39, but technically I cannot remove the copyright headers because it breaks test runners in browsers. It's one of the things in test262 that people hate most, and very often, for maintenance of tests to succeed, you need to go and say: you need the copyright header, you cannot forget it. We tried to understand why, and the reason is that browsers are not too invested in changing that. Sorry, that was too harsh.

USA: I didn't want to say much apart from what was mentioned, but this is all great. I am really happy that we're discussing this.
I'm really happy we're talking about this. I hope we can come up with a venue for continuing this discussion. -SYG: Yes, so this item, let me give a quick overview of what happens in reality today for test 262 in case folks aren't aware. So what generally happens is that if the champions champions are eager I suppose or they have time or they would like to they generally contribute test262 tests. They write it themselves and sometimes that does happen before stage 3 sometimes does happen after stage 3 and in case the champions do not write test 262 for whatever reason because it is not a requirement for stage 3. What happens is that implementations generally want (or at least we want in Chrome) the presence of the feature in an interoperable tests suite before we ship something. So while stage 3 is implementation time, by the time we come to ship something, if there are not test262 tests that does hold up the shipping at least in Chrome because you know, we don't want to ship something that is not tested. We're just testing ourselves. And that seems eminently reasonable and what happens is that to not slow down velocity their Google contracts Bocoup to pick up the slack if there are features that have been stage 3 for a while and do not have test262 tests. We contract Bocoup to work on those tests and get them in and this usually works. Okay, but the tests don't get magically written right like someone has to write them and there are resources in place right now to make sure that they do get written and what I would want as an implementer, is that while historically this pattern has worked okay. I think the main answer here for my point of view as I said earlier is to make contribution easier. It's like a perennial complaint that engine implementer have: Certainly the writing test262 is a pain. Like figuring out how to copy and paste the right slight bit of spec into the metadata and then getting it reviewed and that kind of thing. 
That's not a thing that people really want to do if they can help. So the infrastructure around test262 could be drastically improved in my opinion, especially tighter coupling with implementations. So what happens if there are no test262 tests, is that when we Implement a new feature, you know, we're not going to commit code to the code? base without any tests. So we write tests anyway, but where did those tests go? Those tests go into the engine specific private test suites. They get uploaded and they run on our bots. So that happens. Wouldn't it be great if we just took those tests and automatically just exported them to test262. This is kind of how it works for WPT (web platform tests). There are these things called two-way sync bots where if implementers as a matter, of course when implementing right test anyway, let's get them to let's get them up streams and let's get that fast track to upstreams, you know, apply all the IP rules get the copyright and whatever I would like us to move toward that kind of future that would kind of I think it duplicate a lot of work and move everything kind of to be easier. 
+SYG: Yes, so for this item, let me give a quick overview of what happens in reality today for test262, in case folks aren't aware. What generally happens is that if the champions are eager, or they have time, or they would like to, they contribute test262 tests; they write the tests themselves, and sometimes that happens before stage 3, sometimes after stage 3. And in case the champions do not write test262 tests for whatever reason, because it is not a requirement for stage 3, what happens is that implementations generally want (or at least we want in Chrome) the presence of the feature in an interoperable test suite before we ship something. 
So while stage 3 is implementation time, by the time we come to ship something, if there are no test262 tests, that does hold up shipping, at least in Chrome, because we don't want to ship something that is only tested by ourselves. And that seems eminently reasonable. What happens, to not slow down velocity, is that Google contracts Bocoup to pick up the slack: if there are features that have been stage 3 for a while and do not have test262 tests, we contract Bocoup to work on those tests and get them in, and this usually works okay. But the tests don't get magically written; someone has to write them, and there are resources in place right now to make sure they do get written. While historically this pattern has worked okay, I think the main answer here, from my point of view, as I said earlier, is to make contribution easier. It's a perennial complaint that engine implementers have: writing test262 tests is a pain, like figuring out how to copy and paste the right bit of spec into the metadata, and then getting it reviewed, and that kind of thing. That's not a thing that people really want to do if they can help it. So the infrastructure around test262 could be drastically improved, in my opinion, especially tighter coupling with implementations. What happens if there are no test262 tests is that when we implement a new feature, we're not going to commit code to the codebase without any tests. So we write tests anyway, but where do those tests go? Those tests go into the engine-specific private test suites; they get uploaded and they run on our bots. So that happens. Wouldn't it be great if we just took those tests and automatically exported them to test262? This is kind of how it works for WPT (web platform tests). 
There are these things called two-way sync bots: since implementers, as a matter of course when implementing, write tests anyway, let's get those tests upstreamed, and let's get that upstreaming fast-tracked, you know, apply all the IP rules, get the copyright and whatever. I would like us to move toward that kind of future; I think it would deduplicate a lot of work and make everything easier.

RPR: We're actually at the end of the time box now.

GCL: Yeah, very happy with this discussion. Heard a lot of things I wasn't expecting to hear. But I think this is a great jumping-off point on this topic. Thank you, everyone. All right.

-RPR: Thank you to GCL and everyone. 
+RPR: Thank you to GCL and everyone.

## NVC Training Proposal
+
Presenter: Dave Poole (DMP)

- [issue](https://github.com/tc39/Admin-and-Business/issues/130)

@@ -70,17 +70,17 @@ DMP: We've been working for roughly the last six months to come up with topics t

DMP: So why do we want to do this? We at the inclusion group believe that good communication is fundamental to working well together and to producing good-quality products. Shared understandings and frameworks can help make that communication easier, even when we're talking about difficult subjects, and communication is especially challenging across cultures, across time zones, and across organisations. At the end of the day, we want to be productive even when opinions and positions differ. I just want to note that this isn't the first time this specific topic has been brought up; there were three instances that I could find: two on the reflector and one in the code of conduct repo. So hopefully this is not an entirely new subject to people.

-DMP: So what is nonviolent communication specifically? This is a methodology or framework developed by someone called Michael Rosenberg. If you're not familiar, this is the most concise explanation that I was able to find.
It comes from Wikipedia, but basically it's a communication strategy based on the principles of non-violence and it's a method designed to increase empathy and improve the quality of life for people that do this work and for people around them. The two links below cnvc.org and baynvc.org are two resources that I found to be very helpful in my research. cnvc.org is the global community for non-violent communication and bayncv.org is the Bay Area specific Regional group. 
+DMP: So what is nonviolent communication specifically? This is a methodology, or framework, developed by Marshall Rosenberg. If you're not familiar, this is the most concise explanation I was able to find; it comes from Wikipedia. Basically, it's a communication strategy based on the principles of non-violence, and it's a method designed to increase empathy and improve the quality of life for people who do this work and for people around them. The two links below, cnvc.org and baynvc.org, are two resources that I found to be very helpful in my research: cnvc.org is the global community for non-violent communication, and baynvc.org is the Bay Area-specific regional group.

-DMP: I want to talk a little bit about the process that we used to get to where we are today and to the training that we're going to recommend. We started by searching on CNVC.org for trainers presenting beginner/foundational level courses. We contacted five across North America and the UK, although there are trainers in every area of the globe. We asked them to submit proposals to deliver an intro course for us and from there, we selected two for a “short list” and asked them to normalize their proposals so that we could compare them side by side. 
+DMP: I want to talk a little bit about the process that we used to get to where we are today and to the training that we're going to recommend. We started by searching on CNVC.org for trainers presenting beginner/foundational level courses.
We contacted five across North America and the UK, although there are trainers in every area of the globe. We asked them to submit proposals to deliver an intro course for us, and from there we selected two for a “short list” and asked them to normalize their proposals so that we could compare them side by side.

-DMP: We are recommending the proposal by Kathy Simon and Itzel Hayward. This proposal is two sessions of our plenary observations held on different days. We'll do some group surveys and meetings with leadership to Custom Design training specific to our needs. We do four 90 minute sessions delivered across multiple four day plenaries. So for example one in July, then one in October, and then on into 2022. Following the completion of that, the training there would be two 60-minute office hours outside of the plenaries for any follow-up questions. The two issues that you see link to there are where we've been tracking these proposals for these trainings and all of the logistical stuff going on behind the scenes. Does anyone have any questions or comments? 
+DMP: We are recommending the proposal by Kathy Simon and Itzel Hayward. This proposal starts with two sessions of plenary observation, held on different days, plus some group surveys and meetings with leadership, to custom-design training specific to our needs. Then we'd do four 90-minute sessions delivered across multiple four-day plenaries: for example, one in July, then one in October, and then on into 2022. Following the completion of the training, there would be two 60-minute office hours outside of the plenaries for any follow-up questions. The two issues you see linked there are where we've been tracking these proposals for these trainings and all of the logistical stuff going on behind the scenes. Does anyone have any questions or comments?

IS: I have a question: two years ago, at the June 2019 General Assembly.
The TC39 Chair Group went to the Ecma General Assembly and asked for professional communication training, and I guess this is the reply to that. So this would be the so-called “professional communication training”. The answer of the General Assembly was that it allocated about 6,000 Swiss Francs for this activity, and it was thought that it would be a half-day workshop in one of the TC39 plenary meetings. But they wanted to see a more detailed proposal about what this project is about submitted to the General Assembly, and this contribution should be prepared by the TC39 chairs, or on behalf of the TC39 chairs by somebody, before we went ahead. So I guess that this presentation is, or could be part of, or could be, the answer to that. I don't know, but I just wanted to recall what happened two years ago. So we are in the middle of a process, and there is an action required towards the General Assembly, because this is what they have asked for. So this is my question: how are we going to satisfy the request of the General Assembly?

DMP: Yeah, definitely. Thank you for bringing up the comments, IS. We started from the assumption that, since the training was approved back in 2019, that budget approval was no longer valid. On the topic of funding, we really believe that this series of trainings would be immensely valuable. In addition, given that there is a global support body for non-violent communication, it would be a really powerful thing, because anybody can pick up the same topic wherever they are in the world and continue if they so choose.

-DE: So to speak to each one's Point. 
Yes, this is the continuation of the same topic and we do plan to make this into a more concrete written thing to send to the general assembly and they admit Secretariat everybody the Chair Group delegated to me to liaise with the Secretariat on these funding issues. So I am looking forward to continuing to work there. On the budget creep. You know when that 6,000 Swiss franc figure was chosen. It was not chosen based on estimates with trainers. I've looked through the different trainers that were interviewed, and you can see them all in the inclusion group repository. There's very detailed notes written up about these possibilities and I really think that the selected option here will be a lot more valuable than if we just went for the absolute lowest cost option. I think this is a change in format from what was previously proposed (to begin at a half-day session), but I think this will provide a lot more deep value, you know, this multiple meeting format that's being proposed in addition to the experience that these trainers have with anti-racism training as well. 
+DE: So to speak to Istvan's point. Yes, this is the continuation of the same topic, and we do plan to make this into a more concrete written thing to send to the General Assembly and the Ecma Secretariat. The Chair Group delegated to me to liaise with the Secretariat on these funding issues, so I am looking forward to continuing to work there. On the budget: when that 6,000 Swiss franc figure was chosen, it was not chosen based on estimates from trainers. I've looked through the different trainers that were interviewed, and you can see them all in the inclusion group repository; there are very detailed notes written up about these possibilities, and I really think that the selected option here will be a lot more valuable than if we just went for the absolute lowest-cost option.
I think this is a change in format from what was previously proposed (beginning with a half-day session), but I think this will provide a lot more deep value: the multiple-meeting format that's being proposed, in addition to the experience that these trainers have with anti-racism training as well.

IS: So my point was, you know, that this is our course of action, and they are expecting an answer now; what you have explained is a part of that answer. Yes, so it has to be explained to them. In particular, the scope of this workshop has to be explained to them, if that is OK, and that it is not CHF 6,000 but some USD 9,200 or whatever, you know, etc. So it is a continuation of that, and now we have to feed back to them what they requested, and tell them this is not exactly the same, because we have evolved in a little bit different direction, etc. But you have now just told me, so that's cool. Then you discuss it at the General Assembly meeting, and I guess you maybe get a modification, or a go-ahead, etc. I don't know, maybe some modification to the original idea, and in that case, in the July meeting and in subsequent meetings, if everything is approved, you have to take that into account, etc. You know, I just wanted to remind you that this is how it works within Ecma: they gave you the “ball”, which happened two years ago, and now you have to kick the “ball” back.

@@ -93,50 +93,50 @@ PFC: I just want to say that communication is difficult and I think it can only

RPR: So, Dave, do you have everything you want?

DMP: Yes. Thank you.
+ ## Read-only ArrayBuffer and Fixed view of ArrayBuffer for Stage 1 + Presenter: Jack Works (JWK) - [proposal](https://github.com/Jack-Works/proposal-readonly-arraybuffer/) - [proposal](https://github.com/Jack-Works/proposal-arraybuffer-fixed-view) - [slides](https://docs.google.com/presentation/d/1TGLvflOG63C5iHush597ffKTenoYowc3MivQEhAM20w/edit?usp=sharing) - JWK: Okay, so I want to introduce two separate proposals, and they are somewhat related. Today we have ArrayBuffers, but they are missing some features. The first is that we cannot freeze ArrayBuffers so that they won't be changed accidentally. For example, I may have some constant messages in an ArrayBuffer and I don't want someone else to change them, and today we have no way to prevent this. The second is that we cannot make an ArrayBuffer or typed views that are internally mutable but read-only externally. The third is that we cannot limit how much of the binary some use sites can view, which means that if we expose a TypedArray, even if we set the offset and length, they can still bypass the bounds via the `.buffer` getter on the prototype to access the whole ArrayBuffer. The final one is not too important for me, but maybe we can achieve it with this proposal: performance optimization. Today we have SharedArrayBuffers, but they have many limitations; they cannot be used in a context without cross-origin isolation. But if we can freeze an ArrayBuffer, the browser can share the memory directly instead of going through the structured clone algorithm. Therefore there are two new features to introduce: the first is the read-only ArrayBuffer and the second is the fixed view. JWK: Let me explain what the fixed view means. It describes the ability to limit a TypedArray or a DataView to only be able to view a small range of the underlying buffer. This feature can be composed with read-only, so there are four kinds of access (3 of them new) to the ArrayBuffer.
The bottom-right one (no limitation at all) is what we have today: we can read and write, or get the full buffer from the TypedArrays. The bottom-left one (read-only) can only be read, not written. The top-right one (fixed-view, writable) is read/write but limited to a small area, and the top-left one (fixed-view, read-only) is the most limited: read-only and limited to a small area. JWK: I haven't decided which API design to use, but I have some design goals the APIs should satisfy. The first is that it should be one-way, which means once you freeze the ArrayBuffer, there's no way back: you cannot turn a read-only ArrayBuffer back into a read-write ArrayBuffer. It's the same for the limited view: if you make a limited view of an ArrayBuffer, you can only make the view area smaller, never recover the full view. -JWK: Even though I have not decided which design to use, I have two possible designs. The first is a proxy-like new object, which acts like a proxy to an ArrayBuffer. _(shows slide "Possible design 1: Proxy view")_ You can see in the picture that parts 0-9 are the whole ArrayBuffer, and proxy1 is what we are going to introduce. It's a proxy to the underlying ArrayBuffer, but limited to the given offset and length. To anyone using proxy1, it will act like a normal ArrayBuffer, and the 0-1 area is not visible. We can also limit writability on a proxy: now we have proxy2 on area 6 to 9, and if we create a new Uint8Array from that proxy, this Uint8Array can only see the 6-9 part of the whole buffer and cannot change its contents. +JWK: Even though I have not decided which design to use, I have two possible designs. The first is a proxy-like new object, which acts like a proxy to an ArrayBuffer.
_(shows slide "Possible design 1: Proxy view")_ You can see in the picture that parts 0-9 are the whole ArrayBuffer, and proxy1 is what we are going to introduce. It's a proxy to the underlying ArrayBuffer, but limited to the given offset and length. To anyone using proxy1, it will act like a normal ArrayBuffer, and the 0-1 area is not visible. We can also limit writability on a proxy: now we have proxy2 on area 6 to 9, and if we create a new Uint8Array from that proxy, this Uint8Array can only see the 6-9 part of the whole buffer and cannot change its contents. -JWK: The second design _(shows slide “Possible design 2: configuration”)_ is configuration-like. We create a new typed array with a second options bag that limits the ability to access the ArrayBuffer, or we can add new methods on the TypedArray prototypes to freeze the view. Maybe I missed something (in the slides). Let me share the repositories. +JWK: The second design _(shows slide “Possible design 2: configuration”)_ is configuration-like. We create a new typed array with a second options bag that limits the ability to access the ArrayBuffer, or we can add new methods on the TypedArray prototypes to freeze the view. Maybe I missed something (in the slides). Let me share the repositories. JWK: The read-only part has two targets: first, we should add a new way to freeze a buffer, and second, we should have the ability to create a read-only TypedArray over a read/write ArrayBuffer. This means we can achieve internal mutability. -RPR: Okay, Jack, did you want to go to the queue? There are some questions for you. Okay, excellent. Thank you for that presentation. +RPR: Okay, Jack, did you want to go to the queue? There are some questions for you. Okay, excellent. Thank you for that presentation. MM: Hi.
First of all, I want to say I am very, very supportive of this proposal. I think that getting better control of mutability is a very high priority, and I want to make sure that we carefully distinguish something being frozen or immutable versus a read-only view of something that's possibly mutable. They're both valuable; for example, only something that's immutable can be safely shared with other threads without any interlocks. The main point I want to raise is that there's this stage 1 proposal (https://github.com/tc39/proposal-readonly-collections) that I've made and have been quietly working towards advancing. The main focus of that proposal is that for all of our collections there are these three methods: snapshot, diverge, and readOnlyView. Snapshot gives you back an immutable one; if you do snapshot on one that's already immutable, you get that one back. Diverge creates a fresh new mutable copy that starts from the state of the thing you're diverging. And readOnlyView simply makes a read-only view of whatever it is; if you're already looking at something that's immutable or a read-only view, the readOnlyView method just gives you that one back. So I think all of those apply to ArrayBuffer. Oh, thanks for bringing the read-only collections proposal up on the screen; I appreciate that. And so I think all of those methods and the general perspective of that proposal could easily be extended to include ArrayBuffer, and would satisfy some of what you're asking for here. I like the window aspect that you're asking for here, which is not part of that proposal. And finally, I want to mention the issue that Moddable raised, mentioned on the screen here: the ability to put data in ROM is also a big payoff from introducing immutable ArrayBuffers as a first-class concept in the language. So Bravo!
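MM's snapshot / diverge / readOnlyView triad can be sketched in userland for an ordinary Map. This is only an illustrative sketch: the method semantics follow MM's description above, but the `ROMap` class and every implementation detail are assumptions for illustration, not the actual API of the read-only collections proposal.

```javascript
// Userland sketch of the snapshot/diverge/readOnlyView pattern applied to Map.
// The ROMap class and its internals are hypothetical, for illustration only.
class ROMap {
  #map;
  #frozen; // true for snapshots: contents can never change again
  constructor(entries, frozen = false) {
    this.#map = new Map(entries);
    this.#frozen = frozen;
  }
  get(key) { return this.#map.get(key); }
  get size() { return this.#map.size; }
  set(key, value) {
    if (this.#frozen) throw new TypeError('immutable collection');
    this.#map.set(key, value);
    return this;
  }
  // snapshot(): immutable copy; a snapshot returns itself.
  snapshot() {
    return this.#frozen ? this : new ROMap(this.#map, true);
  }
  // diverge(): fresh mutable copy starting from the current state.
  diverge() {
    return new ROMap(this.#map, false);
  }
  // readOnlyView(): read-only facade over possibly-mutable state;
  // an already-immutable collection returns itself.
  readOnlyView() {
    if (this.#frozen) return this;
    const backing = this.#map;
    return {
      get: (k) => backing.get(k),
      get size() { return backing.size; },
    };
  }
}

const m = new ROMap([['a', 1]]);
const snap = m.snapshot();
const view = m.readOnlyView();
m.set('b', 2);           // mutating the original...
console.log(view.size);  // 2: the view tracks the mutable original
console.log(snap.size);  // 1: the snapshot does not
console.log(snap.snapshot() === snap); // true: snapshot of a snapshot is itself
```

This shows the key distinction MM draws: `snap` is frozen (safe to share), while `view` is merely a read-only window onto state its owner can still mutate.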
-SYG: Hi. So, for your two initial designs, the proxy views and the configuring of TypedArrays: I'm pretty against the proxy view design. I think ArrayBuffers and TypedArrays are different from other collections, because ArrayBuffers are already things that you cannot directly use; you already have to make an additional type to view them. +SYG: Hi. So, for your two initial designs, the proxy views and the configuring of TypedArrays: I'm pretty against the proxy view design. I think ArrayBuffers and TypedArrays are different from other collections, because ArrayBuffers are already things that you cannot directly use; you already have to make an additional type to view them.
You view them on top, via a TypedArray, and I am pretty reluctant to introduce more levels of indirection here by piling proxy views of ArrayBuffers on top, which themselves cannot be directly consumed and still need a TypedArray around them. That seems too complex to me, and that complexity will probably be reflected in the implementation, which is not great for security, because, you know, people try to break out of the sandbox and own engines with ArrayBuffers and TypedArrays all the time. That's my first item. On to the second item: the word security was thrown around. I know this is an ongoing disagreement among several delegates in committee on what constitutes security for the language, and we have a new TG2 to discuss that, so I would be mindful here of how much you want to bill this as security. The views would not, for example, be security against side channels, and they wouldn't necessarily be security against implementation bugs; and if the extra complexity makes it harder to secure the implementation, because there is suddenly an explosion of code paths to consider, that is arguably a net detriment to security. JWK: So when I say security, I mean application security here. -SYG: Yes, then please say that. I think that is the point. In general I am very sympathetic to the read-only case; I understand that you would want to lock some parts down to be read-only. I am more skeptical of some of the other use cases in that grid you put out, especially making a subsection read-write and disallowing writing to the rest. I would like to see some concrete use cases there, I guess. - -SYG: I am wondering what the concrete use cases are, if you could go back to the slide where you presented a grid _(slide "Introduce two new orthogonal features")_ of how fixed view composes with read-only.
So in the upper right is read-write of a small area. For, let's say, the top row: am I correct in understanding that the motivating use case for read-only of a small area is that WebAssembly use case you posted? +SYG: Yes, then please say that. I think that is the point. In general I am very sympathetic to the read-only case; I understand that you would want to lock some parts down to be read-only. I am more skeptical of some of the other use cases in that grid you put out, especially making a subsection read-write and disallowing writing to the rest. I would like to see some concrete use cases there, I guess. -JWK: Actually, the whole first row is from that use case. +SYG: I am wondering what the concrete use cases are, if you could go back to the slide where you presented a grid _(slide "Introduce two new orthogonal features")_ of how fixed view composes with read-only. So in the upper right is read-write of a small area. For, let's say, the top row: am I correct in understanding that the motivating use case for read-only of a small area is that WebAssembly use case you posted? +JWK: Actually, the whole first row is from that use case. -SYG: Okay, so the top row (read-only of a small area and read-write of a small area) is for the WebAssembly use case. Yes. +SYG: Okay, so the top row (read-only of a small area and read-write of a small area) is for the WebAssembly use case. Yes. -JWK: So if delegates think this is not a problem worth solving, I can bring only the read-only part of the proposal. +JWK: So if delegates think this is not a problem worth solving, I can bring only the read-only part of the proposal. -SYG: Yeah, I would like to dig into that more. We don't need to do it now. +SYG: Yeah, I would like to dig into that more. We don't need to do it now. BFS: You were saying mixing read-only and read-write with fixed areas, but I think the slide is not necessarily stating that these are mixed.
This could be read-only of a small area of a larger read-only or read-write buffer, right? -SYG: Isn't that mixing the mode of the entire ArrayBuffer? +SYG: Isn't that mixing the mode of the entire ArrayBuffer? BFS: No, I'm saying a fixed view on a read-only ArrayBuffer would be the top left; effectively that is a solution to your problem of mixing modes. @@ -144,9 +144,9 @@ SYG: I'm not sure I understood that. BFS: So Jack was explaining that the fixed view is essentially taking a subset of your data and having it be bounds-checked. And so if you do that on a read-write buffer, we're not mixing modes; if you do that on a read-only buffer, you're also not mixing modes. I think the concern you're having is whether a fixed view can change the read/write-ability of the underlying data. Is that correct? -SYG: No, I'm just talking about layering a fixed view on top of another thing that changes the writability. +SYG: No, I'm just talking about layering a fixed view on top of another thing that changes the writability. -BFS: I think we can take this offline, but I think this is just a confusion about capabilities. I agree mixing modes would be bad. +BFS: I think we can take this offline, but I think this is just a confusion about capabilities. I agree mixing modes would be bad. JWK: They are two separate capabilities. They're orthogonal features; you can use one without using the other. @@ -154,15 +154,15 @@ SYG: Fixed views exist today via TypedArrays. JWK: No, because you can access `view.buffer` to get the original buffer. -SYG: So you want to lock that down? It's not that you cannot limit the view; it's that you can always escape the view, and you want that to be impossible unless you were already passed the buffer. +SYG: So you want to lock that down? It's not that you cannot limit the view; it's that you can always escape the view, and you want that to be impossible unless you were already passed the buffer. -BFS: Correct.
This has actually been a problem with the Node.js Buffer, and a security issue in the past, where people were leaking data out of reused allocation pools. It was not unlike Heartbleed: Node had a similar kind of thing going on where you could basically escape your bounds check and start reading things. +BFS: Correct. This has actually been a problem with the Node.js Buffer, and a security issue in the past, where people were leaking data out of reused allocation pools. It was not unlike Heartbleed: Node had a similar kind of thing going on where you could basically escape your bounds check and start reading things. SYG: Thanks for the explanation. -JHX: I see the two designs presented, and I am not sure I understand: is creating a separate view just a view of the whole ArrayBuffer? The first one is more like the read-only collections proposal, so I think it's not necessary to add more indirection levels. +JHX: I see the two designs presented, and I am not sure I understand: is creating a separate view just a view of the whole ArrayBuffer? The first one is more like the read-only collections proposal, so I think it's not necessary to add more indirection levels. -MM: I want to respond to SYG (I see it's also the next thing that SYG has on the queue). I don't think that this proposal should be shy about saying security. It's unquestionably contributing to security. Security can often be broken down into availability, confidentiality, and integrity. (https://agoric.com/blog/all/taxonomy-of-security-issues/) This obviously makes no contribution to availability or confidentiality; side channels concern confidentiality. Read-only means being able to lock down mutability, which unquestionably makes a contribution to integrity. So there's just no reason to waffle about whether this contributes to security.
I think Google, in particular the Chrome team at Google, should be careful about trying to take a very idiosyncratic view of security that's focused exclusively on the same-origin model and side channels and turn it into a corrupted view of the general concept of security. +MM: I want to respond to SYG (I see it's also the next thing that SYG has on the queue). I don't think that this proposal should be shy about saying security. It's unquestionably contributing to security. Security can often be broken down into availability, confidentiality, and integrity. (https://agoric.com/blog/all/taxonomy-of-security-issues/) This obviously makes no contribution to availability or confidentiality; side channels concern confidentiality. Read-only means being able to lock down mutability, which unquestionably makes a contribution to integrity. So there's just no reason to waffle about whether this contributes to security. I think Google, in particular the Chrome team at Google, should be careful about trying to take a very idiosyncratic view of security that's focused exclusively on the same-origin model and side channels and turn it into a corrupted view of the general concept of security. SYG: I would like proposals to be wary of using the word "security" simpliciter. Given that you, MM, have a taxonomy of which aspects of security a proposal would improve, I would like those aspects explicitly noted instead of just "this improves security". @@ -172,33 +172,33 @@ SYG: That is not the view of the Chrome team, as we have said in the past. -MM: I'm very glad we're going to be arguing that out. Until the Chrome team explains their bizarre perspective on security, I don't think everybody else should need to corrupt their use of the term. +MM: I'm very glad we're going to be arguing that out. Until the Chrome team explains their bizarre perspective on security, I don't think everybody else should need to corrupt their use of the term. -RPR: Okay. So what I'm hearing here is that this is a known disagreement. Do we need to resolve this in the context of this particular proposal? +RPR: Okay. So what I'm hearing here is that this is a known disagreement.
Do we need to resolve this in the context of this particular proposal? -MM: We do not. +MM: We do not. -RPR: Thank you. +RPR: Thank you. -BFS: So we have read-only collections, and we now have this to some extent with read-only TypedArrays. I am wondering if we should make a real taxonomy or protocol of some kind for this, and generalize how you lock down internal data. It seems like both this proposal and read-only collections are about locking down internal data, and we don't really have a way to allow users to hook into a protocol to do that. We also don't have well-documented invariants that we're trying to achieve when we do such a thing. So any future proposal might not match these two proposals, or they could go out of sync, which we heard a concern about earlier. I just think we need to generalize this, even if we keep the proposals separate, just so we have everything in order and nothing goes out of sync. That's one comment. +BFS: So we have read-only collections, and we now have this to some extent with read-only TypedArrays. I am wondering if we should make a real taxonomy or protocol of some kind for this, and generalize how you lock down internal data. It seems like both this proposal and read-only collections are about locking down internal data, and we don't really have a way to allow users to hook into a protocol to do that. We also don't have well-documented invariants that we're trying to achieve when we do such a thing. So any future proposal might not match these two proposals, or they could go out of sync, which we heard a concern about earlier. I just think we need to generalize this, even if we keep the proposals separate, just so we have everything in order and nothing goes out of sync. That's one comment.
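The escape hatch discussed above, where a bounded TypedArray still exposes the whole underlying buffer, is observable with today's semantics:

```javascript
// A consumer handed only a 4-byte window can recover the entire buffer today.
const buf = new ArrayBuffer(16);
new Uint8Array(buf).fill(0xff); // pretend bytes outside the window are secret

// Expose only bytes 4..7 of the buffer.
const window4 = new Uint8Array(buf, 4, 4);
console.log(window4.length); // 4: bounds-checked access, as intended

// ...but the view's prototype getter leaks the full ArrayBuffer:
const escaped = new Uint8Array(window4.buffer);
console.log(escaped.length); // 16: the whole buffer, "secret" bytes included
```

This is the behavior the fixed-view half of the proposal aims to remove. (`Object.freeze` is no help for the read-only half either: freezing a TypedArray that has elements throws a TypeError, since the indexed properties cannot be made non-writable.)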
-BFS: The other one is: I'm definitely in favor of not only allowing these to be created at allocation time. The ability to stream data into a buffer while it is mutable, and then mark it as immutable, is much cheaper than trying to do things like ropes at the application level. I know VMs can try to optimize that path, like SYG mentioned, but the more paths you add, it's not just VMs that have to deal with the complexity; it's also applications, and applications do have bugs. Those bugs can have real-world impact. So I would be against it if, at some future point, we turned this into a snapshot that allocates a copy instead of letting you transition an existing mutable buffer to immutable. That's all. +BFS: The other one is: I'm definitely in favor of not only allowing these to be created at allocation time. The ability to stream data into a buffer while it is mutable, and then mark it as immutable, is much cheaper than trying to do things like ropes at the application level. I know VMs can try to optimize that path, like SYG mentioned, but the more paths you add, it's not just VMs that have to deal with the complexity; it's also applications, and applications do have bugs. Those bugs can have real-world impact. So I would be against it if, at some future point, we turned this into a snapshot that allocates a copy instead of letting you transition an existing mutable buffer to immutable. That's all. DE: Yeah, I definitely agree with a lot of what BFS and SYG said. This is a frequently requested feature, for example in Node.js, and especially for the kind of case BFS described, where you want to allocate something, write to it, and then freeze it. This is a reason this proposal is quite different from the needs of records and tuples, because records and tuples have structural equality, and we want to keep equality reliable in JavaScript with fixed behavior.
So if we're freezing something in the middle, these structures are still going to have identity-based equality, whereas records and tuples have equality based on contents. So not everything can be unified, but I think it's a really good idea to follow what BFS suggested and think about this whole problem space. This is very closely related to the read-only collections proposal. I previously filed an issue on that proposal suggesting that we talk about ArrayBuffers, and that seemed to be viewed positively within the context of that proposal, so we should definitely decide on a particular factoring. I'm also a little concerned about the complexity of this space. I think once we have this taxonomy, it's important to think about which things we really want to create language features for, and it doesn't have to be everything at first; it can be some of the simpler parts. I definitely agree with SYG that making some of these paths too complex risks creating security issues. So I strongly support this proposal going to stage 1. It's a very important problem space for us to discuss, and in the course of stage 1 we can work out these issues. -JWK: There's nothing on the queue now. So, we have two separate problems. It seems like we have strong agreement on read-only for stage 1, but do you think fixed views are also an important case to think about? +JWK: There's nothing on the queue now. So, we have two separate problems. It seems like we have strong agreement on read-only for stage 1, but do you think fixed views are also an important case to think about? -SYG: I think for stage 1 you would certainly want to explore it. +SYG: I think for stage 1 you would certainly want to explore it. -JWK: Yeah, the limited view was requested in GitHub issues, and I think it _might_ be useful. Therefore I added it to the repo. +JWK: Yeah, the limited view was requested in GitHub issues, and I think it _might_ be useful. Therefore I added it to the repo.
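The "read-only view over internally mutable data" case can be roughly approximated today with a `Proxy` over a TypedArray, at the performance cost KM alludes to below. The `readOnlyView` helper here is a hypothetical userland sketch for illustration, not either of the proposed designs:

```javascript
// Userland approximation of a read-only TypedArray view: writes are rejected
// and the `.buffer` escape hatch is hidden. Illustrative sketch only.
function readOnlyView(ta) {
  return new Proxy(ta, {
    set() { throw new TypeError('read-only view'); },
    get(target, prop) {
      if (prop === 'buffer') return undefined; // hide the escape hatch
      const value = Reflect.get(target, prop, target);
      return typeof value === 'function' ? value.bind(target) : value;
    },
  });
}

const backing = new Uint8Array([1, 2, 3]);
const ro = readOnlyView(backing);
console.log(ro[0]);     // 1: reads pass through
backing[0] = 42;        // the owner can still mutate...
console.log(ro[0]);     // 42: ...and the view sees it (internal mutability)
console.log(ro.buffer); // undefined: no escape to the whole ArrayBuffer
try { ro[1] = 9; } catch (e) { console.log(e instanceof TypeError); } // true
```

A dedicated language feature could make this pattern both cheap and unforgeable, which a userland wrapper cannot guarantee.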
-BFS: I think that's something you can explore in stage 1. Okay, you can still ask for stage 1 right now. +BFS: I think that's something you can explore in stage 1. Okay, you can still ask for stage 1 right now. PHE: I support both of these moving to stage 1. As has been pointed out, they overlap with various proposals that have been mentioned in terms of addressing immutability, especially read-only APIs, and that's very important to us. But I do think that as part of stage 1 we're going to have to take a broader look at how all those pieces fit together. YSV: In a review of these proposals, we actually found fixed views pretty compelling. We had a couple of concerns about the read-only view in some of our internal discussions, but nothing that makes us concerned about stage 1. So we do see value in figuring this out during stage 1. -KM: I don't oppose stage 1, but my understanding is that a lot of this is predicated on proxies not being fast enough, and in JSC there is some work in progress on making proxies effectively as fast as optimized code. +KM: I don't oppose stage 1, but my understanding is that a lot of this is predicated on proxies not being fast enough, and in JSC there is some work in progress on making proxies effectively as fast as optimized code. -JWK: The proxy I mentioned is not the Proxy we have today. It's a new kind of dedicated object that masks the underlying ArrayBuffer, so it's not related to the Proxy for ordinary objects. +JWK: The proxy I mentioned is not the Proxy we have today. It's a new kind of dedicated object that masks the underlying ArrayBuffer, so it's not related to the Proxy for ordinary objects. KM: Sure, I get that, and you could re-implement this with that.
If you wanted to, it would just have a performance overhead today. So basically all I'm saying is: if we had a version of Proxy with the ability to inline your hooks without overhead, we would probably be against this proposal, because it would be obsolete in that world. @@ -214,7 +214,7 @@ JWK: Ok, I only got the possible solution, so maybe I can make the problem a mor JHD: Yeah, this was brought up yesterday on another proposal, but in general the purpose of stage 1 is for us to explore a problem, and it's not until stage 2 that we really focus on the solution. This is stage 1, but it's useful to think about it in terms of the problem. -DE: I want to disagree with that, JHD. I think it's quite normal to have stage 1 proposals presented in this kind of form. +DE: I want to disagree with that, JHD. I think it's quite normal to have stage 1 proposals presented in this kind of form. JHD: It's typical, but it's explicitly in the process document, and we've spoken about it in committee before, and you didn't disagree with the same request for a different proposal yesterday, so I'm confused. @@ -222,11 +222,11 @@ DE: The reason is because I think the presentation made the problem quite clear. JHD: It's not clear to me. -DE: Okay. Could you clarify the nature of your lack of understanding of the problem? +DE: Okay. Could you clarify the nature of your lack of understanding of the problem? JHD: I would like it to be phrased in terms of the problem so that I can gain an understanding. I cannot tell you why I don't know what I don't know. -RPR: Okay, could this be worked out after stage 1? +RPR: Okay, could this be worked out after stage 1? JHD: Yes. The reason I'm asking is because if it goes to stage 1 I would update the proposals table.
and typically we try to have those include a problem statement, so I'm hoping to have that clarified. Yes, it can be worked out separately; I just want to have that request on the record. @@ -234,11 +234,11 @@ RPR: Okay good. JWK: So it seems like we have stage 1? -RPR: Yeah, let's just ask. You've got two questions here: is this actually two independent proposals, or is it all in one? Are they two independent proposals? Okay. Well, let's go through them one by one then. +RPR: Yeah, let's just ask. You've got two questions here: is this actually two independent proposals, or is it all in one? Are they two independent proposals? Okay. Well, let's go through them one by one then. -RPR: So we will start by asking: are there any objections to read-only advancing to stage 1? +RPR: So we will start by asking: are there any objections to read-only advancing to stage 1? -DE: Sorry, I think these problems should be considered jointly for stage 1. There's a lot to work out, and I think we should be working this out together. +DE: Sorry, I think these problems should be considered jointly for stage 1. There's a lot to work out, and I think we should be working this out together. RPR: Okay. All right then. There's more support from MF as well for progressing these together. So the question is: are there any objections to advancing both of these to stage 1? @@ -250,33 +250,33 @@ JHD: Thank you. RPR: Okay, so I'm not hearing any other objections. -SYG: I don't have time to get on the queue right now, and I don't have an objection.
I would like to note for the record that, as one proposal, I currently find the read-only use case more compelling than the fixed-view use case. It is good to explore both during stage 1, but I want it noted that I would like us to be open to narrowing the scope of the problem space in the future if either use case is found to be less compelling, and the champion themselves have said that maybe the fixed-view use case is not as compelling; it simply came as a request from GitHub. +SYG: I don't have time to get on the queue right now, and I don't have an objection. I would like to note for the record that, as one proposal, I currently find the read-only use case more compelling than the fixed-view use case. It is good to explore both during stage 1, but I want it noted that I would like us to be open to narrowing the scope of the problem space in the future if either use case is found to be less compelling, and the champion themselves have said that maybe the fixed-view use case is not as compelling; it simply came as a request from GitHub. -JWK: Yeah. +JWK: Yeah. RPR: Okay, that's noted from Shu. Are there any other objections to stage 1? -RPR: No? Then congratulations, Jack, you have stage 1. +RPR: No? Then congratulations, Jack, you have stage 1. JWK: Thanks. ### Conclusion/Resolution -* Both proposals progress to stage 1 -* Two proposals will be merged +- Both proposals progress to stage 1 +- Two proposals will be merged RPR: I just want to add one extra note on the previous topic, the NVC training proposal by DMP.
Just wanted to say that the chair group supports the NVC training proposal
+
## Intl Enumeration API update
+
Presenter: Frank Yung-fong Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-enumeration)
- [slides](https://docs.google.com/presentation/d/1LLuJJvGsppQfFf0eCBBcplub_E7NY4EdbSVeE2duyoA/edit#slide=id.g96c285a300_1_0)

+FYT: Okay, so my name is Frank Tang, from Google, and I work on internationalization and ECMA-402. I came here yesterday to talk about three proposals; today's is the fourth one. Different from the other three, we are not asking for stage advancement at this particular meeting, but there are some requests here for this Intl Enumeration API, currently at stage 2. Just to give you an update: the charter of the Intl Enumeration API is to be able to list the supported values for options in the pre-existing ECMA-402 APIs. Some ECMA-402 APIs take options, but from the caller's point of view it is a little hard to figure out which values of those options are supported and which are not. So in this particular proposal we try to let the software programmer get the supported values in a particular implementation. To give a little historical background: it was originally motivated during the Temporal proposal by the discussion of time zone support, which encouraged us to form this proposal. Of course time zones are just one of the values; people figured out there are some other options whose values it is also necessary to be able to return. It was advanced to stage 1 during the June meeting last year and advanced to stage 2 at the September meeting last year. One mistake I made, and I'm sorry for that: when we advanced to stage 2, somehow I didn't get stage 3 reviewers signed up. Maybe I just didn't realize that reviewers should be requested at stage 2 time, and I thought I could wait for stage 3; I'm really sorry. If anyone did sign up as a stage 3 reviewer, I cannot find them in the meeting notes, so basically I need someone to sign up as a stage 3 reviewer right now. I also gave an update at the November meeting. The other thing I want to emphasize: one of the reasons we wanted to advance to stage 2 is that there were some concerns about fingerprinting and privacy, and we believed that advancing to stage 2 would help us get more exposure and feedback in that area, which I'll report on in the next slide. At the March meeting (this is not TC39, this is the monthly ECMA-402 TG2 meeting), as many people may know, TG2 had agreed on three criteria that we would like a proposal to satisfy to support stage 2. Because that agreement happened after the September meeting, our chair tried to do the process right, so we discussed and reaffirmed that this proposal did fit those three criteria, and no objection was brought up during that meeting. Also around that time, I think in February, Mozilla reported their analysis of the fingerprinting question, published a short paper, and shared it around. The Apple delegates at that time felt they needed to take a deeper look at it. So they took a month, and at the April meeting Apple expressly agreed with the summary that Mozilla had put together; Apple agreed with the Mozilla folks' analysis, and we believe there are no more privacy and fingerprinting concerns regarding this proposal within the ECMA-402 subcommittee. Here is what Mozilla reported on February 4th, 2021; there's a link, and you can read the report, which is about the privacy implications of the Intl Enumeration API, with Mozilla's recommendation. It is long, and I'm not going to read it here line by line, but to quote the summary: the summary statement by Mozilla is that they do not believe the enumeration proposal opens up any new fingerprinting vector, and thus they do not believe it should be blocked on that ground. As I mentioned earlier, they agreed with that privacy analysis at that meeting. Therefore we believe that at least the fingerprinting concern raised during stage 1 has been resolved with this advancement, so we believe this shouldn't have any fingerprinting issue. If anyone has additional concerns, you're free to express them, but unless someone reopens that issue, I believe it has been addressed.

-
-FYT: Okay, so, my name is Frank Tang from Google and what convey a internet resolutions lies HSI like 142 I came here last yesterday talked of three proposal. Today's the fourth one to talk about different from the other three. We are not asking for stage an advancement in this particular meeting, but there are some requests here to this International until you nomination. Epi currently seeing stage. To I just give you update the charter for the Intl. I nominate API is to be able to list the support has value for option in the pre-existing magma for to API, which means some of the ecma402I have option but is some of the value of the option is from the color point of view is a little harder figured out which value are supported which are not supported so in this This particular proposal we try to be able to let the software program be able to get the support of value in a particular implementation. So give a little historical background. So it was originally motivated during the Temporal proposal to discuss of a time zone support which actually encouraged to form this a proposal.
Of course, other kinds of just one of the values helped the redress there's some other values or some of the option that people figured out that is also necessary to be able to return it was Advanced to stage 1 during June meeting last year and advanced to stage 2 in September meeting last year, one mistake. I made I'm sorry for that and that time when we Advanced stage 2 is so cool events to stay too and somehow I didn't get state three reviews. Viewers tied up or maybe we ask I just didn't realize that for stage 2 time for and I say, okay I can wait for stage 3, so I really sorry. But so if anyone signed up for stage 3 reviewers, I cannot find them from the meeting notes. So basically I need someone to sign up for safety with yours right now. I gave the update in November meeting and all. so the other thing I want to emphasize that one of the reason we want to advance to State 2 is there are some concern about fingerprinting and privacy and we believe in the time we advanced stage 2 is however help us to get more exposure and to get feedback about the area which all report on the next slide about that. in the March meeting and this is the not TC39. This is in the ecma402. A monthly meeting in the March. We as many people may know that in agreement for TG2 to we have establish three area that we would like to qualify a proposal they support for stage 2 and but that because that happened after the September meeting. So I think our chair tried to do the process rise, so we discuss and reaffirm whether this we discussand reaffirmed this proposal did fit that three criteria pyre prior our civil code to implement in usual and and brought appear during that meeting, and also around that time Mozilla also reported their analysis of the fingerprinting thing. I think in February, and published a short paper, and share around an apple dialect in that time feel they need to take a deeper look at it. 
So they take a month and the April meeting Apple expressly agreed with summary that Mozilla put down in the I just a apple agree with analysis of what the Mozilla folks put it together and we believe they're no more privacy and fingerprint concerns regarding this proposal within the Ecma for to subcommittee and here's the thing that we got reported from Mozilla in fabric fourth 2021. Here's a link you You can read a report for short. The title is Intl enumeration APIprivacy inculcation Mozilla has recommendation here. You can read I start out long, but I'm not go do that you ask you to read it here line by line, but just call the summary the summary statement by mozilla does not believe any nomination proposal open up any new fingerprint vector and does we do not believe it should be blocked on that ground. So as I mentioned earlier DMP agreed that privacy analysis and that meeting therefore we believe. Sorry. I can't come back here and therefore we leave at least in the concern of fingerprinting. You raised during the stage 1 time. And when with advancement it got resolved, so we believe this shouldn't have any fingerprint issue if anyone has additional concern you're free to express it. But unless someone reopens that issue, I believe that it is being addressed. - -FYT: So again, so what are the scope of the this API? Is that as I mentioned we try to cover what is already Express is already in option in a coma for two API in different Intl object and try to be able to programmatically to figure out what is supported in that particular browser implementation that including calendar and collation and currency and also numbering system time times on number system unit are exposed to listed in ecma402 text. We kind of have different kind of treatment one is Qaeda is my remember, right? I think the number assistance a you need to at least support o thing and unit basic state. 
That is exactly the state either support for browser all order for have no clear set return us back tax that you know, which is a set. So we try to have a way to allowed developer to figure out in that The computation what is it supported? So now we try to limit the API surface. So we try to for economics reason not to show a lot of different method. I think the originally when in the stage 0 kind there are many have several different methods and I forgot when exactly the time. we change it at least the so far. I think after cease to we just support one method which is not from any other Intl object but created from the interrupted to implement a method of supported values of which is very close to the supported locales of Intl different from that one. That does not accept a Locale but set key and optionally read except options our explains that part and the key is apply for all the different things. You have calendar, currency, number system, time zone and unit and those property of those key are actually property near in various objects and happen in number format some happening data format and some happened in more than one format. And the expectation is that we will return a SupportedValues prototype object which has a iterator method which mean you can iterate through it. +FYT: So again, what is the scope of this API? As I mentioned, we try to cover what is already expressed as an option in the ECMA-402 APIs, in the different Intl objects, and to make it possible to programmatically figure out what is supported in a particular browser implementation. That includes calendar, collation, currency, and also numbering system, time zone, and unit, all of which are listed in ECMA-402 text. They get slightly different treatment: if I remember right, for numbering systems and units there is a basic set that every implementation needs to support at a minimum, while for the others there is no clearly specified set to return. So we try to have a way to allow the developer to figure out, for that implementation, what is supported. We also try to limit the API surface: for ergonomic reasons we do not want to expose a lot of different methods. Originally, around stage 0, there were several different methods; I forget exactly when we changed that, but at least since stage 2 we support just one method, which does not hang off any other Intl object but is a method on Intl itself, supportedValuesOf, which is very close in spirit to the supportedLocalesOf methods. Different from those, it does not accept a locale; instead it takes a key and, optionally, options. The key covers all the different things: you have calendar, currency, numbering system, time zone, and unit, and those keys are actually option properties that appear in various objects, some in NumberFormat, some in DateTimeFormat, and some in more than one. The expectation is that it will return a SupportedValues prototype object which has an iterator method, which means you can iterate through it.

FYT: So here's an example of what it looks like. You can say: I want to iterate through the calendars of Intl, that is, which calendar systems this Intl implementation supports, and it will iterate through them. We return basically an iterator, similar to an array but more flexible, and you can iterate through it, and so on and so forth. In particular, for time zone it can also take an option with a region code, which allows you to list not only all the supported time zones, but only the time zones supported for that particular region. So that is the spec text here, and I won't read through the spec text since we're not going to ask for stage advancement.
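For reference, the single-method shape FYT describes eventually shipped as `Intl.supportedValuesOf(key)`. A minimal usage sketch, with the caveat that shipped engines return a sorted array rather than the iterator object discussed on the slides, and that the timeZone region option did not make it into the shipped API:

```javascript
// Enumerating supported option values, per the API shape described above.
// Valid keys: "calendar", "collation", "currency", "numberingSystem",
// "timeZone", "unit". Current engines return a sorted array of strings.
const calendars = Intl.supportedValuesOf("calendar");
const currencies = Intl.supportedValuesOf("currency");

console.log(calendars.includes("gregory")); // true
console.log(currencies.includes("USD"));    // true

// The array is iterable, e.g. to populate a time zone picker:
const europeanZones = [];
for (const tz of Intl.supportedValuesOf("timeZone")) {
  if (tz.startsWith("Europe/")) europeanZones.push(tz);
}
console.log(europeanZones.includes("Europe/Paris")); // true
```

An unknown key throws a `RangeError`, so feature detection is simply checking `typeof Intl.supportedValuesOf === "function"`.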
This was just to show you that we have spec text, which we are still working on. I plan to bring this up for stage 3 advancement in May, if we can reach agreement at the ECMA-402 monthly meeting. So if you have any issue, or any action item you would like me to work on, you can tell me today.

@@ -290,7 +290,7 @@ SFC: I mean, I was hoping to get some more input from others on the committee, b

SFC: Okay, my next agenda item: one of the main concerns that has come up from some stakeholders when we discussed this in TG2 is that the main motivating use case behind this proposal is pickers. Although I would argue it's not the only use case, certainly one of the main motivating use cases is "I have a picker", like a timezone picker, etc., and one argument that has been made in TG2 is that if we want to support a picker, we should do that in HTML, not in ECMAScript. I'm of the position that this is in scope for ECMAScript, because this is the foundational data source that's required to implement a date picker. This data is required for HTML, and it's required if any userland library simply wanted to create a picker. This is the data source for that picker, and I think the way that FYT has presented this proposal makes it very appropriate for use by a picker. I just wanted to get an opinion from this body to verify whether it's a legitimate thing to introduce this API into ECMAScript first, as opposed to going through the W3C process of introducing this directly into HTML without an ECMAScript side.

-DE: If we if we want to go ahead with exposing this information, I think it does make sense for this to be part of Intl rather than HTML. If it were a couple to a UI user interface feature, that would be quite unfortunate, but even putting it in HTML, otherwise...
I don't know maybe Intl could have been done in HTML or CSS rather than JavaScript, but at this point we have a body of experts producing these and I think it's all go here and didn't make sense to continue doing it at the js level. +DE: If we want to go ahead with exposing this information, I think it does make sense for this to be part of Intl rather than HTML. If it were coupled to a user interface feature, that would be quite unfortunate. I don't know, maybe Intl could have been done in HTML or CSS rather than JavaScript, but at this point we have a body of experts producing these APIs, and I think it makes sense to continue doing that at the JS level.

DE: To combine this with my next topic: I do like the API design of this proposal. However, the fingerprinting issue has been raised, and I think at this point stage 3 should be blocked on explicit statements of support, probably from Mozilla and WebKit, to ensure that they have analyzed this and come to the conclusion that it's not going to be a blocker for them to later implement this feature.

@@ -298,17 +298,17 @@ FYT: Sorry, I don't quite get that. I did show you that they internal fingerprin

DE: You know, it would be nice for them to clarify that to this committee, just to sign off on what you just said.

-FYT: I put the web page there. You can click that link there. It's not published by me they publish it in February and there's a link you can click, and there was something written from webkit. 
+FYT: I put the web page link there; you can click it. It's not published by me, they published it in February, and there's a link you can click, and there was something written from WebKit.

-SFC: We have explicit statements of support from both WebKit and Mozilla (about fingerprinting) in the TG2 meeting two weeks ago, which is recorded in the notes.
+SFC: We have explicit statements of support from both WebKit and Mozilla (about fingerprinting) in the TG2 meeting two weeks ago, which is recorded in the notes.

-FYT: Yeah. There are two apple delegates. I'm not sure as from Apple and I'll show how they were internally but to person from Apple delegate in I'm aboard to agree with pound that 
+FYT: Yeah. There are two Apple delegates; I'm not sure how they handled it internally, but two people from the Apple delegation were on board to agree with that.

-DE: Sorry for my confusion here. No problem to make sure that this was yeah, I'll take those notes in. Sorry about just want to clarify they have no explicit statement state. 
+DE: Sorry for my confusion here. No problem; I'll take those notes in. I just want to clarify: they have made no explicit statement beyond that.

-FYT: They're going to implement this proposal. They have the explicit statements say it is not a fingerprinting concern in the degree of the Exposed on that document. They say there are no additional fingerprinting vector and that's their conclusion. Just want to make sure I didn't miss the group that they this is a summary they say and they didn't say more than this. 
+FYT: Not that they're going to implement this proposal; they have explicit statements saying it is not a fingerprinting concern to the degree exposed in that document. They say there are no additional fingerprinting vectors, and that's their conclusion. I just want to make sure I don't misrepresent the group: this is the summary they gave, and they didn't say more than this.

-YSV: Hi, I can back up that we did review the privacy concerns and we don't consider that to be the blocking issue there. I don't know if the needle’s been moved yet on making the argument around the use cases for introducing this. But I believe this is just an update, so we don’t need to get into that.
+YSV: Hi, I can back up that we did review the privacy concerns and we don't consider that to be the blocking issue there. I don't know if the needle’s been moved yet on making the argument around the use cases for introducing this. But I believe this is just an update, so we don’t need to get into that.

FYT: Yeah, this is just an update, yeah.

@@ -322,11 +322,11 @@ FYT: That's an April meeting. I don't think SFC may not have been put it up on t

SFC: Yeah, I'll publish the official notes soon, but you can find the notes in the Google Doc in the drive folder.

-FYT: Yeah, usual. Usually they get published like couple of days before the next meeting. So I think that's one of the reasons SFC hasn't put them up. 
+FYT: Yeah, usually they get published a couple of days before the next meeting. So I think that's one of the reasons SFC hasn't put them up.

-SFC: Yeah, I haven't put it up quite yet because I wanted delegates to have the chance to review it. 
+SFC: Yeah, I haven't put it up quite yet because I wanted delegates to have the chance to review it.

-FYT: the most knows about Mozilla has already put up just the apple one is not not not there the Mozilla one it seemed March meeting. Then Apple took one additional month to review whatever Mozilla folks wrote so it took longer than what happened the April meeting. 
+FYT: The notes with Mozilla's statement have already been put up; just the Apple one is not there yet. The Mozilla one was at the March meeting; then Apple took one additional month to review what the Mozilla folks wrote, so it took longer, and that happened at the April meeting.

MS: Yes, so I wasn't at that April meeting, but I'm not going to dispute or deny that we reviewed it; I just wasn't one of the ones who could validate that.

@@ -336,24 +336,23 @@ MLS: I'm from Apple. Okay? so if it's in the minutes from one of our one of the

FYT: I apologize, I'm not a hundred percent sure they're from WebKit; I know they're from Apple.

-SFC: It was MCM.
+SFC: It was MCM.

MLS: Yeah, MCM is on WebKit, yeah. I trust him on that. There's another Apple engineer as well.

FYT: I believe the two people both said that, yeah. Okay, so I assume that topic is no longer an issue. Are there any other issues to talk about, or any requests, before I take this back to work on over the month? If not, then my request is: could some people sign up as stage 3 reviewers?

-RPR: okay, so we're not getting any immediate volunteers. Is this essential for now FYT? 
+RPR: Okay, so we're not getting any immediate volunteers. Is this essential for now, FYT?

-FYT: Yeah, if I come back as stage three, 
+FYT: Yeah, for when I come back asking for stage 3.

-RGN: I'm planning to review it. But as an editor of 402, I'm not sure that's sufficient. 
+RGN: I'm planning to review it. But as an editor of 402, I'm not sure that's sufficient.

JHD: I'll review it

-
### Conclusion/Resolution

-* Richard Gibson & JHD will review
+- Richard Gibson & JHD will review

## Reviewers for the Object.hasOwn proposal

@@ -361,17 +360,17 @@ RPR: So just before we go to lunch, we've got another similar request. So that a

JHD: Object.hasOwn yes, I would love to.

-RPR: Leo Balter has volunteered and Felienne as well. 
+RPR: Leo Balter has volunteered and Felienne as well.

### Conclusion/Resolution

-* JHD & Leo Balter & Felienne Hermans will review
+- JHD & Leo Balter & Felienne Hermans will review

## Isolated Realms update

YSV: Okay, we're at approximately 80 people. We had hoped that we could avoid meeting tomorrow, so that you would have the day off and we wouldn't have to meet again, especially at the odd hours that everybody has been getting up at or staying up until. However, we will very likely overrun today with the topics being discussed. The question for the committee is: do we want to tack on another 30 minutes to this presentation, or does the committee want to meet tomorrow, early in the morning for some of you, at our normal starting time of 10 a.m. Eastern time? Or, if we want to do something else, do people have any quick comments on that? You can also post on Discord or IRC; I'll be monitoring that to see what we want to do about this. So please post your thoughts on IRC and I'll take a look. Then let's get started with Leo's topic, which is Isolated Realms. You have the floor, please go ahead.

-LEO: Okay, thanks. Yes, and just for clarification and I have a hard blocker for me tomorrow, so I'm not going to be able to join. I hope we can fix the realms updates today. All right, so we believe you can see my screen right but the Realms cover. Yes, that's right. Thanks. So. Yeah, this is a Realms update with the callable monitoring API. It's a new update on the API API. I'm going to give the word to Karine, But before that I just want to be sure like the primary goals still remain, like we want. but we run for the for this is a new global object with new setting of intrinsics with a separate module graph that still preserves synchronous communication between both Realms and for all to provide a proper mechanism to control execution of a program. I think CP can join in for now
+LEO: Okay, thanks. Yes, just for clarification: I have a hard blocker tomorrow, so I'm not going to be able to join; I hope we can finish the Realms update today. All right, I believe you can see my screen with the Realms cover slide. Yes, that's right, thanks. So, yeah, this is a Realms update with the callable boundary API; it's a new update on the API. I'm going to give the word to CP, but before that I just want to be sure the primary goals still remain: what we want from this is a new global object with a new set of intrinsics and a separate module graph, which still preserves synchronous communication between both realms, and to provide a proper mechanism to control the execution of a program.
I think CP can join in for now.

CP: Yeah, before you continue I just want to give a little bit of context, because we have been standing still for a long time, and part of the discussion with implementers has been around two main topics. One is related to the fact that you have multiple sets of intrinsics: in the previous API they were connected together, introducing what we call identity discontinuity, which is what you see between iframes and the main window, and so on. We have been trying to work with the implementers, especially with Google, on resolving this issue. Basically the footgun issue, as we call it, is the problem that developers will have a hard time understanding what's going on with the identity of objects that come from different realms. Finding a solution for that has been the priority for us. The second issue is about whether or not a realm is a security boundary: we continue saying it is not a security boundary, and that's a secondary part where we have to continue engaging with vendors and implementers. So this update in particular focuses on the separation between realms, not having the object graphs of the realms be intertwined. So Leo will provide an update on this API, which we believe supports the use cases but also supports this separation.

@@ -385,7 +384,7 @@ WH: So if `this` is an object then it will fail?

CP: If it is a callable then it will not, yeah.

-WH: When is the `this` object callable? 
+WH: When is the `this` object callable?

CP: You can “apply” on it, because whatever you call on it, you will get wrapped.

@@ -393,25 +392,25 @@ LEO: Yeah. Okay. You still don't have the access to this, is just you just creat

YSV: Sorry, can I jump in for a second? We need more note takers; can we please have a volunteer to help with the notes? I can help with the notes. Great.
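The boundary rule CP and WH are discussing (primitives pass through, callables are wrapped, other objects fail) can be illustrated with a small stand-alone simulation. `getWrappedValue` below is a hypothetical helper mimicking the behavior described in the discussion, not the spec's actual algorithm or the real Realm API:

```javascript
// Illustrative simulation of the callable-boundary rule described above:
// primitives cross unchanged, callables are wrapped in a fresh function,
// and any other object is rejected with a TypeError.
function getWrappedValue(value) {
  if (typeof value === "function") {
    // A fresh "wrapped function": calling it invokes the function from
    // the other side, and arguments/results are themselves filtered
    // through getWrappedValue.
    return (...args) => getWrappedValue(value(...args.map(getWrappedValue)));
  }
  if (typeof value === "object" && value !== null) {
    throw new TypeError("non-callable objects cannot cross the boundary");
  }
  return value; // primitives (numbers, strings, undefined, null, ...) pass
}

getWrappedValue(42);                          // 42: primitives pass through
const wrapped = getWrappedValue((x) => x + 1); // callables get wrapped
wrapped(2);                                    // 3
// getWrappedValue({});                        // would throw a TypeError
```

This is why WH's question matters: a wrapped value is always either a primitive or a callable, so calling through it never hands a foreign object graph to the other realm.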
-
-LEO: Also in this area here where I have This is a little bit trickier where I have a function and I can see this error here from the other function because I in this case here dosomething at the last line is sending another function to the other realm. Sending a callable value is okay. It's going to be a wrapped in the other realm and when I try to call this function there it will fail there because that's when wrapped is going to be called. And it's like, yeah I I'm trying to receive object. There's not callable. So the failure still goes there. It's too tightly or? there. and here's the same way it also applies to async functions, but I'm using a even a little bit more General example of trying to wrap the array. That's when you actually try to bring in Constructors. Array will return an object, an array object and And if you try to the wrapped array It will fail it would have a type error exception. +LEO: Also, this area here is a little bit trickier: I have a function, and I can see this error here from the other function, because in this case `doSomething` at the last line is sending another function to the other realm. Sending a callable value is okay; it's going to be wrapped in the other realm, and when I try to call that function there, it will fail there, because that's when the wrapped function is called, and it's like: I'm trying to receive an object that's not callable, so the failure still happens there. In the same way this also applies to async functions, but here I'm using an even more general example of trying to wrap Array. That's what happens when you try to bring in constructors: Array will return an object, an array object, and if you try to call the wrapped Array, it will fail with a TypeError exception.

WH: Can you explain the previous slide? You get the type error, then what happens?

-LEO: I do have the type error in the realm like in this case.
I'm doing a try/catch. I'm actually saving I'm actually capturing these exception so do something here. We see if this error function. These are a function when called it receives the argument array there is going to be named into the parameter wrapped array and in this I try to call wrapped array fails because it tries to complete in an object value or returning a new array it throws a type error because wrapped array is in a wrapped function is type object. and it throws a type error because you cannot completes into a object non-callable. Yeah. +LEO: I do get the TypeError in the realm, as in this case: I'm doing a try/catch, so I'm actually capturing the exception in `doSomething` here. We see this error function: it is a function that, when called, receives the argument Array, which is bound to the parameter `wrappedArray`, and when I try to call `wrappedArray` it fails, because it tries to complete with an object value, returning a new array. It throws a TypeError because `wrappedArray` is a wrapped function whose result is of type Object, and you cannot complete the call with a non-callable object. Yeah.

-CP: I think this slide is mostly around the caller which in this case is the realm, the caller of wrapped function. Array will get an error that fits into that particular realm. So the identity of that error is from that realm, it doesn't leak even though the error happens on the other side when it returns something back to the realm that is not a primitive value or callable. +CP: I think this slide is mostly about the caller, which in this case is the realm calling the wrapped function: the caller of wrapped Array will get an error that belongs to that particular realm. So the identity of that error is from that realm; it doesn't leak, even though the error happens on the other side, when something that is not a primitive value or callable is returned back to the realm.
-LEO: Yeah, and we think these slides I'm actually trying to show like where errors should happen within the proposed spec. Can we go to the next slide or yes, please? Okay. Thanks. Yeah, so here I have an Abrupt completion wrapping where I do have, In the case of like I just trying to evaluate throw new error. Here is for me by a good illustration where we have like, I'm trying to throw error just a generic error, but it's in this other realm. I don't have the identity of the error object from the other realm. I still have type error. I don't I still doesn't share the identity the API doesn't provide that directly and one important thing of about the wrapped functions they want to carry properties if have like if it tried to send a function that does have a property secret when I evaluate that I don't see that property. The wrap function is just like a new function. It's not an object that will get to call the other connected function in the other realm, but it does not look or observe any any extra properties of the function or anything. I should be using object has own here, but unfortunately the slides were made before the presentation yesterday. I would love to use object dot has own but let's move to the next Slide. the Realm prototype import value. I'm sorry about my audio should be clear here. So, what is it? We do have an importValue that is analogous to Dynamic import and the other should be different, but it kind of counteracts with the evaluate because the evaluates to depends on CSP relaxing like the unsafe eval. Importvalue is quite analogous to a dynamic import, but you actually capture a value from that the imported module name space. You don't get a binding you can totally don't get a dynamic binding or anything. You just capture value and also the values that are resolved they are dependent on going through this get wrapped value in this case. You can only receive primitive values or Callable objects in the callable objects will be wrapped. 
+LEO: Yeah, and we think these slides I'm actually trying to show like where errors should happen within the proposed spec. Can we go to the next slide or yes, please? Okay. Thanks. Yeah, so here I have an Abrupt completion wrapping where I do have, In the case of like I just trying to evaluate throw new error. Here is for me by a good illustration where we have like, I'm trying to throw error just a generic error, but it's in this other realm. I don't have the identity of the error object from the other realm. I still have type error. I don't I still doesn't share the identity the API doesn't provide that directly and one important thing of about the wrapped functions they want to carry properties if have like if it tried to send a function that does have a property secret when I evaluate that I don't see that property. The wrap function is just like a new function. It's not an object that will get to call the other connected function in the other realm, but it does not look or observe any any extra properties of the function or anything. I should be using object has own here, but unfortunately the slides were made before the presentation yesterday. I would love to use object dot has own but let's move to the next Slide. the Realm prototype import value. I'm sorry about my audio should be clear here. So, what is it? We do have an importValue that is analogous to Dynamic import and the other should be different, but it kind of counteracts with the evaluate because the evaluates to depends on CSP relaxing like the unsafe eval. Importvalue is quite analogous to a dynamic import, but you actually capture a value from that the imported module name space. You don't get a binding you can totally don't get a dynamic binding or anything. You just capture value and also the values that are resolved they are dependent on going through this get wrapped value in this case. You can only receive primitive values or Callable objects in the callable objects will be wrapped.
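[Editor's note] LEO's "secret property" point — a wrapped function forwards calls but is a brand-new function object, so own properties of the original never show through — can be illustrated with a small sketch. The `wrapFunction` helper below is hypothetical (userland, not the proposal's exotic object), and `Object.hasOwn` is the check LEO says he would have used on the slide:

```javascript
// A wrapped function forwards the call, but copies no properties from the
// target, so an own property like `secret` does not cross the boundary.
function wrapFunction(target) {
  const wrapped = (...args) => target(...args); // call is forwarded
  return wrapped; // nothing is copied from `target`
}

function greet() { return "hi"; }
greet.secret = "do not leak";

const wrappedGreet = wrapFunction(greet);
wrappedGreet();                        // forwards to the original, returns "hi"
Object.hasOwn(wrappedGreet, "secret"); // false: the property is not observable
```

The real wrapped function exotic additionally lives in the other realm and coerces values at the boundary; this sketch only shows the property-isolation aspect.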
LEO: This quick example here shows that I have a specifier and and then I have a binding name some and the value of sum will be the one that I receive in this case as it is a function. I received the wrapped function exotic object that's quite similar to what you would have with evaluate, but the difference here is imported. We have the aspects of dynamic import and Import in general. Caching evaluated modules the successful evaluated modules. You still can run this another more times. You still have the aspect here where you need an async function for this code injection, but it's one thing that is a good trade-off for us. At least for us I mean the representing that the Champions here. The other parts of the communication there is this to remain synchronous. the module specifier and exported name are both required right now. So in this case here. I can have some insight code that actually my to import more modules who anything even if I don't care about the names, but I assume someone using a realm can just wrap that. Here's an example ahead with the test framework that I just want something that like exports and run tests. and I'll I'll scanning for Test code for userland most of the time test code I don't need to require like something to be exported by the name to be exported. That's why our for realms it’s still okay to have something. It seems OK we considered other options for this. This is in my opinion an okay option. and in the future when we discuss more further on the module blocks we can consider importValue having a module block instead of specifier. I think that's a very good addition for using Realms but it also depends on the module blocks advancing.

-LEO: Yeah, we have some caveats. I mentioned evaluate is subject to CSP directives as unsafe for important values also subject to it CSP directives their different as default Source functions are never unwrapped. There is no unwrapping, every evaluation wrap squabbles into a new wrabbed function exotic. That means if you transfer function here and they are all the time you also create a wrapped function exotic on top. You don't, There is no unwrapping of it. Wrap function is exotics don't have a construct and these are not going to be connected. It's just a call internal also the function Exotics the call they want coerced these argument to object. This is done in regular functions call. And wrap function exotics is this argument is also subject to get wrapped value that that's one the previous questions from WH. Going through after the cave. He adds the resolutions that I believe the current proposal might be limited on cross from object exotics, but still enables a proper virtualization mechanism as to provides enough tools to implement membranes on top and because we have these wrappers audit functions enabling crossrealms callbacks in either direction. we do have a proof of concept implementation from curries of membranes framework on top of these Realms API. and we can talk about it the The current status that we that we have rendered spec. We do have the explainer updated. We've got some SES feedback with some homework that we are working on and we do have some initial implementers feedback. We still work in progress. We want to reach out to implementers for more feedback. There is the TAG review that was ongoing. This proposal is to bring this new format of the proposals to be going back to the TAG for any necessary for reviews. And we have this proof of concept membrane on top we can extend about it, we can talk can talk about it. I really want to get this proposal to request advancement to stage three in the next meeting in May 25th. So please if you have anything that you feel that it's important to be solved by then. Let's talk because I think we're in a very good direction here. One of the things that is important about the membranes just to make JHD has raised some concerns about this API as because identity discontinuity. Yes. It's already a reality today. We already have that we've membranes framework and but with the membranes framework CP was able to reproduce some object like comparisons. That's to bring that aspect back through userland code but I'd say that's like how much more userland up to code through a membrane implementation. It's not provided by the API and that's it. Let's open for questions. I'm going to try to bring the tcq here.
+LEO: Yeah, we have some caveats. I mentioned evaluate is subject to CSP directives as unsafe for important values also subject to it CSP directives their different as default Source functions are never unwrapped. There is no unwrapping, every evaluation wrap squabbles into a new wrabbed function exotic. That means if you transfer function here and they are all the time you also create a wrapped function exotic on top. You don't, There is no unwrapping of it. Wrap function is exotics don't have a construct and these are not going to be connected. It's just a call internal also the function Exotics the call they want coerced these argument to object. This is done in regular functions call. And wrap function exotics is this argument is also subject to get wrapped value that that's one the previous questions from WH. Going through after the cave. He adds the resolutions that I believe the current proposal might be limited on cross from object exotics, but still enables a proper virtualization mechanism as to provides enough tools to implement membranes on top and because we have these wrappers audit functions enabling crossrealms callbacks in either direction. we do have a proof of concept implementation from curries of membranes framework on top of these Realms API. and we can talk about it the The current status that we that we have rendered spec. We do have the explainer updated. We've got some SES feedback with some homework that we are working on and we do have some initial implementers feedback. We still work in progress. We want to reach out to implementers for more feedback. There is the TAG review that was ongoing. This proposal is to bring this new format of the proposals to be going back to the TAG for any necessary for reviews. And we have this proof of concept membrane on top we can extend about it, we can talk can talk about it. I really want to get this proposal to request advancement to stage three in the next meeting in May 25th. So please if you have anything that you feel that it's important to be solved by then. Let's talk because I think we're in a very good direction here. One of the things that is important about the membranes just to make JHD has raised some concerns about this API as because identity discontinuity. Yes. It's already a reality today. We already have that we've membranes framework and but with the membranes framework CP was able to reproduce some object like comparisons. That's to bring that aspect back through userland code but I'd say that's like how much more userland up to code through a membrane implementation. It's not provided by the API and that's it. Let's open for questions. I'm going to try to bring the tcq here.

-YSV: All right, first up. We have a question or topic from JHD which I believe you just covered JHD. Do you want to speak about the intentional discontinuity?
+YSV: All right, first up. We have a question or topic from JHD which I believe you just covered JHD. Do you want to speak about the intentional discontinuity?

JHD: Yes. It is totally fine with me to say that object discontinuity is a footgun, and it’s totally fine with me to find a way to make the default behavior of Realms not have that footgun. That sounds great. Folks who are using a membrane library now, I think it is also fine to say for their use case, they will still need to use a membrane library (perhaps a different one around realms, but because they're already using a library, they already accept that cost). What I am not comfortable with: I can already use iframes and just directly pass objects around. That's a capability that eternally exists and can never be removed, so it does not make any sense to me that I would simply not be able to use Realms without paying for the cost of some sort of membrane library so I can replace the ways I'm currently using iframes (for example, to grab reliable copies of prototype methods or built-ins). I can already do this, and I want to be able to do it with Realms instead. This call wrapping stuff - I've heard that there's at least one use case for it, which is I believe Amp, and that's great, but it seems like a lot of complexity. It doesn't match the capabilities that the platforms already have, and could be done with a library around those capabilities anyway, so I just really don't understand why. The sense I'm getting is that there are some folks in the web world who wish iframes didn't exist and thus are pushing back and forcing Realms not to match iframes’ capabilities - that perception may be completely unfair and wrong, and I apologize if it is - but I think it's really important that Realms have the ability somehow to be an iframe, basically.

-CP: Yeah, I can take that one. Well partially so yeah, we have discussed this extensively mostly with Google Fox. I think she was here the current API allows to eventually open that gate I would say I just give you access to the global space of the realm. So it's a possibility that in the future we can do that. I hope that we're going to revisit that at some point. If if the implementers are okay with having that we could add just very simple thing to do, but for now, it's just not really we got to push back on that particular point and that's what we're making a changes. So we want to continue arguing that the who have to get implementers provide more direct feedback
+CP: Yeah, I can take that one. Well partially so yeah, we have discussed this extensively mostly with Google Fox. I think she was here the current API allows to eventually open that gate I would say I just give you access to the global space of the realm. So it's a possibility that in the future we can do that. I hope that we're going to revisit that at some point. If if the implementers are okay with having that we could add just very simple thing to do, but for now, it's just not really we got to push back on that particular point and that's what we're making a changes. So we want to continue arguing that the who have to get implementers provide more direct feedback

JHD: Yeah, and I would love to assess that beacuse to me Realms don't seem worth it without the ability to have an identity discontinuity like I pointed out.

@@ -425,7 +424,7 @@ JHD: No, I think you're right, but it's a is a much smaller surface area and thu

CP: And I'll just and I also think is important that higher. For that use case is JHD has tried you can not guaranteed to be the first one to be there. So creating an iframe to do that is the proper way and even the creation of the iframe doesn't even guarantee you that yet. You're going to be the first one but separate question, but I think it's real that you cannot achieve that with the current and proposed of without introducing a membrane, which is probably going to be more complicated.

-YSV: BFS did you have a further reply that you wanted to make here?
+YSV: BFS did you have a further reply that you wanted to make here?

BFS: Yeah, so this is kind of reply to the topic not necessarily to that. We're talking about iframes in particular. We're talking about a specific host capability that not all JavaScript hosts have not all seek to have and we're talking about a capability that that host ecosystem the The web environment is moving discourage and flat-out banning some of the behaviors were stating are always going to be there through their CSP stuff. I don't think we should use deprecated features that are seen as potential problem as a forcing function that it needs to land in the language itself when it is a host API that we're talking about. That's all I just don't think this topic is a good direction for us as a committee to walk towards.

@@ -433,11 +432,11 @@ YSV: Okay, and JWK also has a comment here or reply here

JWK: I agree it's hard to handle with module graphs from different realms, but I think we should keep the capability for advanced users to opt-in to the original access to the objects or the membrane the version of access to objects. And I have to say the current solution of the wrapped function looks like a poor Man's membrane, and why not make it a full functional membrane?

-YSV: Okay, and we have DE next.
+YSV: Okay, and we have DE next.

-DE: JHD is arguing that this proposal would not be worth it without the ability to share objects directly not just primitives and callables through the through the realm boundary and I see this style of argumentation a lot in TC39 that you know, this proposal wouldn't be worth it. Unless we add that feature and I want to push back against that it's a general style of argument. I think even though we can find use cases for that other feature. It's reasonable for us to come take a on a smaller feature set. That would still meet important use cases so that the champion group here has provided important use cases that this feature meets, and it's a relatively simple way to meet them and I don't think it's appropriate to say “Well, it must also provide this other thing because it's accessible through that other mechanism”. As BFS stated some parties view this is deprecated. It's not available on all JavaScript environments. And so I'm not I'm not convinced by that argument that the scope needs to include sharing objects between Realms. I don't think it would be a bad direction to go in I could see use cases, but I disagree that it's a precondition for moving forward.
+DE: JHD is arguing that this proposal would not be worth it without the ability to share objects directly not just primitives and callables through the through the realm boundary and I see this style of argumentation a lot in TC39 that you know, this proposal wouldn't be worth it. Unless we add that feature and I want to push back against that it's a general style of argument. I think even though we can find use cases for that other feature. It's reasonable for us to come take a on a smaller feature set. That would still meet important use cases so that the champion group here has provided important use cases that this feature meets, and it's a relatively simple way to meet them and I don't think it's appropriate to say “Well, it must also provide this other thing because it's accessible through that other mechanism”. As BFS stated some parties view this is deprecated. It's not available on all JavaScript environments. And so I'm not I'm not convinced by that argument that the scope needs to include sharing objects between Realms. I don't think it would be a bad direction to go in I could see use cases, but I disagree that it's a precondition for moving forward.

-LEO: All right. I want to iterate something here with Regarding this group that the current discussion. I've been I just also talked to JHD. There is one problem here where we also trying to face some challenges. There is challenges involving both like the original form of the realms propsoal. Oh like the previous form and this new calendar or boundary one. JHD is telling like realms is not useful with this form, but it's too has an opportunity. You need to with proper discussion to discuss extensions of this realm with the I believe where we can meet these requirements in the future. But right now this current form is meeting a lot of our initial goals as I mentioned at the beginning. It's sort of like picking the challenges that we have because we have challenges on both ways. I know we might not disagree with all of them. But I think this is like a good path to go to move forward instead of being stuck like Like if we try to move to the original form we're going to be stuck again for the same reasons. We've been stuck for so long. JHD my callable action is actually do you think this is even considering all of this? Do you think this is something that for you? It's something that you would still object thinking this way like if. We want to explore expansion of this realm proposals after it's done with proper discussions in it It's still something that You still would still object with the current format?
+LEO: All right. I want to iterate something here with Regarding this group that the current discussion. I've been I just also talked to JHD. There is one problem here where we also trying to face some challenges. There is challenges involving both like the original form of the realms propsoal. Oh like the previous form and this new calendar or boundary one. JHD is telling like realms is not useful with this form, but it's too has an opportunity. You need to with proper discussion to discuss extensions of this realm with the I believe where we can meet these requirements in the future. But right now this current form is meeting a lot of our initial goals as I mentioned at the beginning. It's sort of like picking the challenges that we have because we have challenges on both ways. I know we might not disagree with all of them. But I think this is like a good path to go to move forward instead of being stuck like Like if we try to move to the original form we're going to be stuck again for the same reasons. We've been stuck for so long. JHD my callable action is actually do you think this is even considering all of this? Do you think this is something that for you? It's something that you would still object thinking this way like if. We want to explore expansion of this realm proposals after it's done with proper discussions in it It's still something that You still would still object with the current format?

JHD: Yes, so let me reply to both DE and you. This isn't expanded scope, this is a reduction in scope, and it's a reduction in scope based on arguments that we, I don't think, have fully and fairly heard out in plenary. It is totally reasonable to say for any feature, “let's ship a subset and add something later”. There's a number of times with that's been done in the past. I don't think there's consensus that that's always a good approach but certainly it's a viable approach. However, that's only a viable approach when there remains a path to future addition of the extra features, and based on what BFS saying if browsers will never want to allow this feature, then it's not “shipping a subset now and adding it later”, it's “sneaking in a rejection of that extra feature”, and based on these arguments, I have no hope or expectation that if we ship this form of Realms that we would ever get the direct object sharing form. I think it's completely reasonable to to withhold consensus for advancement absent having those discussions in plenary and not just the with the champions in back channels.

@@ -451,45 +450,45 @@ SYG: Okay, I mean you're being convinced of the arguments being presented doesn'

JHD: I mean that it's a bad argument that an opt-in footgun is something you’re likely to accidentally do wrong.
-SYG: Yes, and we have to if we have right the the specific foot gun too kind of say my understanding here is not simply that you can you can mingle object wraps through the user realm and the incubator realm the specific foot gun. that There was active confusion both from web devs at large. Just like on that I saw on Twitter as well as you know, supposedly fairly well-informed folks with the web platform team about the nuances of security boundaries internal to Google who were confusing the capability that Realms was providing as some kind of active sandbox. and that foot gun was that this confusion would seem to be very real and that folks were adopting it for guarantees that it was not providing where intending to adopt it for guarantees that it was not provided. The foot gun is not that the capability exists and that you could accidentally opt-in, it is that they were actively opting in through a misunderstanding and this was something we saw during the previous iteration of the proposal. It was not a theoretical thing. It sounded to me. your response JHD that your use case it's not quite the full package of use cases that the Realms proposal seeks to address here, which is running this not entirely untrusted, but maybe semi trusted code with a fresh set of things and kind of its own environment. That's like separate somehow. your use case seem to be more focused on the on getting pristine copies of things. Is that right?
+SYG: Yes, and we have to if we have right the the specific foot gun too kind of say my understanding here is not simply that you can you can mingle object wraps through the user realm and the incubator realm the specific foot gun. that There was active confusion both from web devs at large. Just like on that I saw on Twitter as well as you know, supposedly fairly well-informed folks with the web platform team about the nuances of security boundaries internal to Google who were confusing the capability that Realms was providing as some kind of active sandbox. and that foot gun was that this confusion would seem to be very real and that folks were adopting it for guarantees that it was not providing where intending to adopt it for guarantees that it was not provided. The foot gun is not that the capability exists and that you could accidentally opt-in, it is that they were actively opting in through a misunderstanding and this was something we saw during the previous iteration of the proposal. It was not a theoretical thing. It sounded to me. your response JHD that your use case it's not quite the full package of use cases that the Realms proposal seeks to address here, which is running this not entirely untrusted, but maybe semi trusted code with a fresh set of things and kind of its own environment. That's like separate somehow. your use case seem to be more focused on the on getting pristine copies of things. Is that right?

JHD: yeah, I mean it is essentially a form of the getOriginals API that lacks the reasons it was a concern. That's essentially what I want out of Realms.

-SYG: I would like to further push back that it is it is it seems rather unfair to the champion group into the Realms proposal which have not been confused about what the problem they're trying to solve to kind of pin that particular use case on the realms proposal and say that without that solving for that use case.
+SYG: I would like to further push back that it is it is it seems rather unfair to the champion group into the Realms proposal which have not been confused about what the problem they're trying to solve to kind of pin that particular use case on the realms proposal and say that without that solving for that use case.

JHD: From proposal is not useful. If we had never discussed it before that's fine, but we've discussed it in plenary and like I've discussed it with the Champions many times. So I think it's well understood that that is a hoped-for use case, but I hear you saying.

-YSV: if it's all right, I would like to advance the queue. We've had a few people waiting for a bit specifically. Let's go with MM's reply to this topic and then BFS and then continuing on with the rest of the queue. I think this topic is close to being closed, please go ahead MM.
+YSV: if it's all right, I would like to advance the queue. We've had a few people waiting for a bit specifically. Let's go with MM's reply to this topic and then BFS and then continuing on with the rest of the queue. I think this topic is close to being closed, please go ahead MM.

MM: Yeah, we want to confirm much of what JHD saying about how we got here historically. And to respond to I think it was DE's using the term expanded scope in terms of adding in the direct access. It was the denial of direct access. That was the expanded scope. Realm started out simpler. As a result of Google's expressed concern and objection of the foot gun issues on direct access. And what we ended up with here was definitely an expansion in scope to deny direct access. It was motivated by the dangers of direct access and on this one. I'm kind of in the middle. So let me let me see if I can briefly State my in the middle perspective on this, which is that Realms both before and after are a security sandbox and the attempts to deny that or not understand that or paint it in other terminology have largely been besides the point. However, the experience at confirm very strongly that when you try to do anything at the realm boundary for security purposes other than build a complete membrane that guarantees isolation with one mechanism. if you have a bare membrane boundary and you do any kind of ad hoc manual programming trying to use it as a security boundary you will screw up. So in that sense, I want to confirm Google's concern as valid if you use the the realm boundary as a security boundary. You've got to have a single bounded mechanism that guarantees isolation so that you can then do everything else on top of that isolation.

- we would have proposed building in a membrane directly if we had a simple membrane to propose to build indirectly but membranes are a pattern not at this point an abstraction mechanism. So what we're proposing here is essentially an isolation guaranteeing abstraction mechanism that we've shown by proof of concept can support building membranes. So I think that this succeeds at squaring the circle here succeeds at navigating between these different concerns and coming out with something that that best meets The Joint set of concerns.

-YSV: All right, BFS, Do you have a response to the current topic or to mark? to the current topic not too much. All right, ahead.
+YSV: All right, BFS, Do you have a response to the current topic or to mark? to the current topic not too much. All right, ahead.

-BFS: To the current topic not too Mark. video calls their around weekly the cover this there on the TC39 calendar. So if anybody feels like this is being done in a back room. Please attend those calls wash the GitHub. These are things that have been discussed for many months. These are not new changes or the problem of sharing objects non Primitives is not new.
+BFS: To the current topic not too Mark. video calls their around weekly the cover this there on the TC39 calendar. So if anybody feels like this is being done in a back room. Please attend those calls wash the GitHub. These are things that have been discussed for many months. These are not new changes or the problem of sharing objects non Primitives is not new. In fact, it's years old. So it's a little strange to hear that. It's being done in kind of a shady sounding way, please if you feel so attend meetings at be on GitHub. Yeah, that's all. I

-YSV: believe we have a reply from Jack Works to what Mark had said earlier.
+YSV: believe we have a reply from Jack Works to what Mark had said earlier.

-Jack: Yeah Mark said this is a proof of concept of for isolation. That's the current semantics doesn't support we upgrading supporting works of for example objects to achieve the membrane and because the current realm if I pass the function to another realm and that's run past the function back. I don't skip the same identity of the function but wrapped it twice the function. That's so its blocking us from upgrading to force membrane future because it's as a behavior changing and that's what we breaking.
+Jack: Yeah Mark said this is a proof of concept of for isolation. That's the current semantics doesn't support we upgrading supporting works of for example objects to achieve the membrane and because the current realm if I pass the function to another realm and that's run past the function back. I don't skip the same identity of the function but wrapped it twice the function. That's so its blocking us from upgrading to force membrane future because it's as a behavior changing and that's what we breaking.

CP: No. We have we have created a proof of concept that this is not a problem. We have a experimental Library. I call it irealm and this experimental Ivory allow you to create a fully functional membrane across the two realm and yeah, you have to do a little bit more active acts to communicate between the two sides, but is a very small Library about less than 2 k. And so it's just a proof of concept demonstrate that in fact you will be able to do what you do with different graphs. Are you it gives you the illusion that you have access to the other object graph. We you do so through a bunch of proxies.
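[Editor's note] CP's point — that a full membrane with stable identities can be layered on top in userland via proxies — can be sketched in a toy form. This is not the actual irealm library, just a minimal illustration of the pattern (one proxy per crossed object, cached in a WeakMap so repeated crossings keep the same identity, which is exactly the property Jack notes raw wrapping lacks):

```javascript
// Toy membrane sketch: objects never cross directly; each side sees a Proxy
// standing in for the other side's object, and only primitives pass through.
function makeMembrane(target) {
  const proxies = new WeakMap(); // one proxy per object => stable identity
  function pass(value) {
    if (value === null || (typeof value !== "object" && typeof value !== "function")) {
      return value; // primitives cross as-is
    }
    if (!proxies.has(value)) {
      proxies.set(value, new Proxy(value, {
        // values read or returned through the membrane also go through pass()
        get(t, key) { return pass(Reflect.get(t, key)); },
        apply(t, thisArg, args) { return pass(Reflect.apply(t, thisArg, args)); },
      }));
    }
    return proxies.get(value); // the cached proxy, not a fresh wrapper
  }
  return pass(target);
}
```

Because the WeakMap caches proxies, crossing the same object twice yields the same proxy instead of a second layer of wrapping; a real membrane additionally handles set/has/delete traps, two directions, and revocation.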
-MM: Yeah is not the Exotic function created by this round method. You have to get to build a membrane. You're building an entirely new abstraction level with new proxy objects, including new functions that are not directly related to the Exotic function. +MM: Yeah is not the Exotic function created by this round method. You have to get to build a membrane. You're building an entirely new abstraction level with new proxy objects, including new functions that are not directly related to the Exotic function. -YSV: all right, I if there's any more topics related to what we've been discussing right now around iframes, please start a new topic. Let's move on to SYG's discussion. +YSV: all right, I if there's any more topics related to what we've been discussing right now around iframes, please start a new topic. Let's move on to SYG's discussion. -SYG: right, I would like to extend my thanks to the champion group and the folks who have done a lot of hard work here to like Mark said I do believe this indeed squares the circle. When you know when we originally have grazed the the concern and proposing building in an isolation boundary directly. There was a pretty big expressivity problem with with Cycles which this callable boundary neatly solves and I'm very impressed by cleverness of the solution so just a thank you there. I'm very supportive of the current direction the there are some smaller remaining things that will not be blockers and I believe they can be there more mechanical issues that can be worked through one of them is the separate module graph, I think I don't think there's any conceptual issue there, but some kind of progress on draft on how this would integrate on the HTML side with the graphs machinery would help build confidence there That itss the right choice. I'm kind of I find the import value API a little strange in that it combines this Dynamic import mechanism with an export mechanism. That seems to be unrelated to what you just imported. 
If I read the spec job correctly. You can just export anything like that. You can try to get the value of anything. It just so happens that there's an import Step at the same time, but it doesn't have to be like a value that's exported by the thing you just imported. So that's a little bit strange to me. And finally there are some discussions internally at Google about the name realm and if it should signal more unambiguously that it is not a security boundary in the browser security sense. I imagine this will be very contentious. I don't really want to get into it now, but that is a remaining concern but it's not a blocker. I want to stress that name is not a blocker.
+SYG: Right, I would like to extend my thanks to the champion group and the folks who have done a lot of hard work here. Like Mark said, I do believe this indeed squares the circle. When we originally raised the concern and proposed building in an isolation boundary directly, there was a pretty big expressivity problem with cycles, which this callable boundary neatly solves, and I'm very impressed by the cleverness of the solution, so just a thank you there. I'm very supportive of the current direction. There are some smaller remaining things that will not be blockers; I believe they are more mechanical issues that can be worked through. One of them is the separate module graph. I don't think there's any conceptual issue there, but some kind of progress on a draft of how this would integrate on the HTML side with the module graph machinery would help build confidence that it's the right choice. I find the importValue API a little strange in that it combines this dynamic import mechanism with an export mechanism that seems to be unrelated to what you just imported. If I read the spec text correctly, you can just export anything like that; you can try to get the value of anything. It just so happens that there's an import step at the same time, but it doesn't have to be a value that's exported by the thing you just imported. So that's a little bit strange to me. And finally, there are some discussions internally at Google about the name Realm and whether it should signal more unambiguously that it is not a security boundary in the browser security sense. I imagine this will be very contentious. I don't really want to get into it now; it is a remaining concern, but it's not a blocker. I want to stress that the name is not a blocker.

LEO: That's all true. I just want to say likewise on the thanks, for all the iterations on importValue. The original name was importValue, but we wanted to rename it to avoid ambiguity with dynamic bindings, because it actually gets a value from a binding of this module namespace, but it's not ergonomic. I'm totally open to bikeshedding the names, and yes, we are still working on trying new things for this round. We're just not finding any alternative that we feel solid enough about to suggest renaming Realm; we thought about CallableRealm. We could probably bikeshed this, but I think this should be done async too.

CP: Yeah, on importValue, we also discussed some alternatives, an expansion of the import assertions, maybe something like that, some options that can be passed. So we're still looking into it; if you or anyone has any feedback on what kind of API we could use there, that would be great.

-YSV: We have a clarifying question from DE about the relationship with modules. Go ahead Daniel.
+YSV: We have a clarifying question from DE about the relationship with modules. Go ahead, Daniel.

-DE: Sorry not a clarifying question more like a response. So SYG mentioned that something needs to be done about the way that Realms would work with the HTML module map structure and how HTML works with molecules.
I would like more information on this because I produced a PR for this in the past and I don't understand what the problem with it is. I think this version of Realms would work with that PR just the exact same way as the previous version did about I see
+DE: Sorry, not a clarifying question, more like a response. So SYG mentioned that something needs to be done about the way that Realms would work with the HTML module map structure and how HTML works with modules. I would like more information on this, because I produced a PR for this in the past and I don't understand what the problem with it is. I think this version of Realms would work with that PR the exact same way as the previous version did.

-SYG: That helps my bad didn't know the previous version the pr still applied.
+SYG: That helps. My bad, I didn't know the PR for the previous version still applied.

-DE: Yeah, because it's pretty orthogonal to you know, this restriction is pretty orthogonal to how data modules work, but I think there was some discomfort with having a module map for realm and that's something that I continue to not understand. So second there was the concern about import value Maybe. It's you know, it could be surprising that you're importing something but also getting a single export at the same time and you know CP was suggesting that maybe we should have it import and you get a special name space object, but that would be far more complicated because we would have to make a special kind of module name space object that would perform this, you know wrapping or checking thing on each property access so you would to make a new kind of exotic object and it just seems like huge amount of overkill so I think it was it's just a much simpler approach.
Even if it's sorry
+DE: Yeah, this restriction is pretty orthogonal to how modules work, but I think there was some discomfort with having a module map per realm, and that's something that I continue to not understand. Second, there was the concern about importValue: it could be surprising that you're importing something but also getting a single export at the same time. CP was suggesting that maybe we should have it import and you get a special namespace object, but that would be far more complicated, because we would have to make a special kind of module namespace object that would perform this wrapping or checking on each property access. You would have to make a new kind of exotic object, and it just seems like a huge amount of overkill, so I think this is just a much simpler approach, even if it's... sorry.

SYG: If I can quickly reply: my concern was not that there's a single value to export, but that there didn't seem to be any connection between what you are getting out and the thing you just imported. I agree that getting a namespace exotic object would be a worse solution.

@@ -499,15 +498,15 @@ SYG: if I'm reading the spec text correctly The Binding that you are requesting

CP: No, no, it would be an export binding.

-SYG: Okay, then I retract that concern. Okay, good. Thank you.
+SYG: Okay, then I retract that concern. Okay, good. Thank you.

-YSV: Alright, the next topic is from MM.
+YSV: Alright, the next topic is from MM.

-MM: LEO in the slide where you're showing import value. We said something about needing an async function for something around the world, but I do follow that at all and I'm not aware of anything in this spec that that requires you to use an async function just for clarification.
+MM: LEO, in the slide where you're showing importValue,
we said something about needing an async function for something, but I don't follow that at all, and I'm not aware of anything in this spec that requires you to use an async function. Just for clarification.

LEO: I'm sorry, it requires an await; it produces a promise that will be resolved to the imported value, because it's connected to dynamic import.

-MM: Okay. We should we should all be should all remember that promises are the base level for this not wait. Await is higher level. Okay.
+MM: Okay. We should all remember that promises are the base level for this, not await. Await is higher level. Okay.

LEO: Yeah, I was just trying to quickly point out that if you want that value, you need to await it.

@@ -517,7 +516,7 @@ WH: How would this interact with records and tuples?

CP: Yeah, so this is one of the things that we're eager to see. Obviously, once you have Records and Tuples, because those are primitive values, you will be able to pass them around and return them, because they don't have identity, and they will be shared across different realms; that will work really great. The only open question is really about boxes, the concept of the Box, and whether or not you will be able to unbox something from another realm. There have been some discussions around that, but we're waiting to see the development of Records and Tuples.

-SRV: Yeah the replace just to say we have to work here doing whatever we can do to make because end goal is to be compatible with realms. That would be great.
+SRV: Yeah, to reply, just to say we will do whatever we can here, because the end goal is to be compatible with Realms. That would be great.

CP: And I noticed another thing that I want to mention here.
One of the counter-proposals from the Google folks was around using structured clone to be able to pass complex structures between the realms, and our pushback at the time was to focus on the primitive values, because Records and Tuples will allow you to pass more complex structures once we have them in the language.

@@ -527,13 +526,13 @@ WH: Yes

LEO: Thank you

-DE: Very happy with where we ended up in this proposal, you know, you can see in the public github thread in the public, you know videos from the meetings where we discuss the various Alternatives here that there were a number of other APIs discussed which relied on things like particular globals or I proposed one that was just kind of a mess of different functions that are you until what was going on, and I think collectively we in the in the participants of the SES calls. I'm happy that we were able to iterate on that and come up with something that was both simple and intelligible. and yeah, I support this proposal going forward. Yeah. Thanks for the help man. You have a lot on dates.
+DE: Very happy with where we ended up in this proposal. You can see in the public GitHub thread and in the public videos from the meetings where we discussed the various alternatives that there were a number of other APIs discussed, which relied on things like particular globals, or I proposed one that was just kind of a mess of different functions where it was hard to tell what was going on. I think collectively, with the participants of the SES calls, I'm happy that we were able to iterate on that and come up with something that was both simple and intelligible. And yeah, I support this proposal going forward. Thanks for all the help.

LEO: Yes, there is a big list of a lot of people who really helped with this proposal. DE has been among the top contributors and active facilitators in moving this proposal ahead.

YSV: All right.
Next we have GB.

-GB: Yeah, just a second what DE said, it's a really exciting Direction and really great to see importvalue solution. is a really neat way of solving this problem. I just wanted to ask if because the one thing that does change is typically with Dynamic Imports. With normal imports you have the ability to get the name space and understand, you know, there's various design decisions there but specifically there is a normally with namespaces you can get the list and just thinking about if you wanted to have a slightly more reflective API if you didn't know the module you importing and you wanted to be able to reflect over it and get gather the certain exports or check for names. I was just wondering if there has been any consideration about being able to determine the export values. I also saw there was an issue about a possible Dynamic import form as a way to kind of Drive execution, you know, if something like that could return like a list of exports was just thinking if there's Ways that could kind of facilitate a reflective API for this where you if you don't already know the exports named in advance in order not to get narrower or if that's considered use.
+GB: Yeah, just to second what DE said, it's a really exciting direction and really great to see; the importValue solution is a really neat way of solving this problem. I just wanted to ask, because the one thing that does change is what you typically get with dynamic imports. With normal imports you have the ability to get the namespace, and there are various design decisions there, but normally with namespaces you can get the list of exports. I was thinking about whether you might want a slightly more reflective API: if you didn't know the module you're importing and you wanted to be able to reflect over it and gather certain exports or check for names. I was just wondering if there has been any consideration of being able to determine the export values. I also saw there was an issue about a possible dynamic import form as a way to drive execution; if something like that could return a list of exports. I was just thinking whether there are ways that could facilitate a reflective API for this, where you don't already know the export names in advance, or whether that's a considered use case.

CP: Yeah, it is good feedback. We haven't really talked about this; we have touched on this use case. We have to look at that.

@@ -543,15 +542,15 @@ CP: Yeah, we discussed that one the problem with that one. Is that what happened

JWK: Well, that makes sense. Thanks.

-DE: I also think that it's possible to build both of these ideas both what GB suggested and what and what JWK suggested in terms of this minimal APIs. he pattern is that you make single module that they you use import value on but then that module contains its own Dynamic import and then it can expose things out to the other realm by passing primitive values and by calling functions the membrane implementation that Salesforce has produced is is an example that shows some of this off and it will really be a quite short code sample to do something like what GB suggested with enumerating the the exports of course the proxy to implement what JWK is suggesting is more complex, but no less doable. So anyway, think we should say minimal with this proposal. proposal. As MM was saying, you know to make good use of this. It requires a bunch more infrastructure, and we're not really able to provide all of that here. It's more like providing the basis for patterns for using.
+DE: I also think that it's possible to build both of these ideas, both what GB suggested and what JWK suggested, on top of this minimal API.
The pattern is that you make a single module that you use importValue on, but that module contains its own dynamic import, and then it can expose things out to the other realm by passing primitive values and by calling functions. The membrane implementation that Salesforce has produced is an example that shows some of this off, and it would really be quite a short code sample to do something like what GB suggested with enumerating the exports. Of course, the proxy to implement what JWK is suggesting is more complex, but no less doable. So anyway, I think we should stay minimal with this proposal. As MM was saying, to make good use of this, it requires a bunch more infrastructure, and we're not really able to provide all of that here. It's more like providing the basis for patterns of use.

???: Yeah, DE, one of the things that is critical to open the door for many of these solutions is to have something similar to module blocks, something that does not require parsing and evaluation in order to do some of the coordination between the two realms. So we hope that we can get module blocks or something similar; that would really open the door for all of these.

-DE: Want to clarify the I mean, as you know module blocks are not a requirement for Realms. They're very ergonomic Improvement or and deployment Improvement, but all of Realms, you know, these mechanisms that are described would just as well with You know modulo these deployments and ergonomic issues with separate modules with separate files compared to with multiple blocks.
+DE: I want to clarify: as you know, module blocks are not a requirement for Realms. They're a very nice ergonomic and deployment improvement, but all of these Realms mechanisms that are described would work just as well, modulo those deployment and ergonomic issues, with separate modules in separate files compared to module blocks.
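DE's bootstrap pattern can be sketched roughly as follows. Note this is pseudocode against the proposed (unshipped) Realm constructor and `importValue` API; `./bootstrap.js`, `load`, and `exportNames` are hypothetical names invented for illustration, and the sketch leans on the rule that only callables and primitives cross the callable boundary.

```js
// Pseudocode against the proposed Realm API (not shipped anywhere yet).
// Hypothetical bootstrap module that runs inside the new realm:
//
//   // bootstrap.js
//   const loaded = new Map();
//   export async function load(specifier) {
//     loaded.set(specifier, await import(specifier)); // realm-local import
//   }
//   export function exportNames(specifier) {
//     // only primitives cross the callable boundary, so return a string
//     return Object.keys(loaded.get(specifier)).join(',');
//   }
//
// Outside the realm, importValue hands back wrapped functions:
const realm = new Realm();
const load = await realm.importValue('./bootstrap.js', 'load');
const exportNames = await realm.importValue('./bootstrap.js', 'exportNames');
await load('./some-module.js');
const names = exportNames('./some-module.js').split(',');
```

This is one way GB's reflective-enumeration use case could be built in userland on the minimal API, without the proposal itself growing a namespace-reflection surface.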
-???: Yeah, fine even with evolving you're willing to evolve allow evolving you laughs.
+???: Yeah, fine, even with evaluation, if you're willing to allow evaluation.

-YSV: Yes indeed. we have a another reply from BFS.
+YSV: Yes indeed. We have another reply from BFS.

BFS: Yeah, I don't think this problem is necessarily unique to exports enumeration. I think there are plenty of cases where you want to interact with code that may do things like preserve the identity of an object across calls, for whatever reason, and I think that's just going to have to be something we figure out later. We have membranes built around this; membranes are generally not something that we could easily approach as a standard library feature at this time. So I think maybe we should reframe exports enumeration as a more general problem for any kind of enumeration where you want to iterate properties or stuff like that. That's all.

@@ -561,37 +560,37 @@ JRL: So I want to explain a bit of AMPs use case or a pattern that we're using c

CP: Yeah, I think that's possible. When you call importValue, you can pass a blob URL, at least on the web. You should be able to do so.

-JRL: Okay, to answer BFS’ questions. We use a blob object and then create a blob URL if either of those are usable in the realm that would be fine with us.
+JRL: Okay, to answer BFS's questions: we use a blob object and then create a blob URL. If either of those is usable in the realm, that would be fine with us.

-MM: Could you could you explain in JavaScript terms of what a blob is a web API.
+MM: Could you explain in JavaScript terms what a blob is? It's a web API.

-JRL: There's a global Constructor called blob. It essentially allows you to create a new script that you can then evaluate.
+JRL: There's a global constructor called Blob. It essentially allows you to create a new script that you can then evaluate.

MM: I'm not asking about the API for creating it, I'm asking about what the value itself is. What kind of an object is a blob?

-JRL: It's just an instance to blob Constructor so you can pass it to import to `URL.createObjectURL` and then you can get a dynamically evaluatable script from The Blob.
+JRL: It's just an instance of the Blob constructor, so you can pass it to `URL.createObjectURL` and then you can get a dynamically evaluatable script from the blob.

-CP: Yeah MM, this is used today with Dynamic import you can create. Eight a blob and pass it to the dynamic import and it will work just fine. It's it's consumable buffer.
+CP: Yeah MM, this is used today with dynamic import: you can create a blob and pass it to the dynamic import, and it will work just fine. It's a consumable buffer.

MM: Even when CSP has suppressed eval?

-CP: Yes, yes, but if you could block block as well if you want to but if you don't block it and you have eval.
+CP: Yes, yes, but you could block blob: as well if you want to; if you don't block it, you effectively have eval.

MM: Yes. Okay, that's really a piece of news. Thank you.

-JRL: Yeah so blob when you use it to create a URL creates a new `blob:` scheme URL and it it's blob colon and then some uuid URL and you're able to pass that around to a script evaluator and then evaluate that blob as if it was a real script that existed on a server and that allows us to get around eval.
+JRL: Yeah, so when you use a blob to create a URL, it creates a new `blob:` scheme URL: it's blob colon and then some UUID, and you're able to pass that around to a script evaluator and then evaluate that blob as if it were a real script that existed on a server, and that allows us to get around eval.

-??: Yeah doesn't just as I was saying that you will be able all to use that in the incubator around what we call the realm itself around Dodd import values. You will not be able to pass it into the room and use it inside the wrong obviously because it's an object but for not side you'll be able to evaluate these modules inside the realm.
+??: Yeah, just as I was saying, you will be able to use that outside, around the realm itself, around importValue. You will not be able to pass it into the realm and use it inside the realm, obviously, because it's an object, but from the outside you'll be able to evaluate these modules inside the realm.

YSV: I'm jumping on the queue to say that we need another note taker. Can someone volunteer please? We will finish this topic sooner and get to other things if we have a note taker. And if you are new to the committee, it would be a great way to get started working with it. I've got a volunteer on IRC. Okay, we can continue. BFS, your topic was clarified, yes? Then I will move on to WH.

-WH: I'm trying to understand the answer to Jack Works' question regarding what happens when you pass a function both ways — it gets double wrapped. And thus I'm trying to understand the claim that was made that this wrapping is helpful in making membranes that preserve round-trip object identity — how does that work?
+WH: I'm trying to understand the answer to Jack Works' question regarding what happens when you pass a function both ways — it gets double wrapped. And thus I'm trying to understand the claim that was made that this wrapping is helpful in making membranes that preserve round-trip object identity — how does that work?

-CP: so two parts of these. I know you could I not a question. Before I think was removed on the double wrapping we talk about these which you as well in the spec. We're not doing any optimization as always and so the implementers can choose to optimize the wrapping because if you are wrapping it already wrap an exotic object you could go a straight and jump into the target function rather than wrapping the function.
again, so you will be able to optimize that in terms of not having to create when you call one of these multiple wrap functions to have to go to multiple jumps, you can go straight into the target. That's one thing. The second thing is yes, it took us a while to come up with a solution to be able to use the wrap functions to pass information between the realm that is not necessarily in a ??? base therefore if you send me the function back, I would not be able to understand that that that was actually a function that I send you send you in in the first place. So that poses a problem how we could create a membrane that understand what's going on in the two sides of the boundaries. What we did in this case was to use some finals through some of these functions when you invoke the function. In the function is placing some internal reference some values in the size of the real that has the original Target the Target that you're wrapping on the other side. So every time that the other side needs to do an operation on the target, it has to first sign up to the realm that it will be using that Target by calling a function. So it does some initialization steps and then call the game with the actual operation that needs to be I carried on to that target. That was some kind of general idea of how we were able to implement this membrane on top of these wrapping my kind.
+CP: So, two parts to this. On the double wrapping, we talk about this in the spec as well: we're not requiring any optimization, as always, so implementers can choose to optimize the wrapping, because if you are wrapping an already-wrapped exotic object you could jump straight into the target function rather than wrapping the function again. So you will be able to optimize that: when you call one of these multiply wrapped functions you don't have to go through multiple jumps, you can go straight into the target. That's one thing. The second thing is, yes, it took us a while to come up with a solution to be able to use the wrapped functions to pass information between the realms, because identity is not necessarily preserved: if you send me the function back, I would not be able to tell that it was actually a function that I sent you in the first place. So that poses a problem for how we could create a membrane that understands what's going on on the two sides of the boundary. What we did in this case was to use signals through some of these functions: when you invoke the function, the function places some internal references, some values, on the side of the realm that has the original target, the target that you're wrapping on the other side. So every time the other side needs to do an operation on the target, it first has to sign up with the realm that it will be using that target, by calling a function. It does some initialization steps and then calls again with the actual operation that needs to be carried out on that target. That was the general idea of how we were able to implement this membrane on top of this wrapping mechanism.

-WH: Okay, that gives me a general idea of what's going on. Thank you.
+WH: Okay, that gives me a general idea of what's going on. Thank you.

-YSV: Daniel you also have replied to this question.
+YSV: Daniel, you also replied to this question.

DE: Yeah, just to summarize what he was saying: I think we could say it doesn't implement the membrane for you. You need a different kind of wrapping for membranes, but it provides defense in depth, and it's compatible with building a membrane. Is that accurate, CP?

@@ -599,79 +598,78 @@ CP: Yep. Great.

YSV: Thank you for that summary. Next up, we have LEO.

-LEO: Yeah, well if we don't have anything else to discuss at this current meaning for Realms, I just want to make sure we have some people signing up as reviewers.
for eventually requesting Station 3. I still intend to request these three for the next meeting. I need reviewers.
+LEO: Yeah, well, if we don't have anything else to discuss at this current meeting for Realms, I just want to make sure we have some people signing up as reviewers for eventually requesting Stage 3. I still intend to request stage 3 at the next meeting. I need reviewers.

DE: I will review.

-LEO: And who is that? Daniel?
+LEO: And who is that? Daniel?

-DE: So I think I could be revered.
+DE: So I think I could be a reviewer.

YSV: Yeah, I will also review, just to make sure I'm very familiar with the text as it is now.

-LEO: Thank you. So, I believe we have any SYG and YSV. That's right. I'm not sure if someone and muted there there is some noise now. That was me. I was going to encourage JHD to be involved in the review process as well.
+LEO: Thank you. So, I believe we have DE, SYG, and YSV. That's right. I'm not sure if someone is unmuted, there is some noise now. That was me. I was going to encourage JHD to be involved in the review process as well.

JHD: Yeah, I'm happy to be on the list. There's enough other reviewers that if I don't have time to review the full thing, it should still be fine.

-MM: I would appreciate it.
+MM: I would appreciate it.

-JHD: Sure, and yeah, I'm happy to be included.
+JHD: Sure, and yeah, I'm happy to be included.

-DE: Yep. MM reflects my reason for encouraging JHD. Well, thank you JHD. All right.
+DE: Yep. MM reflects my reason for encouraging JHD. Well, thank you, JHD. All right.

-YSV: Do we have any other questions or topics for isolated Realms that we want to discuss? The queue is currently empty and we have about five minutes left on this topic.
So I briefly wanted to touch on the topic that I introduced when we came back from lunch, which is what we want to do about the 30-minute overhang for new topics specific to the incubator calls chartering and also, the overflow for resizable ArrayBuffer and growable shared ArrayBuffer. We had three people explicitly say that they would prefer staying on with no break today and a couple of people signaled that either this would be fine or tomorrow after lunch would be fine. What are people's feeling in the room here. So far there have been no comments and nothing has been raised. However, I do have the previous three direct request to have today's meeting extended by 30 minutes. Are there any objections to extending today by 30 minutes so that we do not meet tomorrow. Thank you, explicit support from DE. We don't have any objections so we will stay or for me as well exposed to support from JHD as well. Okay, then we will continue on today for an extra 30 minutes longer than we normally would. Okay LEO have you finished everything you wanted to do with your topic now.
+YSV: Do we have any other questions or topics for isolated Realms that we want to discuss? The queue is currently empty and we have about five minutes left on this topic. So I briefly wanted to touch on the topic that I introduced when we came back from lunch, which is what we want to do about the 30-minute overhang for new topics, specifically the incubator call chartering and also the overflow for resizable ArrayBuffer and growable SharedArrayBuffer. We had three people explicitly say that they would prefer staying on with no break today, and a couple of people signaled that either this would be fine or tomorrow after lunch would be fine. What are people's feelings in the room here? So far there have been no comments and nothing has been raised. However, I do have the previous three direct requests to have today's meeting extended by 30 minutes. Are there any objections to extending today by 30 minutes so that we do not meet tomorrow? Thank you, explicit support from DE, and explicit support from JHD as well. We don't have any objections, so we will stay on. Okay, then we will continue on today for an extra 30 minutes longer than we normally would. Okay LEO, have you finished everything you wanted to do with your topic now?

-LEO: For the Realms. Yes. The next one is slightly connected to Realms too so yeah, I mean for isolated Realms.
+LEO: For the Realms, yes. The next one is slightly connected to Realms too. So yeah, I mean for isolated Realms.

-YSV: So in that case, please go ahead and start your next topic.
+YSV: So in that case, please go ahead and start your next topic.

-LEO: Yeah, just in case we are starting to call it Realms with the callable boundaries is start of isolated groups. But this is just the beginning with the name. Yes, I just use these two little rooms for too long. Let's go with the presentation. It should just afraid but slide that I'm sharing and I believe you can still see it.
+LEO: Yeah, just in case: we are starting to call it Realms with the callable boundary instead of isolated Realms. But this is just the beginning with the name; I just used "isolated Realms" for too long. Let's go with the presentation. This is the slide that I'm sharing, and I believe you can still see it.

### Conclusion/Resolution

-Signed Intention to request advancement for Stage 3 in the next TC39 plenary
-Reviewers Assigned:
+Signed Intention to request advancement for Stage 3 in the next TC39 plenary Reviewers Assigned:
+

## Symbols as WeakMap keys for Stage 2

- [Proposal](https://github.com/tc39/proposal-symbols-as-weakmap-keys)
- [Slides](https://docs.google.com/presentation/d/1TWg0T4PEeBqH4NooWE5fLi0gJtAiHuXSn_s2oPR0g2I/edit#slide=id.gcbecde6e4c_0_7)

- YSV: Yes, I can see it. Anyone have any issues with seeing the slide right now?
And also, before we dive in, do we have enough note-takers? I don't know if we have enough note-takers. Note-takers, please yell if we do not have enough hands on the keyboard. Go ahead, LEO.

-LEO: Thank you. Um, so with the TC39 famous last words, this one should be quick. So here I am to present symbols as weak keys. Let's go through this. So the primary goal for this proposal is just to have some a known object unique primitive values. queues for weak reference values the starts with weak Maps, but we're going to see more about it further. What this proposal is trying to introduce is just using a symbol in things such as weak map keys. And here I am to talk about unique values we map the keys are limited to today. They are limited to objects only due to their unique value and garbage collection observation. Symbols are also like unique primitive values with an expected short-term memory footprint. I believe so like that's my expectation and better translating to the usage go when we use them as unique keys. TCQ right now about this but among the other benefits this provides better ergonomics for weak references in general.
+LEO: Thank you. Um, so, with the TC39 famous last words, this one should be quick. So here I am to present symbols as weak keys. Let's go through this. The primary goal for this proposal is just to have non-object unique primitive values as keys for weakly referenced values. This starts with WeakMaps, but we're going to see more about it further on. What this proposal is trying to introduce is just using a symbol in things such as WeakMap keys. And here I am to talk about unique values: WeakMap keys are limited today, they are limited to objects only, due to their unique value and garbage collection observation. Symbols are also unique primitive values, with an expected short-term memory footprint (I believe so; that's my expectation), translating better to the usage when we use them as unique keys. There is something on TCQ right now about this, but among the other benefits, this provides better ergonomics for weak references in general.

-MM: I'm sorry quick clarification. You don't you don't mean weak references.
+MM: I'm sorry, quick clarification. You don't mean weak references?

-LEO: Yeah weak reference the queue, But yes, yeah. Yeah. It's such a minefield of words that are used to describe this I just try to use to say like symbols as WeakMap Keys as weak map values. I'm going to go through that like other the weak references finalization group. I have a slide for that for her, but I'm just trying to say like symbols as weak keys. I still need to find the best jargon for that.
+LEO: Yeah, weak references, the WeakRef. But yes, yeah. It's such a minefield of words that are used to describe this. I just try to say "symbols as WeakMap keys". I'm going to go through the others, the weak references and the FinalizationRegistry; I have a slide for that further on, but I'm just trying to say "symbols as weak keys". I still need to find the best jargon for that.

-MM: Yeah.
+MM: Yeah.

LEO: But I still believe symbols provide better ergonomics for that. Okay. Yeah. Well, the next slide is about the proposal jargon. We can see unique values as those values that cannot be reproduced or identified without access to the initial production; each new symbol is a distinct value, regardless of its given description or the lack of a specific one. Wait, let me check TCQ right now. Let's go to this slide. Yes, today we still have Symbol.for and well-known symbols. Symbol.for still creates a new symbol, but those are registered in the global registry, shared among any other realm.
If a symbol already exists with that key, the symbol can be reused; that's one of the caveats of this whole proposal. And the well-known symbols' values are also shared cross-realm. We understand those are values that are not productive as WeakMap keys, but I believe, from all the discussion that already happened around this at the TC39 GitHub orgs, this should be useful. In my opinion this should be a userland responsibility concern. The use cases that we have for this: the first one, most important for me, is just ergonomics. The proposal itself provides better ergonomics, with distinct values being used just as keys for WeakMaps. Rather than creating custom, meaningless objects, symbol values can serve a better purpose while still being primitive values, and ECMAScript already specifies symbols as primitive values to be used as unique non-string property keys. So for me it makes sense to also connect them as WeakMap keys as well. We have some adjacent use cases as well, where this sort of connects as an alternative or as an option, such as Records and Tuples, where dereferencing symbols through WeakMaps is also one reasonable path forward to reference objects from Records and Tuples. We've been discussing Box as probably one of the candidates, but it remains tentative, and this proposal would offer a good option to be explored. Records and Tuples also cannot contain objects, functions, or methods, and will throw a TypeError when someone attempts to add one. Realms and membrane systems were already discussed: membrane systems, or frameworks built on top of Realms, use Realms for communication across virtualized environments.
They generally rely on WeakMaps with many reference values used cross-realm, and this proposal opens exploration for using symbols as weak keys to keep membrane references, still with better ergonomics, in our current API. As you already know, objects cannot be accessed across realms; this proposal will ??? system to have this communication using the symbols. Although, I just want to be transparent here: the current proof-of-concept membrane being developed does not really fully require that, but it's something that we want to explore further. We want to explore the memory footprint of the daily usages; we cannot explore that without this actually being offered. Yeah, so we also talk about support for the weak references. I'm sorry, MM, I'm still finding the most accurate jargon here, because after analysis I think we may support symbols in WeakSets, and being used as actual WeakRefs and in the FinalizationRegistry. I'm really not sure, and don't have a good argument in my opinion to bring as a use case for this, other than consistency: as they now become allowed as WeakMap keys, the reason is, why not add them to WeakSet, WeakRef, and FinalizationRegistry? Maybe there is something to be explored with WeakRefs and FinalizationRegistry; I don't see much for WeakRefs, but consistency has been one of the things that I've seen other delegates claiming to be a good reason. And as a starting point, this proposal says that all symbols are allowed as WeakMap keys, as WeakSet entries, as weak references, and in FinalizationRegistries. I'm trying to alias this as "weak keys"; it's not a helpful expression. So yeah, this is just an example of usage of symbols. It's quite straightforward: instead of having an object, you have a symbol there. You would not notice the difference if the first line were `var unique = {}` instead.
It will be the same code. And the current status of this: we have the spec draft already, with only the normative changes. There are some PRs to adjust, but they're all editorial; the spec draft just reflects the normative changes. We have the proposal repo, and we have an actual draft against ECMA-262 where I still need to capture these editorial PRs. It's at stage 1, and there is a fair amount of previous discussion on this, some of it on the thread started by GCL, and we have a fair amount of input from delegates. It's at stage 1, and I'm here requesting consensus for stage 2 in its current form. I think it's good enough, and we can define what would be next for stage 3. So do we have consensus for stage 2?

-YSV: We have a few things on the queue right now. Our first topic is from RRD, please go ahead.
+YSV: We have a few things on the queue right now. Our first topic is from RRD, please go ahead.

-RRD: Yes. I'm in support of this and we originally wanted to bring that back for recording tuples as you explained in your sights, but we kind of left it on the side because we focused on bucks. That being said, I don't think that bugs and symbols as weak keys are mutually exclusive. If their utility, so yeah, it's important. This is good.
+RRD: Yes, I'm in support of this, and we originally wanted to bring that back for Records and Tuples, as you explained in your slides, but we kind of left it on the side because we focused on Box. That being said, I don't think that Box and symbols as weak keys are mutually exclusive; they each have their utility. So yeah, it's important. This is good.

-YSV: Thank you Robin next we have DE.
+YSV: Thank you, Robin. Next we have DE.

DE: Yeah. I also explicitly support this proposal. I mean, as you might expect, because I brought it to committee.
I think it just makes sense for symbols to be permitted, and we have these use cases that were in the presentation, and other use cases that committee members have raised that didn't even make it to the presentation. So let's go ahead with this. As for the spec text, I reviewed it and it looks sound.

-YSV: Thank you, DE. WH, you're next.
+YSV: Thank you, DE. WH, you're next.

WH: It's still unclear to me what problem this solves.

LEO: Yes. I mean, I went through this in my presentation.

-WH: It seems like this is making things more orthogonal. That I agree with but it's also introducing a new foot gun. So it's unclear to me actually what problems this solves.
+WH: It seems like this is making things more orthogonal. That I agree with, but it's also introducing a new footgun. So it's unclear to me actually what problems this solves.

-LEO: What footgun do you consider it contains?
+LEO: What footgun do you consider it contains?

-WH: A weakmap is no longer weak if you use a symbol that can be regenerated from a string at will.
+WH: A WeakMap is no longer weak if you use a symbol that can be regenerated from a string at will.

-LEO: Yes that yes. Yeah, it does have the option if you use well-known symbol or symbols. registering the global registry shared across Realms those
+LEO: Yes, yeah. It does have that caveat if you use a well-known symbol, or symbols registered in the global registry shared across realms; those...

-WH: I'm not concerned with collecting the few well-known symbols.
+WH: I'm not concerned with collecting the few well-known symbols.

CP: I believe when we talked about this in the SES meeting, we were saying that if you're using Symbol.for and using that as a key for a WeakMap, that's not different from using something like Array.prototype as the key for a WeakMap; those would never be collected either.

@@ -681,17 +679,17 @@ CP: Can you provide more details?
WH: Just because it's possible to screw up by doing something deliberate doesn't mean that we should make it easy to screw up accidentally.

-LEO: Well, I think the this is a the risk is minor because like someone would know what they were would be doing with their own code and by adding like a well-known symbol or symbol there is already shared. He's also like the person would make the symbol as security say like the person would know like what it would mean they were would be at an array prototype to a weak map key.
+LEO: Well, I think the risk is minor, because someone would know what they would be doing with their own code by adding a well-known symbol, or a symbol that is already shared. It's like the person would know what it would mean to add Array.prototype as a WeakMap key.

WH: Well-known symbols are a distracting tangent to this discussion and I don’t want to worry about those. I'm concerned about the case of somebody generating their own `Symbol.for` symbols and using those as weak map keys.

-LEO: Well, I'm using simple.for in well known because they have the same like this dish to reproducible like there's there's two identifiable in like the your initial question was actually like “why what is the problem?” I think is not a like a big pain Point here, but there is a good I see and many other delegates for see a very good ergonomic Improvement by just adding symbols as weak map keys and the consideration of like should we be limiting these symbols to Be strictly unique symbols as like not just allowing symbols are registered in this global registry. I think there were also like could like if we did that in fact in spec and not userland we would have other concerns as like identifying symbols are already existing as like we use for symbol keys for I don't see like any value in that in like verification process for low-level API in the spec, so that's why I'm not introducing this limitation. I still think the value of the ergonomic Improvement that this proposal offers is a better. It's a good solution over like the caveats.
+LEO: Well, I'm grouping Symbol.for and well-known symbols because they are similarly reproducible, similarly identifiable. Your initial question was "what is the problem?". I think there is not a big pain point here, but I, and many other delegates, see a very good ergonomic improvement in just adding symbols as WeakMap keys. And on the consideration of whether we should be limiting these to strictly unique symbols, that is, not allowing symbols registered in the global registry: I think if we did that in spec, and not userland, we would have other concerns, like identifying whether a symbol already exists in the registry, as we do for symbol keys. I don't see any value in that kind of verification process for a low-level API in the spec, so that's why I'm not introducing this limitation. I still think the ergonomic improvement that this proposal offers is a good solution despite the caveats.

-CP: Yeah one thing to provide a concrete example of where you will use is and you were asking the question the previous topic about creating a membrane between realms and we were able to do it without the symbols. But if you have symbols and what really you could do is more simple approach where an object that you have to create a proxy on the other side. You generate a key symbol for it. You put it in a way.
Are you sure the symbol with the other side and that symbol represent the identity of the real Target any time that you have to do an operation on the other side you pass the symbol back and we use the symbol as a way to determine what the target is and when the symbol is collected on the other side because no one is using it anymore. No one is using it in the realm or even after from then the WeakMap will do the proper thing that those are that's a use a use case where you have symbols that are primitive value chain between grams and they serve as a key for WeakMaps that preserve identity somehow in a realm.
+CP: Yeah, one thing, to provide a concrete example of where you would use this: you were asking the question, in the previous topic about creating a membrane between realms, and we were able to do it without the symbols. But if you have symbols, what you could do is a simpler approach, where for an object for which you have to create a proxy on the other side, you generate a key symbol for it, you put it in a WeakMap, and you share the symbol with the other side; that symbol represents the identity of the real target. Any time that you have to do an operation on the other side, you pass the symbol back, and we use the symbol as a way to determine what the target is. And when the symbol is collected on the other side, because no one is using it anymore in the realm, then the WeakMap will do the proper thing. That's a use case where you have symbols that are primitive values shared between realms, and they serve as keys for WeakMaps that preserve identity somehow in a realm.

-WH: I agree with it. I love the idea of using unique symbols as weak map keys but skeptical about using `Symbol.for` symbols, which could never be collected because anybody can regenerate their strings in the future.
+WH: I agree with it.
I love the idea of using unique symbols as weak map keys but skeptical about using `Symbol.for` symbols, which could never be collected because anybody can regenerate their strings in the future. -YSV: We'll start with Kevin. +YSV: We'll start with Kevin. KG: Yeah, so I'm on the queue. I share this concern, but I think it's useful to think about whether this is likely to come up. Symbol.for is kind of a niche API. I don't end up using it in my own code, I don't encounter it in other code bases very frequently. The place I see it come up is when someone very specifically intends to make something that can never be GCed and will be shared by everyone. I think the fact that it's not a thing that you just use randomly, it's a thing that you use specifically for the case when you want something that will live forever means that hopefully anyone who does that is going to understand the implications of putting it in a WeakMap. It's not like this is an easy mistake to make; it's something that I think is pretty clear from the API, so I think that makes me a lot less worried about it. @@ -699,17 +697,17 @@ WH: I disagree with the claim that `Symbol.for` symbols can’t be GCed. They ca KG: It is true that it can be GC'd, but the reason that you do it is because you want to have identity across all callers of symbol.for which is, I agree, slightly different from wanting it to never be GCed, but it's still like - this is not a thing that you normally do and is only a thing that you want to do in these scenarios when the thing you care about is the ability to have the symbol with the identity at different points in your program. -WH: Yeah, I just think that symbols generated by `Symbol.for` are very dangerous to use with weak maps. +WH: Yeah, I just think that symbols generated by `Symbol.for` are very dangerous to use with weak maps. -YSV: We have a reply from TLY. +YSV: We have a reply from TLY. 
TLY: Yeah, I just wanted to point out a specific and really common use case, which is that in library code, if you want to have something which is interoperable with the library, you often use Symbol.for so that it's discoverable by the library. For example, rxjs uses Symbol.for("observable"), so you can mark something as something that can be expected to have the observable protocol. So it's not because you don't want it to be garbage collected; it's because you want it to be unique and discoverable across the code. But I don't know how that would interact with WeakMaps. I don't know if that makes it any more likely to be used in WeakMaps; just that it's not as niche as you're saying, but it is probably understood as what it's for, though maybe not, since you were saying it's for garbage collection.

-YSV: We have on the queue GCL, let's go.
+YSV: We have on the queue GCL, let's go.

GCL: Yeah, I just wanted to say: WeakMap is not an exceedingly rare API, and Symbol.for is not an exceedingly rare API, and it does come up, as TLY said. But when these two things come into contact with each other, I think it's exceedingly unlikely that that contact won't be intentional, and I think the limitation, when this is intentional, would be quite frustrating, for the same reason that normal symbols being limited here is quite frustrating. It just breaks orthogonality.

-YSV: Okay, we have our next topic which comes from RGN.
+YSV: Okay, we have our next topic which comes from RGN.

RGN: I just wanted to express enthusiastic support for this. I think that the gap in functionality is frustrating. It's not just hypothetical.

@@ -719,19 +717,19 @@ JHD: repeating the explicit support for this proposal; this is great.

YSV: Next we have LEO.
-LEO: Yeah, so I just want to make sure if we can put this proposal I am still requesting stage 2 in case it advances to stage two, I will ask for reviewers for stage 3 entrance of want to make sure because there there is a few concerns raised here. Are these two? objections for stage 2 does anyone object can we go? Can we move? Let me be clear. Do we have consensus for stage 2? I have audio problems, but I'm interpreting this as silence.
+LEO: Yeah, so I just want to make sure: I am still requesting stage 2. In case it advances to stage 2, I will ask for reviewers for stage 3 entrance. I want to make sure, because there were a few concerns raised here: are those objections for stage 2? Does anyone object? Can we move on? Let me be clear: do we have consensus for stage 2? I have audio problems, but I'm interpreting this as silence.

-YSV: Yes. We are having silence unless anyone has been having trouble unmuting there has been silenced for the last half a minute to minute or so. It does sound an awful lot like consensus for stage 2. Does anyone object? All right, LEO. It sounds like you have consensus on stage 2 for this.
+YSV: Yes, we are having silence. Unless anyone has been having trouble unmuting, there has been silence for the last half a minute to a minute or so. It does sound an awful lot like consensus for stage 2. Does anyone object? All right, LEO, it sounds like you have consensus on stage 2 for this.

-LEO: Thank you.
+LEO: Thank you.

JHD: I'll be happy to be a reviewer for it.

-LEO: Thank you, JHD and RGN
+LEO: Thank you, JHD and RGN.

JHD: On IRC BSH also volunteered, appreciate that.

-LEO: are we capturing the meeting notes so I don't have the whole thing here. Great. Is this all for your topic? Laughs? Yeah, do you want to discuss more about rounds? I don't think so. We're done for the day. I appreciate that.
+LEO: Are we capturing the meeting notes? I don't have the whole thing here. Great.
Is this all for your topic? [Laughter] Yeah, do you want to discuss more about Realms? I don't think so. We're done for the day. I appreciate that.

### Conclusion/Resolution

@@ -740,61 +738,61 @@ YSV: Actually, we're not done for the day, because we decided to go another 30 minutes if everyone's still cool with that. You have two more topics to cover. If that's cool, then on the agenda, for the last two topics, we have our incubation call chartering from SYG for five minutes, and then the overflow item of resizable ArrayBuffer and growable SharedArrayBuffer. SYG, do you want to take the stage?

-SYG: yes, let's do incubation first because that will be quicker and can just get that out of the way. So currently we actually Quite a bit of a backlog too. I guess the changed cadence so shorter amounts of time in between meetings and just general scheduling conflicts with finding time for the incubator calls. Let me bring up the current charter between the last meeting and this one. One second, please. Is that visible on the screen?
+SYG: Yes, let's do incubation first, because that will be quicker and we can just get that out of the way. So currently we actually have quite a bit of a backlog, due to, I guess, the changed cadence, so shorter amounts of time in between meetings, and just general scheduling conflicts with finding time for the incubator calls. Let me bring up the current charter between the last meeting and this one. One second, please. Is that visible on the screen?

YSV: It is visible for me. Is anyone not seeing the site that SYG's presenting? The issue, rather.

-SYG: Okay, something everybody can see it. So since last meeting, we only had two for lazy Imports and the regex set notation which leaves three overflow the resizable buffers, module fragments, and the pipeline and I think there is one proposal that was called out this meeting that they would like to get on the incubator calls.
Was that the copy methods on array. prototype, I believe, Is RRD still in the call?
+SYG: Okay, it sounds like everybody can see it. So since last meeting, we only had two calls, for lazy imports and the RegExp set notation, which leaves three overflow items: the resizable buffers, module fragments, and the pipeline. And I think there is one proposal that was called out this meeting that would like to get on the incubator calls. Was that the copy methods on Array.prototype, I believe? Is RRD still on the call?

-RBN: I can speak for him. Yeah, we would like to add this to the okay.
+RBN: I can speak for him. Yeah, we would like to add this to the charter, okay.

-???: Great. Yeah. Yes.
+???: Great. Yeah. Yes.

-RRD: Sorry. I was having trouble finding button. Yes would be super interested.
+RRD: Sorry, I was having trouble finding the button. Yes, we would be super interested.

-SYG: Okay, sounds good. But given the backlog. I think that is the only one I am comfortable adding. I imagine we'll get to at most three calls before the next meeting because now we're at about a month in between each meeting, which is good, I guess for the faster Cadence. But it means less time to actually get these calls in. So without risking building up an even bigger backlog. I would propose we add just that one as we work through the backlog. Any thoughts or any other champions who would like to get in on the backlog?
+SYG: Okay, sounds good. But given the backlog, I think that is the only one I am comfortable adding. I imagine we'll get to at most three calls before the next meeting, because now we're at about a month in between each meeting, which is good, I guess, for the faster cadence, but it means less time to actually get these calls in. So, without risking building up an even bigger backlog, I would propose we add just that one as we work through the backlog. Any thoughts, or any other champions who would like to get in on the backlog?

-YSV: Sounds reasonable to me.
+YSV: Sounds reasonable to me.
-SYG: all right, so then let the notes reflect that the only new proposal added to the incubation call charter between this meeting and the next one in May, at the end of May is the array copy methods on array.prototype. Yes, all right. Then let me stop sharing and then I'll jump in to my next topic, but I guess before I do that. So to kind of catch folks back up. Well, actually not before even that I guess we had asked somebody to one of the chairs take a screenshot of the queue. I think there were a few items that were left on the Queue that were not related to the where should construct for the resolution of the last meeting for the last proposal. We can have skis we don't have anything recorded in the notes.
+SYG: All right, so then let the notes reflect that the only new proposal added to the incubation call charter between this meeting and the next one, at the end of May, is the copy methods on Array.prototype. Yes, all right. Then let me stop sharing and I'll jump in to my next topic. But I guess before I do that, to kind of catch folks back up. Well, actually, not even before that: I guess we had asked one of the chairs to take a screenshot of the queue. I think there were a few items left on the queue that were not related to the "where should constructors live" question. As for the resolution of the last proposal, we don't have anything recorded in the notes.

JRL: Interrupting, was Symbols as WeakMap keys promoted to stage 2? We didn't record it in the notes.

-YSV: it was promoted to stage 2 and three reviewers signed up. That was it think DE, JHD, and BSH, I don't know his last name. Thank you.
+YSV: It was promoted to stage 2, and three reviewers signed up. That was, I think, DE, JHD, and BSH, I don't know his last name. Thank you.

## Resizable ArrayBuffer Overflow

AKI: They are available on the schedule under time box overrun.
There's the thing PHE was talking about, an explanation of reservations with global constructors, and next up was Yulia with a reply; you can see this on the schedule.

-YSV: I believe I believe those two topics covered while we were discussing unless PHE wants to jump in with further discussions about models position here, but I think we did we did cover that topic as well as
+YSV: I believe those two topics were covered while we were discussing, unless PHE wants to jump in with further discussion about Moddable's position here, but I think we did cover that topic as well.

SYG: If it's okay with PHE, I would like to save the majority of the time to return to the global constructor issue and spend more time there, but I want to drain the rest of the queue first, because there were some topics that are not directly related to the "where should constructors live" question. The first topic after that is from DE, titled "I was concerned about the WASM integration in the past, and I'm very happy with the proposal here". DE, do you want to speak to this?

DE: Sure. So much of SYG's update was about the semantics of the WASM integration. For example, the rounding, the host hook, and all of that looks very well done to me, and I'm really happy that this proposal has gone to the WASM CG, that it's receiving wide review, and that there's a concrete, iterated-on API for exposing this to WASM memory. And yeah, so previous concerns withdrawn.

-SYG: Good to hear. Thank you.
+SYG: Good to hear. Thank you.

-YSV: And the next topic we have on the queue is from MM, which was Global / compartment.
+YSV: And the next topic we have on the queue is from MM, which was Global / compartment.

MM: Yes, XS is a multi-compartment system; SES and the compartment proposal introduce compartments, and each compartment has its own global. So the memory overhead of making new globals is more than what PHE had mentioned in passing, but I want to emphasize it: there is a new global property per new global, whereas for an existing global to which you add a property, using it as a namespace, the existing global would be on the shared primordials, so it would just be per realm, not per compartment. That said, for the particular thing that we're talking about, I don't mind the new globals so much; I see that as the minor issue. Rather, the general issue of precedent is the important issue here.

-SYG: all right, that would also be a good time to segue back into the discussion about the global Constructor versus the namespace constructors. So chatted a bit with Peter and they're so one I would very much I would like to take them up on their offer to do some exploration there on implementation strategies on the XS side to see if they can recoup some of the the memory cost incurred by extra globals see if anything that's already being done with the freezing strategy that they're doing for regular object properties can be extended to global properties as well. And for that reason a delay here is certainly reasonable and I would like how what I would like to get a signal my next meeting.
+SYG: All right, that would also be a good time to segue back into the discussion about the global constructors versus the namespace constructors. So I chatted a bit with PHE, and one, I would very much like to take them up on their offer to do some exploration on implementation strategies on the XS side, to see if they can recoup some of the memory cost incurred by extra globals; to see if anything that's already being done with the freezing strategy for regular object properties can be extended to global properties as well. For that reason, a delay here is certainly reasonable, and what I would like is to get a signal by the next meeting.

-MM: The freezing does not help with regard to the fact that they're global.
+MM: The freezing does not help with regard to the fact that they're global.

-SYG: Why not? 
+SYG: Why not?

-MM: Because you create new compartments with new globals at runtime. Yes start to populate them with all the globals so they're not in wrong. 
+MM: Because you create new compartments with new globals at runtime. XS starts to populate them with all the globals, so they're not in ROM.

-SYG: Where do those new so for the so when you create a new compartment with a new Global and you need to put in and a namespace object from somewhere that namespace object in the contents of that object that is in realm and you can just kind of plop that in and that's 
+SYG: Where do those new globals come from? So when you create a new compartment with a new global, you need to put in a namespace object from somewhere; that namespace object and the contents of that object are per realm, and you can just kind of plop that in, and that's

-MM: Yeah, okay. 
+MM: Yeah, okay.

-SYG: so so why can that strategy with the copy-on-write thing not work for the global object itself 
+SYG: So why can that strategy, with the copy-on-write thing, not work for the global object itself?

MM: wasn't talking about copy-on-write a copy a copy on write could deal with this requires more bookkeeping under the hood. That's a separate matter.

-PHE: I think my microphone wasn't working earlier. So yeah, I agree marks representing this very well here. I don't think specifically the fries mechanism helps us with globals, but there may be a modification or or clever use of our aliasing mechanism that would allow us not to allocate memory for globals that are not that aren't used over. There are fully frozen and so we can look into that. I'm happy to do that based on conversations with SYG. To see because that would certainly if we can be successful there or can find out many directions with work that that would certainly help alleviate some of the concerns with the growth of globals great. 
+PHE: I think my microphone wasn't working earlier. So yeah, I agree Mark is representing this very well here. I don't think the freeze mechanism specifically helps us with globals, but there may be a modification or clever use of our aliasing mechanism that would allow us not to allocate memory for globals that aren't used where they are fully frozen, and so we can look into that. I'm happy to do that based on conversations with SYG, because if we can be successful there, or can find some directions for that work, that would certainly help alleviate some of the concerns with the growth of globals.

SYG: Yeah, that would be that would be excellent. It it I would like so in terms of process here. I would like to... whether we add new globals as a matter of precedent is something we should figure out. But it certainly affects more than this proposal. You know Realms is one that's planning to ask for stage 3 soon that adds a new constructor and the global Constructor if it becomes a matter of implementation in possibility that we keep adding new globals we should deal with that as a separate conversation and hopefully resolve that soon rather than kind of incidentally blocking on whatever the next proposal is that adds a new Global Constructor that on this I believe that would unnecessarily slow down velocity. So I'm happy to wait a meeting here and let's please yeah, try to have a separate discussion about what should we do about globals at the same time for

@@ -804,9 +802,9 @@ SYG: so with that said as we were thinking about this there may also be a way to

MM: even though I titled my question something else. I'll take the opportunity to say I really like this. It does reduce the global pressure of course, but it also I think just leaves the entire API surface of the language as a whole feeling much smaller.
It's less cognitive overhead for somebody learning the language to just see that this is an optional characteristic of ArrayBuffers. So how the language feels to people who are not familiar with the history but coming in new is a good consideration and I think this makes the language feel smaller than the other way.

-DE: Two comments. First kind of superficial nit, I think an options bag would be more clear (both for people reading the code and for if we ever find we need to add more things later) than positional argument. But that's it. I don't know if that would affect the compatibility issue at all. But that's the only thing I would change here. I wanted to ask how this affects the security concerns that you raised initially. Is it the idea that because there's a separate hidden class that we don't expect the security risk of changing those existing paths to occur, or that on implementation we found that things didn't factor as cleanly as we thought. Or how are things going with that whole security argument? 
+DE: Two comments. First, a kind of superficial nit: I think an options bag would be more clear (both for people reading the code and in case we ever find we need to add more things later) than a positional argument. I don't know if that would affect the compatibility issue at all, but that's the only thing I would change here. Second, I wanted to ask how this affects the security concerns that you raised initially. Is it the idea that, because there's a separate hidden class, we don't expect the security risk of changing those existing paths to occur, or that on implementation we found that things didn't factor as cleanly as we thought? Or how are things going with that whole security argument?

-SYG: a little of both. Originally I thought that having an array that's resizable buffers themselves have a separate hidden class would also be important for security. We learned that to be incorrect upon implementation. That is not important. 
What is important is the is the TypeArrayhidden class, whether they would be backed by resizable buffers or not. So that's one lesson that we learned. And so the security concern was for how easy it is to audit or how easy it is to avoid having to audit existing battle-hardened paths, and I believe that to be entirely handled by having separate hidden classes for TypeArray. 
+SYG: A little of both. Originally I thought that having resizable buffers themselves have a separate hidden class would also be important for security. We learned that to be incorrect upon implementation; that is not important. What is important is the TypedArray hidden class, whether it would be backed by resizable buffers or not. So that's one lesson that we learned. And the security concern was about how easy it is to audit, or how easy it is to avoid having to audit, existing battle-hardened paths, and I believe that to be entirely handled by having separate hidden classes for TypedArray.

DE: That makes sense because the battle-hardened part isn't the entry point that we see at the JavaScript level, but instead these routines where you can add this in class check at the start of it. Is that what you're saying?

@@ -820,7 +818,7 @@ SYG: Cool. I do want to reiterate that I find the point PHE raised about the con

YSV: Alright, so on the queue, there's two items. The first one is mine. I'm taking off my chair hat at this point and speaking as a representative from Mozilla. Thank you very much for addressing our concerns around implementation interoperability with regards to the stuff that I've concretely reviewed so far. I'm happy with that. I would like to take a more in-depth look at these changes and think about it a bit but I don't have any direct concerns about the naming change immediately. So it looks good to me.

-MM: Just support and especially like it with the new API. 
+MM: Just support and especially like it with the new API.
[queue is empty] @@ -830,9 +828,10 @@ YSV: Before we close out this topic entirely I want to make sure that we don't l SYG: To me it sounds like it would be a longer term - it's certainly not a proposal, so it would fall under the longer term open discussions. I will make a note to add such an agenda item to the next agenda. I don't know if I would describe myself as a champion. Probably, you know, I as a representative of the web platform prefer to be able to add more globals but yes, I will add a new topic to the longer term discussion for next meeting to hash out this question. -YSV: It might also make sense to do it asynchronously or something. So I'm just making sure that we don't forget about that as it was something that was brought up as something to continue discussing and whatever direction people want to take with that. +YSV: It might also make sense to do it asynchronously or something. So I'm just making sure that we don't forget about that as it was something that was brought up as something to continue discussing and whatever direction people want to take with that. ### Conclusion/Resolution + - Stage 2.95 - Remaining todos: - removable of global constructors diff --git a/meetings/2021-05/may-25.md b/meetings/2021-05/may-25.md index 1fe12108..8d44e527 100644 --- a/meetings/2021-05/may-25.md +++ b/meetings/2021-05/may-25.md @@ -1,7 +1,8 @@ # 25 May, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -34,21 +35,22 @@ AKI: Next comes tools, many of you know that the freenode IRC network is Under New Management. The instability has led to an exodus of all sorts of projects from Freenode. 
The inclusion working group, which is an informal group within TC39 who have been strategizing ways to make the committee more accessible to delegates, both new & old have already been researching real-time chat for over six months.

## Inclusion WG Update (Chat Platform)

-JWS: So yeah, hello everybody, I'm Jason Williams. I’m a TC39 delegate, working at Bloomberg, and I've been involved in the inclusion working group. MPC usually chairs these meetings but he cannot make plenary this time around. So it's myself and USA is also around and he'll be helping. Systems also. So yeah, we are. 
+
+JWS: So yeah, hello everybody, I'm Jason Williams. I’m a TC39 delegate, working at Bloomberg, and I've been involved in the inclusion working group. MPC usually chairs these meetings but he cannot make plenary this time around, so it's myself, and USA is also around and he'll be helping out as well.

JWS: Yeah. So even though the title of this is conclusion, working group of day, maybe just going to be focusing on the chat. Platforms. As Aki said, you know, due to recent questions and around the entry freenode and Libera, Etc. Yeah, for any future updates on training, I'm sure Mark will be back for those.

-JWS: So to give you a recap. Yeah, we were looking at this for quite a few months mainly starting in August. And what I'm going to do now is just give a really quick recap because there's quite a few things that we've gone into detail on and I'm going to, we don't have time to go into detail on every single thing we've gone through. So, this is just a sort of bring you up to speed on what we've been doing. 
+JWS: So to give you a recap: we were looking at this for quite a few months, mainly starting in August. What I'm going to do now is give a really quick recap, because there are quite a few things we've gone into detail on and we don't have time to go into detail on every single thing we've gone through. 
So, this is just a sort of bring you up to speed on what we've been doing. JWS: Mainly started in August, 2020. We had an issue on the reflector around IRC and inclusion and I think there's been talks around this for quite a while. And I think the main catalyst of this was what I noticed anyway was structural racism discussion was happening. One of the channels that I think some delegates were unaware of and the conversation Love the about. So I think we felt maybe in 2021, we can do it better than this. Talk when underway in this red, there was quite a bit of people giving ideas and what sort of requirements they would like. And then eventually this fell under the responsibility of the inclusion working group. So then the question was asked, do we want to move away from IRC? Do you want to look at and explore other options? This is quite an important question because if we're happy with what we've got, it doesn't make sense to spend time investigating different chat platforms if we're just going to stick with what we have. So the first question of course needs to be, are we happy with what we have now before we start doing any sort of further investigation? I was actually looking for some minutes on this earlier before, but sadly, I can't find them. There was myself, MBS, All's(?), I think we had MPC, DE, and USA. I think there's a few of us and it was pretty much a yes. We want to look at some other options. We want to start exploring what other platforms have to offer. We also had feedback as well, not just through that thread that I showed you, but also, I think miles did a bit of outreach on Twitter and various other places and I mean, some of the main things were assistants so needing to sort of about. so is something that a lot of a lot of delegates on willing to do, there is IRC Cloud which I think most of us use. 
But then I often had feedback people not wanting to pay the subscription for that or maybe their member company doesn't offer them money to pay for the subscription for that. So that's something that needs to be looked at there was quick sign on. Boarding we've had delegates, who haven't bothered to sign up to IRC because the not knowing how to set up nick serve and their servers. And there's also public logging as well, which is something that we've not set up and it's something we've not put the time into looking at, there's various others, I'm just sort of brushing over here but you kind of get the idea in the same month we looked at what do we want to use. -JWS: I can tell you now that we spent a long time on this, there is no single platform that pleases everyone. We had several delegates in this group. I think those around, maybe, seven, eight, or nine of us, we didn't come to an agreement on a platform. And I don't think, I don't think we ever will. Instead, we needed to flip the question around to what sort of requirements, do we want for a platform? What sort of things are important to us? Only then. Then we can sort of take some of those things and see what ticks the boxes. So, you know, we had assistance, which is quite important. moderation tools. Yeah. It can I use it in a web browser? Can I use it on my phone? Can I onboard other delegates in an easy way without having to spend a day with them without going through chanserv, nickserv, et cetera, and can we do breakout groups because that's people that's what some of us like to do. And the list goes on and so this something that we've discussing throughout August and September and week, some of you may have seen the spreadsheet could put together as well. This was at leave, this was shed on the inclusion for it. Also, but basically people came along and started adding some features that platforms did. 
And yeah we basically started to build a bit of a sort of catalog, which platforms can do what and where this is still available. Now Actually on threaded steel. I think someone was editing it not not too long ago but basically we had some delegates come in and offer sort of advice on things that were important to them and what they wanted to see. We had to bring this down to a few platforms. So it was mainly these for Discord, Slack, Matrix, and Zulip. Discord was a non-starter, one is because it's blocked in China, the other is that it goes against their terms of service to export logs, and they weren't really being responsive with regarding that slack was quite similar. They also remove their looks after a few months, if you're not on the page here. So it gets quite limited. Their this kind of brought us down to Matrix and Zulip. We went with Matrix as a trial first because we had more delegates using that and more delegates knew how to get up and running. I kind of look at these like proposals: Matrix had a couple of champions and although Zulip technically offered quite a few things we still weren't weren't sure about we had lot more delegates, championing Matrix and you're willing to sort of help us get to the up with that. So that's what we went with in October and we start the trial. Initially, anyone who joined the inclusion working group had These are the blocks that of Miriam said after we started running out invitations, we started soliciting feedback. That's pretty much been the case of until now. there's not been anybody coming to me personally, saying, this is not a I don't want to go down or bad idea. I'm not sure that Mark and the others, but what I did have was people saying things like, you know yeah there's a couple of connection issues or I don't know how to join that room or where are the logs, but there's they've mainly been trivial things that we can fix or things that we've been our control because we had things like this coming up. 
We decided to let the trail wall and let it keep going. We've had delegates trickle in and out. And that's why we've basically extended the trial and you can still use Matrix right now, we haven't made a decision on this but we decided to keep it going. For those who haven't seen it yet this is what Matrix looks like. This is me using Element in the browser. We are trialling Spaces at the moment. So if you've come from Discord, probably seeing something similar, you get a server with a group of channels. Each channel has some public looks, and we have room to the very, they should be a one-to-one match with what we have on IRC actually, so all of you can see that we have an IRC also public locks can be accessed by anyone. and yeah, we've been trying this for quite some time, pretty much since October. And as a working group, we're happy to put forward Matrix as a recommendation from what we've been seeing. I'm going to show you a quick demo If I have time take few seconds but just before I do that, basically what you were saying about Libera. It's yeah, it's been a situation with freenode. I'm going to go into that because we don't have time and I'm sure most of you sort of know what's happening there anyway, but I can't talk for TC39 regarding this but as far as the inclusion working group is concerned the situation which we know doesn't really change anything for us. We were talking about IRC back in August and most of the issues that came up were about IRC the protocol, not freenode. freenode. The network or not, any of them. It's on it. So this even if there was a move to Libera, it wouldn't change anything in regards to what people have bought up and because of this situation as far as this working group is concerned, it doesn't really change anything. So, I just want to sort of touch on that. +JWS: I can tell you now that we spent a long time on this, there is no single platform that pleases everyone. We had several delegates in this group. 
I think there were around maybe seven, eight, or nine of us, and we didn't come to an agreement on a platform. I don't think we ever will. Instead, we needed to flip the question around: what sort of requirements do we want from a platform? What sort of things are important to us? Only then can we take some of those requirements and see what ticks the boxes. So, you know, we had accessibility, which is quite important; moderation tools; can I use it in a web browser; can I use it on my phone; can I onboard other delegates in an easy way without having to spend a day with them going through chanserv, nickserv, et cetera; and can we do breakout groups, because that's what some of us like to do. And the list goes on. This is something we discussed throughout August and September, and some of you may have seen the spreadsheet we put together as well. This was shared on the inclusion thread too, and basically people came along and started adding the features that platforms offered. And yeah, we basically started to build a bit of a catalog of which platforms can do what, and it is still available now; actually, I think someone was editing it not too long ago. Basically we had some delegates come in and offer advice on things that were important to them and what they wanted to see. We had to bring this down to a few platforms, so it was mainly these four: Discord, Slack, Matrix, and Zulip. Discord was a non-starter: one reason is that it's blocked in China, the other is that it goes against their terms of service to export logs, and they weren't really being responsive regarding that. Slack was quite similar; they also remove your logs after a few months if you're not on the paid tier, so it gets quite limited. This kind of brought us down to Matrix and Zulip. We went with Matrix as a trial first because we had more delegates using it and more delegates knew how to get it up and running. I kind of look at these like proposals: Matrix had a couple of champions, and although Zulip technically offered quite a few things, we still weren't sure about it, and we had a lot more delegates championing Matrix who were willing to help us get set up with it. So that's what we went with, and in October we started the trial. Initially anyone who joined the inclusion working group had access, and after we started rolling out invitations we started soliciting feedback. That's pretty much been the case until now. There's not been anybody coming to me personally saying this is a bad idea or that they don't want to go down this route. I can't speak for Mark and the others, but what I did have was people saying things like: yeah, there's a couple of connection issues, or I don't know how to join that room, or where are the logs? But these have mainly been trivial things that we can fix, or things that were out of our control. Because we had things like this coming up, we decided to let the trial roll on and keep going. We've had delegates trickle in and out, and that's why we've basically extended the trial; you can still use Matrix right now. We haven't made a decision on this, but we decided to keep it going. For those who haven't seen it yet, this is what Matrix looks like. This is me using Element in the browser. We are trialling Spaces at the moment, so if you've come from Discord you're probably seeing something similar: you get a server with a group of channels. Each channel has public logs, and we have rooms that should be a one-to-one match with what we have on IRC, and the public logs can be accessed by anyone. And yeah, we've been trying this for quite some time, pretty much since October. And as a working group, we're happy to put forward Matrix as a recommendation based on what we've been seeing. I'm going to show you a quick demo if I have time; it'll only take a few seconds. But just before I do that, regarding what was said about Libera: yeah, there's been a situation with freenode. I'm not going to go into that because we don't have time, and I'm sure most of you know what's happening there anyway. I can't speak for TC39 regarding this, but as far as the inclusion working group is concerned, the situation doesn't really change anything for us. We were talking about IRC back in August, and most of the issues that came up were about IRC the protocol, not the freenode network or anything else about it. So even if there was a move to Libera, it wouldn't change anything in regards to what people have brought up, and because of that, as far as this working group is concerned, it doesn't really change anything. So, I just wanted to touch on that.

[switch to demo]

JWS: Okay, so basically this is this is how Matrix is working at the moment, probably very similar to IRC, you have your users on the right, which you can see. This is the TC39 delegates room yeah, we have they're they're all the different rooms now which are available so we have General people, you know, people have been chatting in the using this for quite a few months. So it shouldn't be. Any should be the new for most people on here Temporal? Zone of topic. We also have this and this has been used and yeah we are using spaces. We're trying this. That you can see all of the rooms, you can see the logs. Also hese go to the beginning and this is something that's just sorted for us. So we do have public logs set up and you have all the rooms and have a lot of people and you can do direct messaging. and yeah, and any questions you can come to the conclusion group. And raise anything here. This we're looking at this.
So feel free to ask any questions in this channel. Yeah, I believe that's it. -WH: How do I view the logs on one page? It shows me just one screen at a time. I'd like to be able to find things in the logs and I can't do that. +WH: How do I view the logs on one page? It shows me just one screen at a time. I'd like to be able to find things in the logs and I can't do that. JWS: USA might know the answer to that, I don't know if you can set it to load everything in a single page. @@ -68,15 +70,17 @@ AKI: there's also a pretty robust bot support. So, if we find that the Matrix lo AKI: Okay, so this gives me a great opportunity to intro our next coms tool which we may want to come back to this plenary. But I just want to mention. We have some minor downtime for TCQ, which is our typical discussion queuing tool. There was an unexpected outage and a very expected baby. Welcome to the world, baby Terlson. And now we're going to be using Alterna-Q for this meeting. So I'm just going to need you to use your imaginations for the next slide. -AKI: It is the opinion of the chair group that the inclusion group have done their homework. We can return for a conclusion later. +AKI: It is the opinion of the chair group that the inclusion group have done their homework. We can return for a conclusion later. ## Secretary's Report -IS: I have sent out over the TC39 file server, the Secretariat report for this meeting rather than presenting it here. So if anybody has any sort of questions you can ask me now or during the two days, or whatever. Very, very briefly what I can say is that everything is on track. There is nothing unusual. We had one deadline in the meantime, which was on May 10. And this was the so called “Opt-Out”. That was the deadline for any TC39 member company to speak against the RF status of the standards that are going to be approve on June 22nd by the ECMA GA. 
So these are the ECMA-262 and the ECMA-402 standards, if from the RF IPR point of view is there anything, what they don't want to see in the draft. That is according to the RF Ecma patent policy. So if anybody had any problems with that you should have told so by May 10, 2021. We have not received anything - like also in the past years - we have never received anything so far. Nevertheless, we have to "play" this sort of procedure, every year, in order to comply with the choreography, but, that is the policy. So from that point of view, we can go ahead and then to have the approval on June 22 by the Ecma General Assembly. During this time if somebody finds editorial mistakes or spelling errors, or something like that, we can correct those and actually some of them already came in, and we expect that maybe some others will also come in and even after the approval especially in ECMA-262 which is a very long standard. And also, with the new parts, so it always has something for editorial change. But from a substantive point of view, it has to be already very, very stable. So on June 22nd, this will hopefully be approved standards. And what will be approved is the HTML version of the standard. We also have a PDF version of the standard. You may remember the PDF version is not the master one, the master one is the HTML so you should look primarily at the HTML and then at some point in time, we are trying to synchronize from the quality point of view as soon as possible, if we can manage it for. that the PDF versions also look nice. And this is rather important because these two standards and also the other TC39 standards represent more than half of the downloads of all standards that are currently getting downloaded. So this is basically the only news that I have to share with you. Otherwise, I mean you can read the rest of the report and I tried to be as complete as possible about that in order that you have not too many questions regarding this. 
And with that, I would like to close now my reporting. And as I said, please read it and if you have questions ask me. So thank you. 

+IS: I have sent out, over the TC39 file server, the Secretariat report for this meeting rather than presenting it here, so if anybody has any sort of questions you can ask me now or during the two days, or whenever. Very briefly, what I can say is that everything is on track; there is nothing unusual. We had one deadline in the meantime, which was on May 10, and this was the so-called “Opt-Out”. That was the deadline for any TC39 member company to speak against the RF status of the standards that are going to be approved on June 22nd by the Ecma GA, that is, the ECMA-262 and ECMA-402 standards, if from the RF IPR point of view there is anything they don't want to see in the draft. That is according to the RF Ecma patent policy. So if anybody had any problems with that, they should have said so by May 10, 2021. We have not received anything - like also in the past years - we have never received anything so far. Nevertheless, we have to "play" this procedure every year in order to comply with the policy. So from that point of view, we can go ahead and have the approval on June 22 by the Ecma General Assembly. During this time, if somebody finds editorial mistakes or spelling errors, or something like that, we can correct those; some have actually already come in, and we expect that maybe others will also come in, even after the approval, especially in ECMA-262, which is a very long standard with new parts, so there is always something for editorial change. But from a substantive point of view, it has to be very stable already. So on June 22nd, these will hopefully be approved standards. And what will be approved is the HTML version of the standard. We also have a PDF version of the standard. You may remember the PDF version is not the master one; the master one is the HTML, so you should look primarily at the HTML. At some point in time we will try to synchronize the quality as soon as we can manage it, so that the PDF versions also look nice. And this is rather important, because these two standards, and also the other TC39 standards, represent more than half of the downloads of all standards that are currently getting downloaded. So this is basically the only news that I have to share with you. Otherwise, you can read the rest of the report; I tried to be as complete as possible so that you don't have too many questions regarding this. And with that, I would like to close my reporting. As I said, please read it and if you have questions, ask me. So thank you.

## ECMA262 status updates
+
KG: That's me. This will be quite short. So this is the May 2021 status update. There have been no major editorial changes since the previous meeting. This is in part because we were focused on landing class fields and in part because the last meeting was very recent. The major normative change was of course that we landed class features. That is to say, public and private instance and static class fields as well as private methods and accessors. The editors and the champions of the proposal did a few last rounds of editorial changes on the structure of that PR and landed it a couple of weeks ago.

-KG: One other topic that we wanted to bring up is that as part of our ongoing quest to move parts of annex B into the main specification, where it makes sense to do, we are going to be pulling in the legacy octal integer literal syntax. 
There are a couple of these weird octal literals and we have consensus for pulling those into main specification, but we wanted to say that specifically, we are going to be tagging these as “legacy” meaning we will have a note which says, basically, this is in the spec because we did it a long time ago, we regret that decision, but we're stuck with it now. We assume that everyone is okay with the addition of this note given the these productions have always been considered to be “legacy”, but we just wanted to give a heads up because when we asked previously about moving these into the main specification, we did not specifically mention that we were going to be tagging them with this legacy note. So if you have objections, of course, please bring it to our attention. But otherwise, we will assume that everyone is on board with marking those as Legacy. +KG: One other topic that we wanted to bring up is that as part of our ongoing quest to move parts of annex B into the main specification, where it makes sense to do, we are going to be pulling in the legacy octal integer literal syntax. There are a couple of these weird octal literals and we have consensus for pulling those into main specification, but we wanted to say that specifically, we are going to be tagging these as “legacy” meaning we will have a note which says, basically, this is in the spec because we did it a long time ago, we regret that decision, but we're stuck with it now. We assume that everyone is okay with the addition of this note given the these productions have always been considered to be “legacy”, but we just wanted to give a heads up because when we asked previously about moving these into the main specification, we did not specifically mention that we were going to be tagging them with this legacy note. So if you have objections, of course, please bring it to our attention. But otherwise, we will assume that everyone is on board with marking those as Legacy. KG: Right, upcoming work. 
Basically the same as it has been forever. I did again want to call out #545 here, which is still one of my higher priorities: making abstract operations have slightly more structured headers. This is only going to affect authors of spec text; the rendered spec text will look exactly the same. But when you are authoring abstract operations, they will look somewhat different. Don't worry about fixing up your PRs; I will be happy to do that for you. But do note that if you're reading the spec text, you can expect abstract operations to look a little bit different. The purpose of this is, of course, to make it easier to statically check the correctness of the specification, because it is an extremely long document that amounts to something like a thousand pages of code. The correctness procedure currently consists of a community member running their tool on it and me trying to read it very carefully; having more structured information that a computer can check will make both of our lives much easier. Other than that, it's basically the same editorial work that we have been planning to do forever. That's it. Thanks.

USA: I'm Ujjwal, and welcome to the ECMA-402 status update. There has been a bunch of not-so-interesting editorial work as well, but it all comes down to three normative PRs that I would love to hear your thoughts on. First up, we have #571 [TODO: link]. This comes from Long Ho, who maintains FormatJS, and it is motivated by something they needed there. The tl;dr on this one is that when we are selecting the best locale for a certain set of options, we need to take the hour cycle into account, right? So whether the user prefers 24 hours, or 12 hours, or, you know, something else. This information was just not available before; now it is. So what this normative PR does is add more information, which implementation-defined behavior can then access. Next up, we have #572, by Shane Carr; it fixes spec bugs in NumberFormat. When unified number format landed in 2020, two spec bugs were introduced. These bugs were uncovered by Shane while working on NumberFormat, and this PR restores the spec behavior to match reality. That's also important, especially if you're rounding all these numbers and displaying them. Next up, we have #573, which fixes the behavior around time zone names. This was done by Frank. This was also uncovered while working on a new proposal, the one extending the time zone name options. It also fixes an older bug that was filed by my colleague. So this fixes the way time zone names are handled and allows the new proposal we're working on to actually utilize that and allow more expressiveness in time zones. Sorry, I don't have your names. So that's pretty much it. Those are the three pull requests; we reviewed them in TG2, but I'd love to hear your thoughts. Okay, I guess that points to consensus.

### Conclusion/resolution

- Consensus on these PRs.

## ECMA404 status updates

CM: Uh, it's still there. I looked.

## TC53 liaison status updates

PHE: We should have just finished our opt-out period from our first real meeting this week, so we might actually have a real standard after the next General Assembly meeting in June. So, we shall see. And those of you who want to, you know, influence the future of embedded JavaScript: we're starting conversations about what's in the next one, so join us at our monthly meetings.

AKI: That's thrilling. Congratulations! Or pre-emptive congratulations.

PHE: Yeah, we're not celebrating yet. Hopefully. Thank you.

AKI: Well, that's the end of the housekeeping agenda items. And I don't have a window with the agenda open right now; I'm sorry, I know I wrote it. If there are no Code of Conduct updates, the next item would be Shu. I believe we do not have any updates from the Code of Conduct committee; not a lot has happened in the last month.

JHD: I just want to add that the test262.report website was offline for a long time; then somebody pinged Bocoup and it went back up for like a day, and now it's offline again. It seems like a really useful resource that this committee should be invested in maintaining, or ensuring is maintained. I just wanted to get that on the record: it would be great if we could figure out how to help ensure that. That's all.

YSV: Yes, I can say something to that. JHD, I spoke with Bocoup and they got it online. The problem appears to be with JSC: a JSC download is infinitely looping and then crashing. So they have somehow been able to get it online in the mornings, and then it crashes in the evening. They are still investigating, and the issue is open; I'll post it in Matrix so that people can follow along.

AKI: Does everybody have a link to the Alterna-Q? Does everyone have access to that? Okay, nobody said no, so I'm going to go with yes. If you need access to it, message me in the 8x8 chat, or in Matrix, or on freenode, or on Libera; I'm everywhere. So if you need a link, let me know. Leo, what's up?

LEO: Yeah, I've just been talking to Rick, and we are considering the options and all the implications of doing something that also provides test results in general. I think there are even larger plans for the web platform, but also including Test262. We can talk about this in more detail, and we definitely welcome discussing it with more people; it should be something public eventually. We don't have anything concrete yet; we're just in talks. Rick is not present here; he's actually drawing up some plans for this right now, and working out all the implications, what it means for us to do this as well.

AKI: Excellent.
## SharedArrayBuffer `.length`

Presenter: Shu-yu Guo (SYG)

- [PR](https://github.com/tc39/ecma262/pull/2393)

SYG: The PR is pretty simple. SharedArrayBuffer has one parameter: the length. It only has that one parameter, but for some reason the spec marks it as optional, in contrast to ArrayBuffer, whose parameter is not optional. This basically affects the `length` property on the constructor itself, and all the web engines that I checked, including XS, already report a `length` of 1 for the SAB constructor, meaning that the parameter does not look optional. This just changes the parameter to not be optional, and I think Gus pointed out there's another editorial fixup to remove the word "optional" here, but that's about it. Any concerns here? I would be extremely surprised. Alright, I'll take that as consensus. Thank you very much.

AKI: MM just popped onto the queue to ask why it was optional.

SYG: I have no idea, Mark. I imagine it's from when we originally added it.

LEO: I'm sorry to jump in, but I believe, if I recall this correctly, we should go all the way back to that meeting in Munich, when I tried to address consistency for the ArrayBuffer constructor. I don't remember what year, 2016? Yeah, yeah.
So I believe there is a PR that dates to 2016 talking about consistency of these constructors for optional values. I remember we got some implementation feedback, and I think we tried to just match for consistency. And that was it. We shouldn't need many notes, just a PR to ECMA-262 about this.

### Conclusion/resolution

- Consensus

## RegExp Match Indices

AKI: Thank you. Next up, we have Ron Buckton to talk about RegExp indices.

RBN: I'm here today to talk about the RegExp match indices proposal. We've discussed this a number of times in the past; it's currently sitting at stage 3. These are the same motivation slides I've presented for a while: the RegExp match indices proposal provides position information about captured groups, beyond just the position of the entire match. This can be useful for things like improving error messages, giving parsing tools accurate positions, and syntax-highlighting tools such as VS Code. It's one step towards supporting more of the interesting and useful features of the various regular expression grammars used by tools like VS Code, TextMate, et cetera, which currently depend on regex engines like Oniguruma. The alternative is to manually capture leading groups, which is extremely expensive and quite difficult to get correct with regular expressions.

RBN: So, some historical information. We adopted stage 1 for this proposal on May 24th, 2018, originally adding an `offsets` property; we had some discussions about performance concerns at the time. Stage 2 adoption was on July 25th, 2018, where we discussed some different approaches to addressing the performance issues, and alternative solutions. We reached stage 3 on July 24th, 2019, determining that both the callback and options-object approaches we had considered during stage 2 adoption were subclassing hazards. We renamed `offsets` to `indices` to align with the rest of the terminology on regular expressions, namely `lastIndex`, `match.index`, et cetera. We had some early performance estimations in V8 suggesting that the performance overhead might be negligible, and with consensus at the time on a simpler API in V8 and JSC, we advanced to stage 3 with that. Since that time, we have had several updates as the implementers built this into their various engines to determine the outcome of those possible performance issues. In December 2019, they shared updates from their implementations, at which time we concluded not to make any changes. In November 2020, we had updates from JSC sharing their performance concerns and mitigation steps, and we decided that the implementers and champion would meet offline to discuss a remediation strategy. The consensus on January 25th, 2021, for stage 3, was the adoption of a `d` flag to opt into the `indices` property on the match result. As a result, the proposal is currently at stage 3, and we are currently meeting all stage 4 criteria. We have tests for both the early and the current versions of the feature in Test262, which have been merged. There is a PR against ECMA-262, which has been approved. There are implementations: it's shipping in V8 as of 9.0, which is in Node.js v16 and Chrome 90; it's shipping in JavaScriptCore, at least in Safari Technology Preview (I don't have a Mac to test things out, so I'm not sure whether it's in a public, non-preview branch yet); and it's also shipping in SpiderMonkey as of Firefox 88. At this time, I am seeking consensus for stage 4, and ask for any objections.

AKI: Shu has something to say.

SYG: Just expressing support for stage 4.

WH: I concur.

AKI: Great, that's consensus.
This is the first one I've had that as Champion. I have officially reached stage 4 I'm extremely excited about that. Thank you. +RBN: And I would like to say, I've been a member of this committee for far, number of years, far-flung number of years. Now have participated in a number of various proposals and discussions. Have a number of proposed my own proposals of my own that are still on track. This is the first one I've had that as Champion. I have officially reached stage 4 I'm extremely excited about that. Thank you. AKI: Oh That's great. -RBN: And that is the conclusion of my presentation. Thank you so much for everyone's time that they put into helping put the move this forward and expect more on regular expressions from me in the future. I've been working on something a comprehensive list of things that I have been working to bring up in a future meeting. Cool. I love, I love All right. +RBN: And that is the conclusion of my presentation. Thank you so much for everyone's time that they put into helping put the move this forward and expect more on regular expressions from me in the future. I've been working on something a comprehensive list of things that I have been working to bring up in a future meeting. Cool. I love, I love All right. -AKI: Next up with Top Level Await for Stage Four, Yulia. +AKI: Next up with Top Level Await for Stage Four, Yulia. ### Conclusion/Resolution + Stage 4 ## Top Level Await + Presenter: Yulia Startsev (YSV) - [proposal](https://github.com/tc39/ecma262/pull/2408) -[slides](https://docs.google.com/presentation/d/1EMtuhxtr2kG9yjjS9cCguvG5u7ksvQdvkICBfEfaQFo/edit#slide=id.p) -YSV: Thank you. Okay, so, hi everybody. My name is Yulia Startsev. If I am the new miles. No, I'm kidding. I'm, I'm bringing a top-level. Oh, wait for stage for taking it across line. Just sort of finishing things up for four miles, who is the original Champion, we are now co-champions on this proposal. proposal. 
The second line is his Nature, the wait, is over, but we'll see about where we are with this proposal in a second. So, really quick refresher for everyone, what top-level away does, is it enables modules to act as big asynchronous functions with top-level await the modules can await a given resource. For example, if you want to do a fetch or an import at the top level, you'll be able to weight it as though it's a synchronous called but it will In fact be asynchronous, requiring a tick And Etc. the proposal went to stage three in think June 2019, I didn't prepare the whole historical list to that. We just saw from Ron but we can go through that if that's helpful But you did see an update to this to this proposal last meeting where we know the meeting before. Last, where we discussed a change to post order. semantics around synchronous modules that are loading in a asynchronous child, in which case, the original specification allowed it so the sync modules would be reordered. We changed that in the previous meeting so that they're always imposed order. So that's that's just a reminder of happened last meeting. +YSV: Thank you. Okay, so, hi everybody. My name is Yulia Startsev. If I am the new miles. No, I'm kidding. I'm, I'm bringing a top-level. Oh, wait for stage for taking it across line. Just sort of finishing things up for four miles, who is the original Champion, we are now co-champions on this proposal. proposal. The second line is his Nature, the wait, is over, but we'll see about where we are with this proposal in a second. So, really quick refresher for everyone, what top-level away does, is it enables modules to act as big asynchronous functions with top-level await the modules can await a given resource. For example, if you want to do a fetch or an import at the top level, you'll be able to weight it as though it's a synchronous called but it will In fact be asynchronous, requiring a tick And Etc. 
the proposal went to stage three in think June 2019, I didn't prepare the whole historical list to that. We just saw from Ron but we can go through that if that's helpful But you did see an update to this to this proposal last meeting where we know the meeting before. Last, where we discussed a change to post order. semantics around synchronous modules that are loading in a asynchronous child, in which case, the original specification allowed it so the sync modules would be reordered. We changed that in the previous meeting so that they're always imposed order. So that's that's just a reminder of happened last meeting. YSV: Prior to this meeting that GB and I have been working on editorial changes primarily. So for example -YSV: So what I'm showing you now is one of the editorial changes that happened. Specifically I added a significant section of prose. describing what happens when there is an error in an asynchronous module graph and how it fails. This is non-normative text. this is just illustrative prose to help people understand what's expected to happen when a given module fails, how that impacts the rest of the asynchronous graph. The second topic is a spec, readability PR. so this was in response to an issue open by Keith Miller(KM) in which the specification text around the state change from links to evaluating to evaluate. It wasn't entirely clear especially in association with the async event Boolean which technically acts more like a counter. So what GB did was, he wrote this PR, which re-introduces the concept of a state, a private state to the specification into implementations if they choose to you to implement in this way, a stage called evaluating async to make it clear that a given module is being evaluated async And also Aim. So it went back and forth on the this issue is still being called queued evaluation. We went back to async evaluation in the end. 
So the summary is we're replacing all evaluated State checks to check for evaluated or evaluating async. We're replacing evaluating to evaluated for the transition of async executions with evaluating to evaluating async renaming racing to async evaluation. Again we went back and forth on this a couple of times. and we are appending the transition of status from evaluating async to evaluating to all the sites, where we previously set async evaluating to false. We never transitioned what used to be the async evaluating field, which is now the async evaluation field back to false. It's always the left as true once the evaluation has completed. -So this is also an extensive editorial PR, which we hope makes the Spec more readable and the intention of the specification, more clear. Please check it out. If you've got any concerns. +YSV: So what I'm showing you now is one of the editorial changes that happened. Specifically I added a significant section of prose. describing what happens when there is an error in an asynchronous module graph and how it fails. This is non-normative text. this is just illustrative prose to help people understand what's expected to happen when a given module fails, how that impacts the rest of the asynchronous graph. The second topic is a spec, readability PR. so this was in response to an issue open by Keith Miller(KM) in which the specification text around the state change from links to evaluating to evaluate. It wasn't entirely clear especially in association with the async event Boolean which technically acts more like a counter. So what GB did was, he wrote this PR, which re-introduces the concept of a state, a private state to the specification into implementations if they choose to you to implement in this way, a stage called evaluating async to make it clear that a given module is being evaluated async And also Aim. So it went back and forth on the this issue is still being called queued evaluation. 
We went back to async evaluation in the end. So the summary is we're replacing all evaluated State checks to check for evaluated or evaluating async. We're replacing evaluating to evaluated for the transition of async executions with evaluating to evaluating async renaming racing to async evaluation. Again we went back and forth on this a couple of times. and we are appending the transition of status from evaluating async to evaluating to all the sites, where we previously set async evaluating to false. We never transitioned what used to be the async evaluating field, which is now the async evaluation field back to false. It's always the left as true once the evaluation has completed. +So this is also an extensive editorial PR, which we hope makes the Spec more readable and the intention of the specification, more clear. Please check it out. If you've got any concerns. -YSV: Okay. And, of course, since we're asking for stage four, I went through our open issues and there are still a couple, but want to specifically highlight the issue that was opened two days ago by hax. +YSV: Okay. And, of course, since we're asking for stage four, I went through our open issues and there are still a couple, but want to specifically highlight the issue that was opened two days ago by hax. Shows https://github.com/tc39/proposal-top-level-await/issues/182 -YSV: Now, this issue is bringing up the fact that the top level top-level await can be hidden in Child node and it will have impacts on the graph specifically he's showing this from the perspective of HTML in that if you have two modules being loaded in And one of them used to be sync and was doing Global State setting, but then it became async that this is that this will potentially cause bugs. Now, this is actually covered in the readme and specifically. There's a line in the readme that talks about Pollyfilling and that any polyfills that use top-level. 
any polyfills that want to work with top-level await will have to imported by the modules that depend on them. So this was something that's been discussed in committee, but this is being brought up again as a stage four. One thing to also highlight here is that the solution given here is to restrict the use of top-level await to scripts that include the async attribute. Now, the async attribute has its semantics that are completely distinct from top level await, but the more important point here is that the specific solution to solve this isn't something that will be within the scope of this community to specify. So if we want to look more deeply into this, we should definitely keep that in mind. So that's one open issue. +YSV: Now, this issue is bringing up the fact that the top level top-level await can be hidden in Child node and it will have impacts on the graph specifically he's showing this from the perspective of HTML in that if you have two modules being loaded in And one of them used to be sync and was doing Global State setting, but then it became async that this is that this will potentially cause bugs. Now, this is actually covered in the readme and specifically. There's a line in the readme that talks about Pollyfilling and that any polyfills that use top-level. any polyfills that want to work with top-level await will have to imported by the modules that depend on them. So this was something that's been discussed in committee, but this is being brought up again as a stage four. One thing to also highlight here is that the solution given here is to restrict the use of top-level await to scripts that include the async attribute. Now, the async attribute has its semantics that are completely distinct from top level await, but the more important point here is that the specific solution to solve this isn't something that will be within the scope of this community to specify. 
So if we want to look more deeply into this, we should definitely keep that in mind. So that's one open issue. YSV: And another open issue that hasn't quite come to resolution is also a host integration, PR. specifically the service workers integration PR has not yet been merged all issues have been resolved on it and we are just waiting on the editors to to merge this in if they think it's ready or give us further feedback, if it's not and additionally, there are a couple of things specifically around documentation and also tests that's the current work. That's still on the back burner. I haven't quite gotten the test yet because they're rather complex. So continuing on with the presentation. -YSV: The current status is we do have a large number of tests 262 acceptance tests merged. There are compatible implementations in V8. spider, monkey core, chakracore and I believe in a few other engines and an integrated spec text PR has made. We're waiting on the ecmascript editors to sign off on that pull request. So, that's all I have to show and I'm happy to take questions from the key. +YSV: The current status is we do have a large number of tests 262 acceptance tests merged. There are compatible implementations in V8. spider, monkey core, chakracore and I believe in a few other engines and an integrated spec text PR has made. We're waiting on the ecmascript editors to sign off on that pull request. So, that's all I have to show and I'm happy to take questions from the key. JHD:. Entirely covered by Yulia by bringing up JHX’s issue that they had asked be brought up on the record, because they couldn't be presen. I explicitly support stage 4 for this. I think JHX’s suggestions, are actually a great idea for HTML, but I also think it's completely out of scope of this committee, and it's unrelated to whether this feature is ready for stage 4. -YSV: Great and do we have anybody present who would like to speak more to JHX’s point? 
+YSV: Great, and do we have anybody present who would like to speak more to JHX’s point?

SYG: I just want to make sure I understand the point that JHX is making. Is the problem that you have two modules, both of which are sync and both of which are doing some kind of global state mutation, and if one of them becomes async by virtue of depending on an async module with TLA, then even without the module script itself changing, the behavior changes? Is that his point?

YSV: Yes, that sums it up quite succinctly.

-JHD: Usually adding TLA is a breaking change.

+JHD: Usually adding TLA is a breaking change.

SYG: Right, and his suggested fix here is to distinguish these somehow. Okay. I don't know how that would work, but I think I agree that this is out of scope for TC39 and should not hold up stage 4.

-AKI: Great. Thank you. Queue is empty.

+AKI: Great. Thank you. Queue is empty.

-MM: I support.

+MM: I support.

AKI: Is the await truly over?

-YSV: The await might be truly over. If it is, I have one last slide I want to show, which is a big thank you. So I came in super late to this, and I just want to say thank you to the people who did incredible work on this. Guy Bedford did amazing work getting this across the finish line.
So thanks as well to Dan Ehrenberg, of course, Tobias Koppers, Jason Orendorff, and everybody else who helped this rather long-running piece of spec text make it across the line. So if this is stage 4, then bravo, everybody. Unless there are any objections, I do believe this is stage 4. Congratulations to all of you. Thank you for your hard work.

### Conclusion/Resolution

Stage 4

## Temporal Normative PRs

Presenter: Justin Grant (JGT)

--[slides](https://justingrant.github.io/temporal-slides-in-progress/)
+-[slides](https://justingrant.github.io/temporal-slides-in-progress/)

JGT: [Slide 1] Hi, I'm Justin, I'm a champion of the Temporal proposal.

JGT: [Slide 2] Today we're going to ask for consensus on two minor normative PRs to address issues that came up since the last plenary. First, a process note: we got conflicting advice about whether all agenda items should be posted 10 days in advance, or just stage advancement items. This item was added 7 days in advance. We apologize for the confusion. Before digging in, here's a quick update on what else is going on with Temporal:

-JGT: We're starting to get implementer feedback. We got really good, really detailed (and really voluminous!) feedback from Andre Bargull on the SpiderMonkey team, and we're working through that feedback. So far, no normative changes have resulted, but if there are any normative changes required then we'll come back to a future plenary to ask for consensus on those.

+JGT: We're starting to get implementer feedback. We got really good, really detailed (and really voluminous!) feedback from Andre Bargull on the SpiderMonkey team, and we're working through that feedback. So far, no normative changes have resulted, but if there are any normative changes required then we'll come back to a future plenary to ask for consensus on those.

JGT: We're also porting the Temporal docs over to MDN. Eric Meyer has been working on this.
There were some licensing issues that held us up for a while, but those are now resolved and the content migration is well underway. And we're writing more tests! So, let's look at the two PRs... JGT: [Slide 3] The first one is a fairly straightforward spec bug, where the spec text doesn't match the intended behavior of the `getISOFields` method. This method is used by userland custom calendar implementations to read the values of internal slots of Temporal objects. The output of this method on Temporal types is supposed to be an object whose unit properties like "month" or "day" should have an "iso" prefix. Also, properties should be added in alphabetical order. -JGT: We discovered that the spec text of two Temporal types doesn't match this expected behavior. The ZonedDateTime type doesn't use the correct prefix. The PlainDateTime type doesn't emit properties in the right order. As you can imagine, this is a straightforward fix. It's a breaking change but will only break custom userland calendar authors, of which there are very few at this point… and we know most of them. +JGT: We discovered that the spec text of two Temporal types doesn't match this expected behavior. The ZonedDateTime type doesn't use the correct prefix. The PlainDateTime type doesn't emit properties in the right order. As you can imagine, this is a straightforward fix. It's a breaking change but will only break custom userland calendar authors, of which there are very few at this point… and we know most of them. JGT: [Slide 4] The second PR is a little more involved. It was brought to our attention by FYT from Google. The problem is that the current Temporal spec isn't fully compatible with how Intl handles the names of date & time units like "day" or "month" or "week". In Intl, the canonical name of a unit is always singular. This matches other industry standards for unit names, like SI units such as "meter" or "second". When Intl outputs unit names, the output is a singular string. 
In the code sample on the slide, the output of `Intl.DateTimeFormat.formatToParts` is an object with a `type` property that has a string value of "day"... not a string value of "days". The Intl docs also favor the singular form. Now, when Intl accepts unit names as inputs, it's more flexible: either singular or plural is OK. JGT: Temporal is already very close to this behavior, with a few exceptions. Temporal doesn't generally output unit names, but there is one case (when normalizing options objects passed to a custom userland calendar) where it could be possible for Temporal to send a plural unit name back into userland code. For inputs, Temporal (like Intl) currently accepts both plural and singular unit names like "day" or "days" or "month" or "months", with one exception: the singular unit "week" is not currently supported by Temporal. The current Temporal docs include a mix of both plural and singular unit names. Anyway, FYT made a compelling case that we should align Temporal behavior to Intl's existing behavior, and we agreed with his argument. So we want to make some changes. -JGT: [Slide 5] To summarize these changes: we want to make the singular form of unit names the canonical form throughout Temporal, just like Intl does. This means the following changes. A non-breaking change to accept "week" just like all the other singular forms: "month", "day", "hour", etc. that are already currently supported. Normalize options values as singular strings when options objects are passed from Temporal into userland code. This only affects one method, and only in custom calendar implementations, and like the previous PR it's a breaking change but only for a handful of custom calendar authors. Finally, a non-breaking change to the Temporal docs to emphasize that the singular form of unit names is canonical. This is exactly what the Intl docs do on MDN. 
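The Intl convention JGT describes is checkable against today's engines. This is a hedged sketch, not the code sample from the slides; the locale and date are arbitrary choices.

```javascript
// Unit names in Intl.DateTimeFormat.formatToParts output are singular strings.
const dtf = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'numeric',
  day: 'numeric',
});

const parts = dtf.formatToParts(new Date(2021, 0, 25));
const types = parts.map((part) => part.type);

console.log(types.includes('day'));  // true: the canonical form is singular
console.log(types.includes('days')); // false: the plural form is never emitted
```

Temporal's change aligns option *values* (e.g. a `smallestUnit` of `'day'`) with this singular convention, while Duration property *names* stay plural (`days`, `weeks`), as the next paragraph notes.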
+JGT: [Slide 5] To summarize these changes: we want to make the singular form of unit names the canonical form throughout Temporal, just like Intl does. This means the following changes. A non-breaking change to accept "week" just like all the other singular forms: "month", "day", "hour", etc. that are already currently supported. Normalize options values as singular strings when options objects are passed from Temporal into userland code. This only affects one method, and only in custom calendar implementations, and like the previous PR it's a breaking change but only for a handful of custom calendar authors. Finally, a non-breaking change to the Temporal docs to emphasize that the singular form of unit names is canonical. This is exactly what the Intl docs do on MDN. -JGT: One clarification: these changes only apply to property *values* in options objects. Property *names* on Temporal's Duration type will stay plural like `days` or `weeks` to align with the naming of every other date & time API we could find, including both ECMAScript APIs like moment.js and non-ECMAScript APIs like .NET. +JGT: One clarification: these changes only apply to property *values* in options objects. Property *names* on Temporal's Duration type will stay plural like `days` or `weeks` to align with the naming of every other date & time API we could find, including both ECMAScript APIs like moment.js and non-ECMAScript APIs like .NET. JGT: [Slide 6] OK, these are our two PRs. Does anyone have concerns with these PRs? If not, we'd like to ask for consensus. Thank you! -WH: Are the unit names always in English or are they internationalized? +WH: Are the unit names always in English or are they internationalized? -JGT: They are always in English. This is not for localization. These are programmer-facing enumerations. +JGT: They are always in English. This is not for localization. These are programmer-facing enumerations. -AKI: Cool. Other than that, the queue is empty. +AKI: Cool. 
Other than that, the queue is empty.

SFC: Yeah, just to clarify, USA is in the process of working on the proposal to allow for proper internationalisation of these names, which is exactly how this request came to be, so that they are localized when you call `toLocaleString` on the duration. But the identifier is, as usual in ECMAScript, still in English.

@@ -233,22 +248,22 @@ WH: The reason for my question was that there is only one plural form in English

JGT: Yeah, these are enumerated string values. They're not localized.

-AKI: Are we clarified? Are we good?

+AKI: Are we clarified? Are we good?

WH: Yes.

AKI: Great, great. Okay. So do we have consensus? Sounds like a yes to me.

-JGT: That's great. Thanks everybody.

+JGT: That's great. Thanks everybody.

AKI: Thank you. We are a little bit ahead of schedule, which is fantastic because it means we can move TCN to now. And that way, we don't have to worry about timing later in case FYT is fast. Thanks for being flexible.

### Conclusion/Resolution

-All normative changes achieved consensus
+All normative changes achieved consensus

## Accessible Object.prototype.hasOwnProperty() for Stage 3

Presenter: Tierney Cyren (TCN)

- [proposal](https://tc39.es/proposal-accessible-object-hasownproperty/)

@@ -262,18 +277,18 @@ TCN: So updates for stage 3. Committee feedback again we've updated it from `has

TCN: So in addition to the aforementioned two billion downloads and the overwhelming positive feedback, three frameworks, Ember, Vue, and React, all use some version of that boilerplate. Those are all examples of specific lines in files on GitHub, where they're actively developed, where they are using some version of this boilerplate. So in addition to a lot of ecosystem usage, there's a lot of positive feedback.
There are also signals from major frameworks that are powering a decent chunk of the web that this is something where they can just delete this code.

-TCN: On the specification, we did update the spec text a bit; you can find it in the proposal repo. We updated the spec text to reflect reviewer feedback. I believe Leo had some good feedback that we incorporated, nothing immensely major; I think it was specifically things like the letters O and obj, and key and P. There was also an update to remove a legacy ordering. So I believe the order of this was flipped for hasOwn. Yeah, so this is the updated version; these two properties were flipped.
This was reviewed by JHD, who was one of the reviewers, and the legacy ordering has been removed; it's unnecessary in this proposal, from what I understand. So yep, this is the entire spec text. A quick note on those updates: we did update the parameters, and the spec step values were updated to be consistent with the rest of the spec; that was Leo's feedback. The legacy ordering was removed, the unnecessary link was removed, and we updated the spec to stage 2. Those are the only changes that were made. Again, here's the polyfill; there's a link to it in the repo if you'd like to take a look, and we are seeking stage 3.
+So anything in the queue?

AKI: There is explicit support for stage 3. PFC and MM, if either of you want to say anything, speak up.

PFC: I just want to say that I think this is great because it eliminates a pitfall that's really not obvious to programmers who aren't deep into what's going on behind the scenes in JavaScript.

-AKI: Awesome. Wonderful. There's a +1 from JHD and YSV.

+AKI: Awesome. Wonderful. There's a +1 from JHD and YSV.

-YSV: Yeah, we already have a PR ready to be merged later today and we'll see how it goes.

+YSV: Yeah, we already have a PR ready to be merged later today and we'll see how it goes.

-TCN: Excellent. So I think this is a pretty well-supported proposal, and I'd love to hear that. With that, I guess I'm going to ask for stage 3 directly.

+TCN: Excellent. So I think this is a pretty well-supported proposal, and I'd love to hear that. With that, I guess I'm going to ask for stage 3 directly.

AKI: You have consensus for stage 3…? I think we do. Awesome. Great. Congratulations.

@@ -281,89 +296,92 @@ TCN: Thank you. Appreciate y'all.

MPC: Congrats TCN!
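The pitfall PFC mentions, and the boilerplate it forces, look roughly like this. A hedged sketch: at the time of these notes `Object.hasOwn` was still a stage 3 proposal (it has since shipped in engines, which is what makes this runnable today).

```javascript
// An object whose prototype contributes a property, and which shadows
// hasOwnProperty, defeating a naive obj.hasOwnProperty(...) call.
const obj = Object.create({ inherited: true });
obj.own = 1;
obj.hasOwnProperty = () => { throw new Error('shadowed'); };

// The boilerplate frameworks carry today:
const hasOwn = Object.prototype.hasOwnProperty.call(obj, 'own');

// The proposal replaces that with a static method:
console.log(hasOwn);                          // true
console.log(Object.hasOwn(obj, 'own'));       // true
console.log(Object.hasOwn(obj, 'inherited')); // false: inherited, not own
```

The static form is also safe on objects created with `Object.create(null)`, which have no `hasOwnProperty` at all.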
### Conclusion/Resolution

Stage 3

## Symbols as Weak Keys for Stage 3

Presenter: Leo Balter (LEO)

- [proposal](https://github.com/tc39/proposal-symbols-as-weakmap-keys)
- [slides](https://github.com/tc39/agendas/blob/master/2021/05.md)

-LEO: Yeah. All right. This presentation is about symbols as weak keys. This means WeakMap mostly, as you might have known, but also some other collections; I'm going to be showing them here. This is a presentation to request advancement to stage 3 and to cover the current implications of it. The main goal of this proposal is to use unique primitive values as keys for weak references and as WeakRef values, and those unique values are not objects. We can do this with symbols. There's nothing really new here, you already know this part, but this is actually new for this proposal: here you have a symbol instead of an object being used as a key for a WeakMap. And we also have support for the other weak APIs, including WeakSet, WeakRef, and FinalizationRegistry, just for the purpose of matching consistency with the WeakMap case. So the examples are basically like this: I don't have an object, there's a symbol being used here. So I see there's a question from SYG. There is nothing new since the last meeting; what is new from this proposal, compared to what we have today in ECMAScript, is just using symbols the same way as object keys. [SYG: If I can interject, that was a question in TCQ about your PR, not the current spec.] Okay, I'm sorry, but that's actually okay, because I'm gonna talk about that anyway. So the current status is that we have the spec draft with no normative changes in this proposal repo, and the current draft pull request to ECMA-262. We've had a lot of previous discussions offline. And yes, it's in stage 2. I've got reviews from Bradley Farias, JHD, and Daniel Ehrenberg. Thank you.
I've got interesting feedback from Kevin Gibbons that I consider editorial feedback, like editorship feedback, and I'm gonna link it here. So, we have a section in ECMAScript about the liveness of objects that defines it; that's why I was also pinging Shu. My PR came in as a late notice, so this might be one of the reasons to not advance it right now, but I think we can work from here. In the repo we have pull request #20, and it is just a change to the liveness section in ECMAScript. Kevin Gibbons pointed out that as this proposal allows a WeakRef to a symbol, we need to also update the definition of liveness. I just did some minimal changes, but the changes that I'm proposing should, I believe, be addressed with some back and forth with the editors, including Shu as one of the main authors of that section. These are mostly changes to the text, saying it applies not only to objects but also to symbols; most of the time I just use "reference" here and there. This needs back and forth with the editors. This change can be seen from the spec to the PR to ECMA-402 as well. Also, to update this PR and make this right, I'm also going to coordinate with Daniel Ehrenberg. But yes, we have some key work to start this. I think this could be the blocker for stage 3, but I hope we can still advance and work through it as part of the PR process, as we need. I believe it's implicit that the definitions of liveness would also apply to symbol values. So yeah, I'm requesting stage 3, of course. And, as a nice-to-have, some more words here saying thank you to all the people doing this review work, etc. If we have any questions, we can go through them. I want to make this short; it doesn't need to be any longer.

+LEO: Yeah. All right. This presentation is about symbols as weak keys.
This means WeakMap mostly, as you might have known, but also some other collections; I'm going to be showing them here. This is a presentation to request advancement to stage 3 and to cover the current implications of it. The main goal of this proposal is to use unique primitive values as keys for weak references and as WeakRef values, and those unique values are not objects. We can do this with symbols. There's nothing really new here, you already know this part, but this is actually new for this proposal: here you have a symbol instead of an object being used as a key for a WeakMap. And we also have support for the other weak APIs, including WeakSet, WeakRef, and FinalizationRegistry, just for the purpose of matching consistency with the WeakMap case. So the examples are basically like this: I don't have an object, there's a symbol being used here. So I see there's a question from SYG. There is nothing new since the last meeting; what is new from this proposal, compared to what we have today in ECMAScript, is just using symbols the same way as object keys. [SYG: If I can interject, that was a question in TCQ about your PR, not the current spec.] Okay, I'm sorry, but that's actually okay, because I'm gonna talk about that anyway. So the current status is that we have the spec draft with no normative changes in this proposal repo, and the current draft pull request to ECMA-262. We've had a lot of previous discussions offline. And yes, it's in stage 2. I've got reviews from Bradley Farias, JHD, and Daniel Ehrenberg. Thank you. I've got interesting feedback from Kevin Gibbons that I consider editorial feedback, like editorship feedback, and I'm gonna link it here. So, we have a section in ECMAScript about the liveness of objects that defines it; that's why I was also pinging Shu. My PR came in as a late notice, so this might be one of the reasons to not advance it right now, but I think we can work from here.
In the repo we have pull request #20, and it is just a change to the liveness section in ECMAScript. Kevin Gibbons pointed out that as this proposal allows a WeakRef to a symbol, we need to also update the definition of liveness. I just did some minimal changes, but the changes that I'm proposing should, I believe, be addressed with some back and forth with the editors, including Shu as one of the main authors of that section. These are mostly changes to the text, saying it applies not only to objects but also to symbols; most of the time I just use "reference" here and there. This needs back and forth with the editors. This change can be seen from the spec to the PR to ECMA-402 as well. Also, to update this PR and make this right, I'm also going to coordinate with Daniel Ehrenberg. But yes, we have some key work to start this. I think this could be the blocker for stage 3, but I hope we can still advance and work through it as part of the PR process, as we need. I believe it's implicit that the definitions of liveness would also apply to symbol values. So yeah, I'm requesting stage 3, of course. And, as a nice-to-have, some more words here saying thank you to all the people doing this review work, etc. If we have any questions, we can go through them. I want to make this short; it doesn't need to be any longer.

AKI: The queue is un-empty. Waldemar, you're first in line.

-WH: I support the inclusion of symbols which are identity-based as keys in WeakMaps. However, symbols which are created by `Symbol.for` are not identity-based. That's a crucial difference. Those will keep WeakMap entries live even if all references to that symbol disappear. I don't want to have such things usable as keys. Or at least, if we do, we should make that a conscious decision and allow all of them including things like numbers, strings, and so on.
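The distinction WH draws is observable in today's JavaScript, with no proposal APIs involved; a runnable sketch:

```javascript
// Ordinary symbols have identity: each call allocates a fresh, unequal value,
// so once all references are dropped the symbol is unreachable forever.
const a = Symbol('key');
const b = Symbol('key');
console.log(a === b); // false

// Registered symbols do not: the global symbol registry hands back the same
// symbol for a given string key, so it can always be re-fetched even after
// every reference to it has been dropped.
const r1 = Symbol.for('key');
const r2 = Symbol.for('key');
console.log(r1 === r2); // true

// Hence the concern: a WeakMap entry keyed on Symbol.for('key') could never
// be collected, because Symbol.for('key') can resurrect the key at any time,
// much the way a string or number key could be recreated.
```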
+WH: I support the inclusion of symbols which are identity-based as keys in WeakMaps. However, symbols which are created by `Symbol.for` are not identity-based. That's a crucial difference. Those will keep WeakMap entries live even if all references to that symbol disappear. I don't want to have such things usable as keys. Or at least, if we do, we should make that a conscious decision and allow all of them including things like numbers, strings, and so on.

LEO: Well, I think this is something that we discussed in the last meeting, and in all of these discussions we recognized that not all symbols are unique, as you can still fetch symbol values from the global symbol registry, and the global symbol registry is a list that is shared among all the realms.

WH: I think that's not the way to characterize that. `Symbol.for` symbols don’t allow weak map entries to be collected even if there are no references to them. The symbol registries are spec fiction: you can create and destroy an unlimited number of `Symbol.for` symbols and they will not take up memory, as you would if you had to hold references to them. So it's not like you have references.

-LEO: Just in case, I was finishing a point. We have a hard constraint, and I don't intend to champion anything that includes other primitive values. This has also been confirmed by all the people involved in this proposal. There is no intention here for this proposal to include other primitive values right now; there might be some consideration in the future about Records and Tuples, but that's not the time or the space of this proposal right now. I'm just considering adding symbols and what the implications are. I consider symbols to be primitive values that are able to act as unique values in ECMAScript.
Some of those values are registered in the global symbol registry, a list shared among all the realms being used by that host, and that means you can fetch those symbols. And one of the decisions we made was to not add any restriction on those symbols: these APIs, like WeakMap and WeakRef, don't check whether a symbol is part of that global registry list; that's just not the responsibility of these APIs.

+LEO: Just in case, I was finishing a point. We have a hard constraint, and I don't intend to champion anything that includes other primitive values. This has also been confirmed by all the people involved in this proposal. There is no intention here for this proposal to include other primitive values right now; there might be some consideration in the future about Records and Tuples, but that's not the time or the space of this proposal right now. I'm just considering adding symbols and what the implications are. I consider symbols to be primitive values that are able to act as unique values in ECMAScript. Some of those values are registered in the global symbol registry, a list shared among all the realms being used by that host, and that means you can fetch those symbols. And one of the decisions we made was to not add any restriction on those symbols: these APIs, like WeakMap and WeakRef, don't check whether a symbol is part of that global registry list; that's just not the responsibility of these APIs. And I'm also adding a line for this to the liveness section. I might add a line; I can't render it here, I don't have it in front of me, but it's something that I want to add to the section, saying "the presence of a symbol in the global registry list might keep the reference alive". It should probably change to "must keep the reference alive".
This is something we really recognize: if the symbol is in that list, it definitely won't be really useful to add a symbol that is created with `Symbol.for` as a weak map key. It's not useful, but we know that we have it. I'm trying to add a note about that in the liveness section, and I think this is a constraint that we have to consider. We had some discussions. I don't see how we connect that to a need, for consistency, to add the other primitive values, and I don't intend to change that part.

-WH: You're missing the point. My point is that the global symbol registries are a fiction that can hold infinitely many symbols, but nobody is expected to implement them that way. `Symbol.for` symbols behave like strings and numbers in that way. I don't want to support those as WeakMap keys.

+WH: You're missing the point. My point is that the global symbol registries are a fiction that can hold infinitely many symbols, but nobody is expected to implement them that way. `Symbol.for` symbols behave like strings and numbers in that way. I don't want to support those as WeakMap keys.

-LEO: Did you have this same objection when we advanced to stage two?

+LEO: Did you have this same objection when we advanced to stage two?

WH: Yes. And I stated that. We let it through to stage two because that was the stage to work out the details and explore the space of this. So yes, I stated this before.

-LEO: So yeah, we have our use cases. Our use cases are not as strong as the use cases I have for many other proposals, but I think this brings some very nice convenience to the code and allows exploration. Our intention here with this proposal is to explore usage of symbols, to check out the memory footprint of using, for example, membrane frameworks for Realms. I definitely don't have data to transform that into a very solid use case, to say, hey, we can really benefit from that.
But without it, we also don't have the proper exploration. I think it's useful. I think it's consistent. And to be honest, I don't see this objection as a technical restriction that we really have to deal with. This is the objection, yeah, but I'm not going to push more than this, because this proposal is good as it is. This is my opinion, but we can't talk about the proposal all day long.

+LEO: So yeah, we have our use cases. Our use cases are not as strong as the use cases I have for many other proposals, but I think this brings some very nice convenience to the code and allows exploration. Our intention here with this proposal is to explore usage of symbols, to check out the memory footprint of using, for example, membrane frameworks for Realms. I definitely don't have data to transform that into a very solid use case, to say, hey, we can really benefit from that. But without it, we also don't have the proper exploration. I think it's useful. I think it's consistent. And to be honest, I don't see this objection as a technical restriction that we really have to deal with. This is the objection, yeah, but I'm not going to push more than this, because this proposal is good as it is. This is my opinion, but we can't talk about the proposal all day long.

WH: I think you're missing the point. I'm not objecting to … [interrupted]

-LEO: Yeah. I am not going to debate this with you; I think I have better things to do than that, and we need to move on to other topics. We had a very extensive list of discussions about this proposal. I'm stating my opinion; it's my opinion, not a technical argument. My opinion is that I don't see a technical issue with having this.
And in my opinion, again, this brings some useful convenience that allows exploration of this feature. I don't see a technical issue that justifies not having this. But yes.

+LEO: Yeah. I am not going to debate this with you; I think I have better things to do than that, and we need to move on to other topics. We had a very extensive list of discussions about this proposal. I'm stating my opinion; it's my opinion, not a technical argument. My opinion is that I don't see a technical issue with having this. And in my opinion, again, this brings some useful convenience that allows exploration of this feature. I don't see a technical issue that justifies not having this. But yes.

WH: Okay, you just said that you don't want to hear what I have to say. So I'm not going to say things.

-LEO: Is there a way that we can work through this in this proposal?

+LEO: Is there a way that we can work through this in this proposal?

WH: Until you want to hear what I have to say, I don't see how we're going to make progress here.

AKI: Okay, so first of all, we went over the original time box, and now it's only five minutes until lunch, so we're going to keep going. There's quite a bit more in the queue, and perhaps people who could address either of your concerns further. Maybe some different perspectives could help everyone understand each other a little better. Yeah. Shu?

-SYG: I think WH certainly has a point. I think there is a technical issue. I know that you said you don't see a technical issue for implementations. Currently, for `Symbol.for` symbols, the fact that they don't have identity is not really observable. It is the case today that for non-`Symbol.for` symbols, if all references to them disappear, they can be collected, because you can observe, when you reconstruct them, that you in fact got a different allocation with this change.
If a `Symbol.for` symbol goes into a weak map or any weak collection, implementations now have to do something different. They now have to have a bit or something; they have to track all the `Symbol.for` symbols that are in any weak collection, across basically all weak collections in the runtime, and make sure they are not collected the normal way symbols are collected. This is not a good change for implementations: extra bookkeeping for what I understand to be a pretty weak use case anyway. When we discussed this at the last meeting, Kevin Gibbons and other folks gave the inclination that `Symbol.for` symbols are by and large pretty rarely used in the wild. And if that is the case, I, you know, agree with Waldemar's point, and I would like to disallow `Symbol.for` symbols. I was more neutral last meeting, but as I thought about it more between that meeting and this one, and about the implementation implications, that pushed me to the side that allowing `Symbol.for` symbols would be more harmful than not.
LEO: This is a direction I can look into and address; I can work with this.

AKI: Okay, Robin.

RRD: Yeah, there's
been some talk about Record and Tuple. Originally we brought up symbols as WeakMap keys for stage 1 because we needed a mechanism in Record and Tuple to reference objects through primitives. In that use case, registered symbols didn't have any use, so we didn't need to reference them in weak collections either. It can still help, but to give more context: right now we're trying to replace the mechanism offered by symbols as WeakMap keys with Box, which is another mechanism that does something similar. Either way, we need this link between primitives and object identities, and this is a nice mechanism for it.

JHN (may also have been SYG): Sorry, I didn't get to ask my actual queue question before, because I was responding. Maybe Robin was trying to respond before it came to me. Remind me again: what are the use cases for this today? Not for how it may synergize with future proposals.

So that is why we brought it up for stage 1. This is what I was explaining here. That said, I think that Leo can probably give more information.

LEO: Yeah, today I don't have a strong use case other than common user-experience convenience. The pain point is just the annoyance of needing to create an object to use it as a WeakMap key, and that doesn't really make a very solid, strong pain point that we can present as a use case.
We want to explore this in membrane systems, but I don't have concrete data to provide to you.

AKI: Okay, so it's time, and we do have a lot of spare open time on the schedule for the rest of today and tomorrow, if you want to come back to this.

LEO: I have a non-actionable question that should be quickly addressed. Waldemar, if we adopt your suggestion to restrict this proposal to only allow symbols that are not registered in the global symbol registry, are you okay going forward with this?

WH: This is exactly what I have been asking for all along.

LEO: I understood your first request as adding other primitive values.

WH: No. All I have been asking for is to restrict this to symbols which are not generated by `Symbol.for`.

LEO: Okay. That said, is there any quick objection to that?

??: I can't say yet. We haven't moved through the queue yet, and I have an explicit item about that.

AKI: Yeah, yeah. There's a couple of different options; we can come back to this.
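As context for the restriction being discussed, a minimal sketch of today's object-key requirement and of the registry identity of `Symbol.for` symbols (the contested case):

```javascript
// Today a WeakMap key must be an object, so a primitive-like token needs a
// throwaway wrapper object allocated just to serve as the key.
const registry = new WeakMap();
const tokenObj = { name: "my-token" }; // exists only to act as a key
registry.set(tokenObj, "associated data");
console.log(registry.get(tokenObj)); // "associated data"

// The contested case: Symbol.for always returns the same symbol for a given
// key, so a registered symbol is never unreachable, and an entry keyed by it
// could never be evicted from a weak collection.
const a = Symbol.for("my-token");
const b = Symbol.for("my-token");
console.log(a === b);                     // true: shared global-registry entry
console.log(Symbol.keyFor(a));            // "my-token"
console.log(Symbol("x") === Symbol("x")); // false: unregistered symbols are fresh allocations
```

Restricting the proposal to unregistered symbols sidesteps the bookkeeping concern, since only symbols with ordinary collectable identity could appear as weak keys.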
LEO: I'm not requesting stage 3 at this meeting. If I don't have time, I'm happy to take a look at the queue and see the current issues, but I know there is an objection right now.

AKI: Okay, so we'll save this.

RPR: Frank has 3 proposals, 30 minutes each. Any others? Right, and the bot is paused, Kevin; it should be back up. Okay, then let us begin. Frank, over to you.

### Conclusion/Resolution

No conclusion

## Intl Displaynames

Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/intl-displaynames-v2)

FYT: So a little history, V1 of Intl DisplayNames already got to stage four arou

FYT: So I'll go through the action items from TC39 last time. The first is the default value for the dialect-handling property (we actually later decided to rename it). The consensus is that, although Mozilla and V8 currently use different values, both agree that we should keep "dialect" as the default in the proposal. The other thing is the string values for dialect handling: Mozilla suggested shortening them to just "dialect" and "standard", dropping the "Name" part, and after discussion TG2 believed that was the right thing to do, so we dropped it. Another item was a question that was brought up; I think we had some confusion about whether there should be an additional value called "menu". After a closer look, it is something people bring up; unfortunately, one of the requirements for TG2 is that the things we bring here should have prior art. Such a "menu" value has been proposed by Apple to CLDR, but it hasn't really been resolved upstream yet, so some of the data is in CLDR, but a lot of implementations are not picking it up yet.
So we don't really have the prior art established for that particular value. Therefore, although there is increasing demand, both Mozilla and Google, and I think the other members who discussed it, believe this may not be the right time to bring it in. The way the proposal is currently written, if we really need it a couple of years later, when the prior art and the C++ and Java implementations get sorted out, we can still add it as an additional value. But right now is not the right timing, because a lot of upstream data and implementation issues are not addressed yet. There's still a lot of confusion in that space, and it's still being actively developed. So there's a clear need; we just don't believe it's the right timing to bring it into this proposal. So we've decided not to add it.

FYT: The other thing is, as I mentioned, Dan asked us to re-examine the use case for the "unit" type in DisplayNames. After a deeper look, we decided to drop unit support. I think Dan had a good point, and we just don't believe we can find a strong enough use case in JavaScript.
A lot of the need we found in the past comes from Apple at the OS level, for the C++ API level or their Objective-C and C APIs. They have that need, but for JavaScript on websites we had a hard time directly answering those questions, so we said okay, probably the right thing to do is to drop the support. TG2 had some discussion and agreed. Therefore, I went back to change the spec.

FYT: Other changes. First of all, we're going to rename the dialect-handling option. After Shane's suggestion, we all believe it is better, because this only applies to the "language" type, to follow the other Intl APIs and just call it `languageDisplay` instead of "dialect handling". It has two possible values, "dialect" and "standard"; as I mentioned, in the future, when the time is right, we may add an additional "menu" value there, but right now that is not on the table for this particular proposal. The other change is dropping the section about supporting the unit type, as a direct response to Daniel's question, and we think that is probably the right thing to do.

FYT: therefore, where is the spec text. I have a link in the proposal in the sli

FYT: So, here's an example of what it will look like for the language type. Basically, you pass a language tag and an option bag with `languageDisplay`. For example, for English: with "dialect", "en-GB" will display as "British English", but with "standard" language display it shows as "English (United Kingdom)", etc. Here are some other spec changes. As I mentioned, we are adding two additional types and dropping the unit type. One of the two additional types is calendar.
The calendar one is daytime feel, which is not any of the data in them, but the field name of the datetime, I will show some examples later. So for example here that we showing name for that Calendar. The name of the calendar will go in the calendar, here is Showing that in simplified Chinese example. Here is more detailed, there's another aspect part because we are having the daytime field so this are the only possible value just getting the name of those fields, right? So example, same thing here, this is an example left-hand sides in Chinese right hand side in Spanish so you see here - I don't know how to pronounce this correctly Spanish (...) and so on and so forth. So those are the near of that field. So it will that's one thing that I think about health for some of the application that commonly need to display. That thing about that particular field and inside that there will be some value, from the date time formatting. Of course, you have to have chance to the internal slots. So this is also the recent change going through careful review by several members. And we have to have additional internal resolve options property, for language display. similarly, that we have some property for the instance, we have to add it but that's all the spec change. We have Shane and (?), both the reviewer and we'll have a V8 prototype available. So this year's main TG2 meeting the I think we've got support from the attendees to bring forward to TC39 for the stage three advancement. So any questions? -USA: Yeah, I am. I Audible Yes, it's low quality but it's good. Okay, I'm sorry about that. Yeah, I just wanted to say that as a stage 3 reviewer, I wanted to sign off on that. Thank you Frank for going doing all back and forth and making sure all the changes or questions are resolve. +USA: Yeah, I am. I Audible Yes, it's low quality but it's good. Okay, I'm sorry about that. Yeah, I just wanted to say that as a stage 3 reviewer, I wanted to sign off on that. 
Thank you, Frank, for doing all the back and forth and making sure all the changes and questions were resolved.

YSV: Just wanted to chime in and say that I'm happy to see this go to stage 3.

RPR: All right, let's just do a final check. As always: any objections to stage 3?

SYG: I have no objections (this is Shu), but I have a question; sorry, I didn't get myself on the queue in time. Is it true that for general proposals we need 262 editor sign-off, but to my knowledge the 262 editors have never reviewed Intl material? So when we say the editors have signed off, do we mean the 402 editors have reviewed it?

FYT: I believe so. For 402 we do have editors; I think right now we still have Leo on that, and Richard and the others are actively editing.

YSV: We do have a bug for this. I'll post it to the repo.

RPR: Okay, nothing on the queue. So I'll ask again: any objections to stage 3? [silence] No objections. So congratulations, Frank, you have stage 3.
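A sketch of the v2 surface as presented, assuming an engine that has implemented it (the `languageDisplay` rename and the added "calendar" and "dateTimeField" types); exact output strings depend on the engine's CLDR data:

```javascript
// languageDisplay replaces the earlier "dialect handling" option and only
// applies to type "language".
const dialect = new Intl.DisplayNames("en", {
  type: "language",
  languageDisplay: "dialect", // the default: "en-GB" renders as a dialect name
});
const standard = new Intl.DisplayNames("en", {
  type: "language",
  languageDisplay: "standard", // "en-GB" renders as language plus region
});
console.log(dialect.of("en-GB"));  // e.g. "British English"
console.log(standard.of("en-GB")); // e.g. "English (United Kingdom)"

// The two types added in v2:
const cal = new Intl.DisplayNames("en", { type: "calendar" });
console.log(cal.of("gregory")); // e.g. "Gregorian Calendar"

const field = new Intl.DisplayNames("en", { type: "dateTimeField" });
console.log(field.of("month")); // e.g. "month"
```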
### Conclusion/resolution

- Stage 3

## Extend timeZoneName Option for stage 3

Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-extend-timezonename)
- slides

FYT: The next proposal I will talk about is the Extend `timeZoneName` Option proposal for Stage 3. This is actually a smaller proposal. The basic idea is that we have had `Intl.DateTimeFormat` for years; it accepts a value for `timeZoneName`, and some people suggested we extend the possible values for that option so we can have different kinds of time zone display. So first, the change: we currently have a `timeZoneName` option for `Intl.DateTimeFormat` with two possible values, "long" and "short", and this proposal basically adds four additional possible values. The new four are `"shortOffset"`, `"longOffset"`, `"shortGeneric"`, and `"longGeneric"`. This is the code example of what will happen; the highlighted ones are newly added, and `"short"` and `"long"` are pre-existing. The short and long offset values display the time zone as a GMT offset, in a short style and a longer style, and the generic values show it as, for example, "PT" or "Pacific Time". Here is also an example of what will display in traditional Chinese. Notice that the "GMT" part is not necessarily always "GMT": in some locales it may say "UTC" or whatever the CLDR linguists believe the value should be, and the offset is not always a full hour; for example, India has a half-hour offset.
FYT: So in Ecma-402, we have an additional requirement that TG2 has to consider: data size issues. Usually we require established prior art, and that the feature is difficult to implement efficiently in userland; also, for stage 3 we need some analysis of the payload increase and its mitigation, to see whether it is reasonable, because Intl features are comparatively data-heavy. So let's go through those. For broad appeal, we see many examples on web pages; in addition, sometimes they use EST or EDT, which they may only want to display.
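Before continuing, a sketch of the six `timeZoneName` values under discussion; engines that have not shipped the proposal throw a `RangeError` for the four new ones, so each is probed in a try/catch, and the exact strings depend on locale data:

```javascript
// "short" and "long" already exist; the other four are the proposed additions.
const date = new Date(Date.UTC(2021, 0, 25, 12, 0, 0));
const styles = ["short", "long", "shortOffset", "longOffset", "shortGeneric", "longGeneric"];
for (const timeZoneName of styles) {
  try {
    const fmt = new Intl.DateTimeFormat("en-US", {
      timeZone: "America/Los_Angeles",
      timeZoneName,
    });
    console.log(`${timeZoneName}: ${fmt.format(date)}`); // e.g. "GMT-8" for shortOffset
  } catch (e) {
    console.log(`${timeZoneName}: not supported (${e.name})`); // proposal not shipped
  }
}
```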
Probably, those are generated by PHP or Java, backend, or see how this plug-in, we have no idea, But they are presenting those kinds of information, ET for Eastern Time. MT for Mountain Time generic, we use this Eastern Standard Time for in daylight saving time, okay? The right hand side is also an example. We believe there's a broad appeal for the user because a lot of example, got files on the web. Of course, we also have the ICU and ICU4J and many other application help that time zone. So that we also believe that fulfills our prior art establishment. So in order, as I mentioned, for stage 3, Ecma-402 also cares about payload mitigation. So we try to do some high-level back of (?)study. See how much data size will be increase for the short offset. Because only, we need like to patent her Locale. So, for in the CLDR, you know, consider all the 406 Locales. The total size, we believe, it's just about 1.8K. If we compress is about 400 bytes and short. Generic are a little bit more, because there are a lot of fallback for those, you know, for example, Japan time, you know, just have Japan time, Japan standard,. They're not using daylight saving time. So there are no additional size increase. Long generic needs a little bit more so there are some requirement bringing a little more data. We believe that after compression it is less than (?)k if shipped with all 476 locales but for (?), for example, we're not shipping that many locales. We have a reduced size of a locale, we should. So the sizing (?). -FYT: So in Ecma-402, we have an additional requirement that the TG2 has to consider: data size issues. Usually we have have established require art and believe is difficult to increment efficiently in userland and also in the stage 3, we need to have some analysis of payload. Make mitigation and see whether that will be reasonable payload increase because usually of data make comparative speaking. Therefore, so let's go through those. 
So brought the pier we see many example in web page, in addition, some time they were using EST or EDT, they may only want to display. Probably, those are generated by PHP or Java, backend, or see how this plug-in, we have no idea, But they are presenting those kinds of information, ET for Eastern Time. MT for Mountain Time generic, we use this Eastern Standard Time for in daylight saving time, okay? The right hand side is also an example. We believe there's a broad appeal for the user because a lot of example, got files on the web. Of course, we also have the ICU and ICU4J and many other application help that time zone. So that we also believe that fulfills our prior art establishment. So in order, as I mentioned, for stage 3, Ecma-402 also cares about payload mitigation. So we try to do some high-level back of (?)study. See how much data size will be increase for the short offset. Because only, we need like to patent her Locale. So, for in the CLDR, you know, consider all the 406 Locales. The total size, we believe, it's just about 1.8K. If we compress is about 400 bytes and short. Generic are a little bit more, because there are a lot of fallback for those, you know, for example, Japan time, you know, just have Japan time, Japan standard,. They're not using daylight saving time. So there are no additional size increase. Long generic needs a little bit more so there are some requirement bringing a little more data. We believe that after compression it is less than (?)k if shipped with all 476 locales but for (?), for example, we're not shipping that many locales. We have a reduced size of a locale, we should. So the sizing (?). +FYT: In history, we have advanced this one to stage 1. In general this week, this month and a propose that for this meeting or we do there's some people Was that naming for the offset? I think that time we call short GMT and Long GMT to believe that is not a good idea. 
And so there are some discussion and we renamed need to `"shortOffset"` and `"longOffset"`. And then the last TC39 we advanced this to stage 2. A lot of people had an opinion about what was called `"shortWall"` and `"longWall"` in our original proposal. I think this original name was picked by one of the ex-members from IBM, but nobody liked that name including himself. There was a long discussion in the main meeting, we talked about probably 10 possible names. Initially, I think is you to app from voting and very creative discussion. We picked `"shortGeneric"` and `"longGeneric"`. If you are interested, you can take a look at the TG2 discussion; we probably spent 40 minutes on that. Also we agreed that we should bring this proposal into TC39 for stage 3 advancement during that meeting. Here again, this is a recent changes that part that we change it to `“shortGeneric”` and `“longGeneric”`, there is some part of the Forum at the time because. it is basically just in the algorithm for the patent that how to pick that time zone name only and you can look at it if it is still not very clear can look at the spec text to see it. Again, also the BasicFormatMatcher which is in the spec of folks. I would be a within the increment is par. We're using the BestFormatMatcher. We also have the need to implement some penalty to decide the pattern. There was also a long discussion during the last month about how to spec it out better. I think this was originally raised by Andre from Mozilla about the spec and original form that have some issue and really appreciate his review and we'll keep looking through that. So we kind of had a long discussion about that offline. But here's the current spec. We have PFC and RBU sign up as the reviewers. PFC has been involved with a lot of the recent changes in reviewing those things. Unfortunately, I didn't explicitly ask for a sign off from RBU. 
I didn't get a final signal, but I believe that they should have no problem because he get through all the videos back. -FYT: In history, we have advanced this one to stage 1. In general this week, this month and a propose that for this meeting or we do there's some people Was that naming for the offset? I think that time we call short GMT and Long GMT to believe that is not a good idea. And so there are some discussion and we renamed need to `"shortOffset"` and `"longOffset"`. And then the last TC39 we advanced this to stage 2. A lot of people had an opinion about what was called `"shortWall"` and `"longWall"` in our original proposal. I think this original name was picked by one of the ex-members from IBM, but nobody liked that name including himself. There was a long discussion in the main meeting, we talked about probably 10 possible names. Initially, I think is you to app from voting and very creative discussion. We picked `"shortGeneric"` and `"longGeneric"`. If you are interested, you can take a look at the TG2 discussion; we probably spent 40 minutes on that. Also we agreed that we should bring this proposal into TC39 for stage 3 advancement during that meeting. Here again, this is a recent changes that part that we change it to `“shortGeneric”` and `“longGeneric”`, there is some part of the Forum at the time because. it is basically just in the algorithm for the patent that how to pick that time zone name only and you can look at it if it is still not very clear can look at the spec text to see it. Again, also the BasicFormatMatcher which is in the spec of folks. I would be a within the increment is par. We're using the BestFormatMatcher. We also have the need to implement some penalty to decide the pattern. There was also a long discussion during the last month about how to spec it out better. 
I think this was originally raised by Andre from Mozilla about the spec and original form that have some issue and really appreciate his review and we'll keep looking through that. So we kind of had a long discussion about that offline. But here's the current spec. We have PFC and RBU sign up as the reviewers. PFC has been involved with a lot of the recent changes in reviewing those things. Unfortunately, I didn't explicitly ask for a sign off from RBU. I didn't get a final signal, but I believe that they should have no problem because he get through all the videos back. - -PFC: I did review it and I can give my explicit sign off. I think this is good. All of the issues that I saw in the spec text have been fixed up and I think this fulfills a clear user need. So I support this going to stage 3. +PFC: I did review it and I can give my explicit sign off. I think this is good. All of the issues that I saw in the spec text have been fixed up and I think this fulfills a clear user need. So I support this going to stage 3. RBU: +1 @@ -426,10 +445,13 @@ FYT: Thank you. It's my fault. I should have pinged you earlier. Anyway, we have YSV: Yes, I would like to give an explicit stage 3 +1 from our side. And we also have a prototype ready for this. RPR: Excellent. So any objections to stage 3? [silence] There are no objections. Congratulations, FYT, you have stage 3. + ### Conclusion/resolution + - Stage 3 ## Intl Enumeration API Stage 2 Update + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/tc39/proposal-intl-enumeration) @@ -451,7 +473,7 @@ YSV: I'm just going to raise a concern that we brought up earlier, that is our r FYT: Sorry, can you clarify, when you say the entire payload, what does that mean? -YSV: So you can't for example ship part of the data you're referencing. For example, for the calendars and languages, you have to ship all of it, you can't just ship a subset. +YSV: So you can't for example ship part of the data you're referencing. 
For example, for the calendars and languages, you have to ship all of it; you can't just ship a subset.

FYT: One thing I maybe need to spend more time writing in the readme is the following. Take the Intl.LocaleInfo API, which is already in stage 3: it exposes locale data the application already knows about. Say the user's locale is Arabic; the Intl.Locale API can say, okay, this user is in Arabic, and what kind of calendar, for example, do they prefer? The answer is one of three calendars. One way to use this API is to then ask which calendars the implementation supports. If one of those preferred calendars is not supported, one thing the application could do is call back to the server side and say, hey, give me a polyfill for this, because the implementation does not support it natively. So one of the important use cases for us is to discover what is preferred but not included in the implementation, in order to bring in a polyfill or data in some way to fill the gap. Of course that's not the only use case; there are some others, so I probably need to spend more time to address that. And it's not limited to calendars: you can think about all the other keys we currently support, for example collation or numbering system. In the Closure library, for instance, we could use this API to decide what additional things to download in order to fulfill the needs. That's one of the reasons we think this API is very important: it will form a very important cornerstone for the AJAX kind of paradigm, making sure the application does not solely depend on the implementation, and in case the implementation lacks something, we have a way to dynamically bring it in efficiently.

YSV: I should have been more specific, I meant the explainer.
Of course it is fa SFC: I wanted to reply and clarify on YSV's second ask about requiring that implementations ship all the data. If I understand that correctly, I interpret that to mean that in order to resolve the fingerprinting concern, the set returned by this function needs to be equal to the browser version number. And I think it's an interesting concern that warrants additional discussion. I'm not convinced that that this API is going to cause any fingerprinting concerns in its own right. Because if we do get to a point where browsers are, you know, downloading additional locales on the fly, which would be definitely a nice thing to shoot for, that's going to raise the same types of fingerprinting concerns that this proposal would expose. For example, if you were to call create a date-time format in — pick your favorite esoteric locale, how about Klingon — if you were to create a DateTimeFormat in Klingon, and that is producing strings for you, and then Klingon gets added to the available numbering systems, both of those are equivalent fingerprinting concerns. So I’m not convinced that the requirement that browsers ship all data is necessarily unique to this proposal and I don't necessarily see that fitting in here. -YSV: Yes, you're absolutely right. This isn't going to be unique to this proposal. This is the first proposal that we're asking for this, but this is, if this goes forward we will be asking this from all proposals, that there will be no partial data sets being shipped. +YSV: Yes, you're absolutely right. This isn't going to be unique to this proposal. This is the first proposal that we're asking for this, but this is, if this goes forward we will be asking this from all proposals, that there will be no partial data sets being shipped. FYT: Can I ask something here with that? I'm not quite sure about that. 
Because as you can see here, in this particular API the possible keys we have are `"calendar"`, `"collation"`, `"currency"`, `"numberingSystem"`, `"timeZone"`, and `"unit"`. So whether we ship 100 locales or 10 locales is not discoverable by this API, because `"locale"` is not one of the keys. I mean, I understand the fingerprinting concern. If one of the possible keys were `"locale"`, fine, the requirement would have merit. And if you're saying, please ship all the calendars, then I agree with you. But in this particular case, with regard to not shipping all the locales, ten versus 100 shouldn't be observable by this API at all. So I don't know that this will impact the fingerprinting concern.

@@ -475,9 +497,9 @@ YSV: Yes.

FYT: I understand what you're asking. I will bring that back to TG2 for that discussion. Thank you.

-YSV: Yeah. And this also comes with — any future APIs that do something like this, are required to ship the full data set as part of their implementation. 
+YSV: Yeah. And this also comes with the requirement that any future APIs that do something like this ship the full data set as part of their implementation.

-FYT: Could you clear up? What do you mean by 'like this'? 
+FYT: Could you clear this up? What do you mean by 'like this'?

YSV: Because 'like this' is vague: if we have other enumeration APIs, we don't ship partial data sets.

@@ -489,7 +511,7 @@ SFC: To discuss the iterator versus arrays question, just to follow up what I've

JHD: I have a quick reply to SFC. I think all of us largely understand the usability improvement or advantage of an array, but also the performance advantage of iterators.
If we decide to go with iterators - the `SupportedValuesOf.prototype` approach - we do have a pattern of ArrayIterator, StringIterator, RegExpStringIterator for matchAll… a pattern of making new primordial types for each of these special kinds of iteration. If iterators are going to be a pattern, if we actually want to have more of them going forward, we should think about it: is that a pattern we want to continue, or is there a different way of specifying it that doesn't create new primordials and prototype methods? So if we decide to go with arrays here, of course, that short-circuits that whole discussion.

-FYT: JHD, could you elaborate a little bit? More is so you're that currently my understanding is you're saying currently in 262 and for to in order to go through iterator we have to follow certain patterns or create a prototype object, right? So is that what you're saying, right? 
+FYT: JHD, could you elaborate a little bit more? My understanding is that you're saying that currently in 262, in order to return an iterator, we have to follow certain patterns and create a prototype object. Is that what you're saying?

JHD: That's the current precedent, exactly - and it’s not about how pretty this is to specify; the fact that it's complex to specify is an “us” problem. It's more that it creates another built-in object prototype that, in all of these cases, is not available on the global object, but you can still get to it through a number of other means. It may be too late because we have this precedent established, right? But it seems like, with hindsight, perhaps there would have been an alternative way to design these iterators in ES6. I wanted to put it in the committee's heads that we should decide consciously "Yes, we want to continue that precedent", or perhaps there's a better form we want to start using if we're going to have more iterator-producing APIs in the spec.
@@ -507,7 +529,7 @@ JHD: MM, I can answer that because we went through the same question with `Strin MM: I'm insisting that we don't, that this is a blocking issue and I should have paid more attention to `matchAll()`. I did not realize until this conversation that that had introduced another hidden primordial. We must stop introducing hidden primordials, or we must stop introducing them until and unless there is a standard way to get them. Introducing new ones is a breaking change to systems that are already deployed out there. -SFC: I believe `Intl.Segmenter` is another proposal we advanced, which also has the hidden primordial for the iterator, correct? +SFC: I believe `Intl.Segmenter` is another proposal we advanced, which also has the hidden primordial for the iterator, correct? RGN: That's correct, but the `Intl.Segmenter` iterator prototype also has other methods on it. @@ -517,7 +539,7 @@ FYT: Sorry because I'm the champion and I want to partially answer this. I have MM: I agree exactly with everything you just said. My apologies for not having paid enough attention to the proposals that during my inattention had introduced new primordials, I should have caught this much earlier. This is exactly an example of the process issue that YSV has brought up with the need to write down in the spec normative high level invariants. This is an example of such an invariant. This would be so that the people guarding the invariants don't have to pay attention to every single spec to make sure that a violation of the invariant doesn't sneak through. But now that I am aware of this, this is a blocking objection. -USA: I was just thinking if there was a convenient way for us to conclude that invariant right away. One way we could do it, would be maybe to introduce editorial notes in the places where this was done, make it so nobody copies that and and add somewhere very fine. 
+USA: I was just thinking if there was a convenient way for us to conclude that invariant right away. One way we could do it, would be maybe to introduce editorial notes in the places where this was done, make it so nobody copies that and and add somewhere very fine. YSV: I got distracted by other work, but I'll make sure that we start working on the invariants again and finding out an appropriate way to do it and I invite anybody else who wants to do this work to join me. I think we should also integrate the invariant by Moddable and continue that discussion. Maybe it's the SES calls or somewhere else. @@ -530,15 +552,17 @@ JHD: If we decide to stick with iterators and not arrays, I'm happy to work with MM: That would be wonderful. To be very clear, I'm not objecting to iterators, JHD plans for iterators and we find that the proposal as written. Now following the old pattern, I do object, that's a blocking objection. ### Conclusion/resolution + - Not attempting to advance ## Resizable ArrayBuffers for stage 3 + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-resizablearraybuffer) - [slides](https://docs.google.com/presentation/d/1K7t8lphY45yOfvsTOHxF4wZiMFCsVZZ_Bf_Wc7S3I_g/edit?usp=sharing) -SYG: All right. Resizable buffers again for stage 3. To recap, the action item from last time was to address the global constructor issue as raised by Moddable and before that JHD as well. We had an incubator call about this and the path that I decided to square the circle here and not introduce any new global constructors is to extend the existing ArrayBuffer and SharedArrayBuffer constructors instead, as presented very shortly at the end of last evening. +SYG: All right. Resizable buffers again for stage 3. To recap, the action item from last time was to address the global constructor issue as raised by Moddable and before that JHD as well. 
We had an incubator call about this, and the path I decided on to square the circle here, without introducing any new global constructors, is to extend the existing ArrayBuffer and SharedArrayBuffer constructors instead, as presented briefly at the end of yesterday evening.

SYG: So concretely, what that looks like is the following. The top section is the status quo: when you make an ArrayBuffer today, it takes exactly one parameter, which is the initial length of the buffer. I am proposing to add a read-only getter, so a getter without a setter, that lets you check whether the buffer is in fact resizable. If you don't pass a second argument to the ArrayBuffer constructor, it is not resizable: its byteLength and its maxByteLength are the same, and if you try to resize it with the resize method that I am proposing to add (or rather, am moving from the ResizableArrayBuffer prototype to ArrayBuffer's prototype), it would throw. If you pass an options bag specifying the maximum byte length to the ArrayBuffer constructor, you instead get a resizable buffer, where the resizable getter returns true. You still have byteLength, and of course you can resize, up to the maximum byte length; the getters do exactly what you would expect. How these buffers behave is exactly unchanged from the previous iteration, where they were a different type under ResizableArrayBuffer. The only difference is that they are under the same constructor now, but must be constructed with this additional options bag.

@@ -548,11 +572,11 @@ PHE: This is Peter from Moddable. This is great. This is exactly what I kind of

SYG: So that's basically the big API surface change. Everything else, the core semantics, how things behave, how resize works, how grow works, remains as it has been for a couple of meetings now.

-SYG: Some other updates since last time here is an interesting thing.
Concurrency is hard as we all know, there was a bug in the spec that we discovered during implementation in V8. What happens when you have concurrent calls to SharedArrayBuffer.prototype.grow. So, when you're trying to race your grows on the same buffer. Imagine the following situation: you have some growable SharedArrayBuffer that currently has a length of 10. You have two threads that are concurrently running andaAnd they are concurrently racing to grow the array buffer. Thread 1 tries to grow it to a byte length of 20 and thread 2 tries to grow it to a byte length of 40. One such execution that could happen is the following. One reads, the current length, sees it's 10, thread two at the same time reads, the current length, sees it's 10. Thread 2 wins the race and grows the shared buffer to 40. Now thread 1 tries to grow. At this point, the SAB has already grown to a length of 40. So what happens if you try to grow it to 20, there was a bug that actually allowed thread one to grow it to 20, in effect causing a shrink, which definitely should not be allowed. The fix here is a bunch of memory model arcana, like usual, but the general idea is that we impose a total order on all calls to share a buffer that prototype dot grow. So this kind of race cannot happen. That if thread two race in this case, then thread one would just fail. That there will always be a total order regardless of whether they're being raced or not. For the implementers, the idea is that when you are implementing grow, you have to update the length. The length can be updated atomically by either a single compare-and-swap, or a pair of load-linked and store unconditional. 
With architectures like x86, having compare-and-swap instructions, and archs like ARM don't have a single instruction to do compare-and-swap but they have these paired load and store instructions that kind of put an exclusive monitor bit on a particular memory location such that when you store it, it, the store would fail if the value if the bit that the load link - So the load link when you load a memory thing, it like put to puts a bit says, I put this into exclusive mode and then when try to store it store a value to the same location. The store checks if the exclusive bit is still there. If not the store will actually fail and the idea is that if something happened in between like somebody else updated the value, or even like the cache lines clear, it would fail and you lose the atomicity guarantee. and the and the idea here is that updating the length can be either done with a CAS or an LL/SC, either a single one, or in a loop. So, why do I think it's a good idea to allow this latitude to either have a single compare-and-swap or a looped compare-and-swap such that if there are spurious failures, that you just try again. My thinking here is that because ARM with LL/SC has more spurious failures due to, you know, how far apart the instructions are in the instruction stream and the likelihood that the cache line is cleared. I don't want to require a loop. I think forward progress is harder to guarantee in spec if we allow it, if we require the implementation to be looped, like, how do I guarantee that the loop is not going to stay forever? It's not going to become an infinite Loop? And most importantly, I want allow this implementation latitude because I don't want to put extra work in restricting. the kind of implementation that can be done for obviously bad behavior. Like you really should not be racing your grows. 
We of course as specification authors and as the standards committee have to give exact behavior and exactly specify the behavior here but the take-home should be that you should not be concurrently growing your buffers. That is a bad idea. Synchronize another way and grow them in a predictable way.
+SYG: Some other updates since last time; here is an interesting thing. Concurrency is hard, as we all know, and there was a bug in the spec that we discovered during implementation in V8: what happens when you have concurrent calls to SharedArrayBuffer.prototype.grow, i.e. when you race grows on the same buffer? Imagine the following situation: you have some growable SharedArrayBuffer that currently has a length of 10, and two concurrently running threads racing to grow it. Thread 1 tries to grow it to a byte length of 20 and thread 2 tries to grow it to a byte length of 40. One execution that could happen is the following. Thread 1 reads the current length and sees it's 10; thread 2 at the same time reads the current length and also sees it's 10. Thread 2 wins the race and grows the shared buffer to 40. Now thread 1 tries to grow. At this point, the SAB has already grown to a length of 40, so what happens when it tries to grow it to 20? There was a bug that actually allowed thread 1 to grow it to 20, in effect causing a shrink, which definitely should not be allowed. The fix here is a bunch of memory model arcana, as usual, but the general idea is that we impose a total order on all calls to SharedArrayBuffer.prototype.grow, so this kind of race cannot happen: if thread 2 wins the race in this case, then thread 1 would just fail. There will always be a total order, regardless of whether the calls are raced or not. For the implementers, the idea is that when you are implementing grow, you have to update the length.
The length can be updated atomically either by a single compare-and-swap (CAS), or by a load-linked/store-conditional (LL/SC) pair. Architectures like x86 have a compare-and-swap instruction, while archs like ARM don't have a single compare-and-swap instruction but instead have these paired load and store instructions, which put an exclusive monitor bit on a particular memory location: the load-linked load marks the location as exclusive, and when you later try to store a value to the same location, the store checks whether the exclusive bit is still set. If it isn't, because something happened in between, like somebody else updating the value or even the cache line being cleared, the store fails, and that is how the atomicity guarantee is kept. So updating the length can be done with either a CAS or an LL/SC, either as a single attempt or in a loop. Why do I think it's a good idea to allow this latitude, a single compare-and-swap or a looped one that retries on spurious failures? My thinking is that because ARM with LL/SC has more spurious failures, due to, you know, how far apart the instructions are in the instruction stream and the likelihood that the cache line is cleared, I don't want to require a loop. I think forward progress is harder to guarantee in the spec if we require the implementation to loop: how do I guarantee that the loop is not going to spin forever, that it's not going to become an infinite loop? And most importantly, I want to allow this implementation latitude because I don't want to put extra work into restricting the kind of implementation that can be done for obviously bad behavior. You really should not be racing your grows.
We of course, as specification authors and as a standards committee, have to specify the exact behavior here, but the take-home should be that you should not be concurrently growing your buffers. That is a bad idea. Synchronize another way and grow them in a predictable way.

-SYG: So, the observable implications for this implementation latitude, is that if you have user code, that is racing should a robot for grows, if your implementation has a single compare-and-swap that might fail while loop is comparing swap might succeed. So if trying many times until the grow succeeds is important, my recommendation you should write to looping in user land. And in my opinion, this difference is not a big deal because SharedArrayBuffer.prototype.grow can already throw due to, for example the time we were trying to grow there was temporary memory pressure in the system and you didn't have extra memory to commit. So so in general grow and resize can throw due to memory pressure. Which at some memories are at any particular time, you might at some later time you might have a bit get a failure and users all of resize them grow must deal with that failure anyway, possibly retrying later. And yeah so that that all comes back to like don't race your grows if you really want to race your grows and you want to for some reason, make them as robust as possible and trying as many times as possible to succeed then write the loop yourself. This is my thinking,
+SYG: So, the observable implication of this implementation latitude is that if you have user code racing SharedArrayBuffer grows, a single compare-and-swap might fail where a looped compare-and-swap would succeed. So if trying many times until the grow succeeds is important, my recommendation is that you write the looping in user land.
And in my opinion, this difference is not a big deal, because SharedArrayBuffer.prototype.grow can already throw: for example, at the time you try to grow there might be temporary memory pressure in the system and no extra memory to commit. So in general, grow and resize can throw due to memory pressure. Memory that is unavailable at one particular time might be available later, so users of resize and grow must deal with that failure anyway, possibly retrying later. So that all comes back to: don't race your grows; and if you really do want to race your grows, and for some reason want to make them as robust as possible, trying as many times as possible to succeed, then write the loop yourself. This is my thinking.

-SYG: And finally, another change that was made was Yulia from from Mozilla brought up that they would really like to see WebIDL integration be done before stage 3
+SYG: And finally, Yulia from Mozilla brought up that they would really like to see WebIDL integration be done before stage 3.

WH: It seems that you’re specifying that the CAS might fail. CAS might fail for all kinds of reasons, even if you don't have other workers. Does that mean that everybody has to worry about the shared array buffer grow not working?

@@ -562,17 +586,17 @@ WH: So what happens if it fails? You get an exception?

SYG: Yes. Any grow can cause an exception even if there's nothing else in your program which changes your length. I don't know how to distinguish that. Like, it'd be nice if that wasn't the case.

-WH: This answers my clarifying question. I wish to come back to this when you're done. 
+WH: This answers my clarifying question. I wish to come back to this when you're done.

SYG: Our WebIDL integration was, thankfully, fairly straightforward.
For folks who aren't familiar with WebIDL, it defines some types and also its defined some what it calls extended attributes over types. For its types that have to do with ArrayBuffers and things that can be backed by buffers like typed arrays and data views, there are extended attributes. New attribute, [AllowResizable], that lets those types allows any APIs that have that attribute to allow types to also be backed by resizable buffers. The default, which is the status quo today of all existing web APIs do not of course have that extended attribute. No Existing API allows resizable buffers and resizable buffer back to type the race to be passed. In the future new APIs may allow. Future extensions to APIs might. This doesn't change what happened today. And yeah, that's it. So let's take the queue. JRL: Question about the exposed API surface. So we have array buffer which will have a resize method and then we'll have shared array buffer which will have a grow method. Is there a need to have a different method for the two of them? Particularly could shared array buffer, just have a resize method that threw if you gave it a smaller size? -SYG: You're talking about a name, not not the actual message because they will still be two distinct methods that cannot be used with with receivers of the wrong type. +SYG: You're talking about a name, not not the actual message because they will still be two distinct methods that cannot be used with with receivers of the wrong type. JRL: Yeah, I understand. The same way that set has `has` and map has `has`. They share the same name because they operate similarly. -SYG: Yeah. I mean, I'm trying to figure out how strongly I feel about this. It is possible, of course, to rename it to to have both be just called resizable. Thought that would give the wrong impression because it's not a choice that SABs cannot shrink. And what is the value I suppose of having the consistent naming? 
Perhaps you want library code, that transparently deals with resizable buffers regardless of whether it's shared or not. I don't think that's a good idea. Dealing with shared buffers versus non-shared buffers is a pretty different thing. So I'm not convinced there's much value in having both being named resize. +SYG: Yeah. I mean, I'm trying to figure out how strongly I feel about this. It is possible, of course, to rename it to to have both be just called resizable. Thought that would give the wrong impression because it's not a choice that SABs cannot shrink. And what is the value I suppose of having the consistent naming? Perhaps you want library code, that transparently deals with resizable buffers regardless of whether it's shared or not. I don't think that's a good idea. Dealing with shared buffers versus non-shared buffers is a pretty different thing. So I'm not convinced there's much value in having both being named resize. AKI: Support having the same name. [nothing further because mic issues] @@ -590,7 +614,7 @@ SYG: That's a good point. Okay. Then with that argument, I retract my position o YSV: I'm pretty convinced by that argument. -SYG: Of course, I don't have that language written up, I hope we can iterate on that language. +SYG: Of course, I don't have that language written up, I hope we can iterate on that language. WH: Okay, it's fine to iterate on how to say it well in the spec, but the goal, either you get a consistent result or you run out of memory, is perfectly achievable in any reasonable implementation which supports locks. @@ -617,41 +641,43 @@ WH: Sounds good to me. RPR: All right. Any objections to stage 3? [silence] ### Conclusion/resolution + - Stage 3, with a change to the locking semantics for growing loops as proposed by WH ## Symbols as Weak Keys, pt 2 + Presenter: Leo Balter (LEO) -- [proposal]() -- [slides]() - +- proposal +- slides + LEO: This is something that I would like to bring up. 
Bring back the discussion that was in the queue, and also try to sum up the objections from Waldemar and Shu about restricting symbols to those that are truly unique; I'm sorry if I phrase it in a technically wrong way. By that, there are two things to be addressed: this disallows symbols registered in the global symbol registry (`Symbol.for`) and also well-known symbols, and I'm separating them rather than creating one single distinction here, trying to capture the reasons. Yes, this adds an internal extra step for the liveness of the symbol values; I understand this concern, and although I'm not an implementer, I'm taking this as "yes, this is not desirable", and I understand this from Shu as well. And one of the points Rick made when I talked to him is that a WeakMap key should not be guessable, and should be unreachable by any code that does not also explicitly have access to the WeakMap; a `Symbol.for` symbol would break that. I'm also trying to consider the other options, like, why not restrict them? Just to give a quick recap: the weak collections would have a check for known truly unique symbols. This is also odd from the developer's perspective, and I believe some of the feedback was related to that. But I'm not expanding on this, because I know that the next queue items will be from JHD and they might be related to this.

JHD: I had a clarifying question about the previous slide, if you don't mind. If you don't have access to the WeakMap, then you already can't get to any of the keys. But if someone puts, say, globalThis in a WeakMap, you can get to it even if you don't have the WeakMap. So I don't see how that is a property that exists today, or how adding symbols breaks it. So I was wondering if we could get that clarified.

-LEO: Um from my vision again, I'm not an implementer. think the point here is like if you add globalThis. Yeah, you can still add that and Google to exclude probably still be alive. 
Will be alive. Yes, you can do that today. But the thing is there is no extra step in the verification today that you have for adding globalThis. I think, what is implied here internally there will be, like, when Engine runs some internal check or ever how it goes there.
+LEO: From my vision, again, I'm not an implementer, I think the point here is: if you add globalThis, yes, you can still add it, and globalThis will probably still be alive. You can do that today. But there is no extra verification step today for adding globalThis; what is implied here is that internally there would be one, like the engine running some internal check, however that goes.

JHD: The last one on the slide.

-LEO: Yeah, that's why I'm also separating them. This is actually a question for like and all the Mark. Should we just disallow owe symbols listed in the global symbol registry? Or should we just a little both of these points? 
+LEO: Yeah, that's why I'm also separating them. This is actually a question for you all, and for Mark: should we just disallow symbols listed in the global symbol registry, or should we disallow both of these?

-JHD: sorry question That the weakMap, he should be unguessable in unreachable by any code that does not also have access to the WeakMap. I don't understand that point specifically, I understand that the liveness stuff, that's not what I'm talking about here. I'm I don't understand that last bullet point and that's what I was hoping to get clarification. 
+JHD: Sorry, the question is about the phrase "the WeakMap key should be unguessable and unreachable by any code that does not also have access to the WeakMap". I don't understand that point specifically. I understand the liveness stuff; that's not what I'm talking about here. I don't understand that last bullet point, and that's what I was hoping to get clarified.
-LEO: Well, its reachable if you create something with symbol table for you can get it from Anything. This second bullet point should definitely not. Might not. Fully get to the well known symbols but definitely for the symbol that for you can I mean I understand that crossrealms, right? 
+LEO: Well, it's reachable: if you create something with `Symbol.for`, you can get it from anywhere. On the second bullet point, you might not be able to fully get to the well-known symbols, but for `Symbol.for` symbols you definitely can. I mean, I understand that works cross-realm, right?

-JHD: I can yeah, that I could not only Crossroads in the same project that could create the symbol and then if also had access to the WeakMap, even indirectly, then I could pass that in, I get that first, but I don't have to have access to the WeakMap to be to have the ability to reach The key If the key happens to be already accessible to me, so I'm does that mean because not all explicitly have access to the key or to like like is that Miss phrased. Or I'm trying to understand why having access to the weakMap like makes a difference. 
+JHD: Yeah, and not only cross-realm: in the same realm I could create the symbol, and then if I also had access to the WeakMap, even indirectly, I could pass it in. I get that part. But I don't have to have access to the WeakMap to have the ability to reach the key, if the key happens to be already accessible to me. So is that bullet point misphrased? I'm trying to understand why having access to the WeakMap makes a difference.

LEO: I need to forward this to Rick. This is one of the things from when I was consulting with him; I'm just trying to channel this conversation, but yes, I don't have the full information about his feedback.

-LEO: I'm sorry about this Interruption and my goal here is just to set it step forward. 
I don't think this is like Is, it's too much for me to request stage 3 in this meeting, but I want to make sure that we have a step forward. Like the next steps for to be addressed for the next meeting. So this is more like clarification concerns. If I don't answer all everything. Yes, I am also not going to ask for Stage 3 today. 
+LEO: I'm sorry about this interruption; my goal here is just to take a step forward. It's too much for me to request stage 3 at this meeting, but I want to make sure that we have a step forward, like the next steps to be addressed before the next meeting. So this is more about clarifying concerns, even if I don't answer everything. Yes, I am also not going to ask for stage 3 today.

-MM: First of all, I want to take a moment to just address the previous question the phrase and that does not also have explicitly have access to the WeakMap. Cannot possibly be a correct phrase to include in this bullet point because weakMap, you not provide access to their whatever access you have with them without the weakMap. You have the same access with You can now with regard to the overall question I prefer and have always preferred that the version that Leo is currently shown of the one that expresses the objections from Waldemar and Shu. Historically, I want to just say a little bit about how we got here which is historically I objected I raised the same object that Waldemar and Shu are now raising, and then JHD raised the objection that allowing unregistered symbols while disallowing registered symbols created too much of a surprise, which I'm sympathetic to. And initially that took symbols as weak map keys out of the running entirely because there was no way to resolve that. Since then we've come up with some use cases that make it clear that symbols as weakmap keys have some uses. 
All those use cases only need the unregistered ones, and the use cases are sufficiently obscure for normal users, they are basically systems building use cases, that I think that they can overcome the usability surprise objection. There are some contexts where we want to disallow non primitive values. This is come up twice in the context of new proposals. One is the callable boundary in the revised Realms proposal to build membranes on top of. It's essential that not allow objects, but allowing something that can serve as WeakMap keys and has unforgeable uniqueness enables us to build membranes with good garbage collection and then the other one that came up is a records and tuples following symbols that can be used as WeakMap keys in there. Enables the rights amplification pattern, such that given a registry one can from those symbols look up other objects without contaminating the immutable data with the objects themselves. So even though both of those are future proposals, they're sufficiently different from each other that I think that they show that there is a systemic issue here. So my current position is, I am inclined to allow symbols as WeakMap keys to go forward either way because systemic issue shows that the extra ability is general enough. I'm okay, allowing them including registered symbols and well-known symbols, but I'm uncomfortable with that and I believe that the obscurity of the use of those cases should overcome JHD’s historical objection. 
+MM: First of all, I want to take a moment to address the previous question. The phrase "and that does not also explicitly have access to the WeakMap" cannot possibly be a correct phrase to include in this bullet point, because WeakMaps do not provide access to their keys.
You have the same access without the WeakMap that you have with it. Now, with regard to the overall question: I prefer, and have always preferred, the version that Leo is currently showing, the one that expresses the objections from Waldemar and Shu. I want to say a little bit about how we got here. Historically, I raised the same objection that Waldemar and Shu are now raising, and then JHD raised the objection that allowing unregistered symbols while disallowing registered symbols created too much of a surprise, which I'm sympathetic to. And initially that took symbols as WeakMap keys out of the running entirely, because there was no way to resolve that. Since then we've come up with some use cases that make it clear that symbols as WeakMap keys have some uses. All those use cases only need the unregistered ones, and the use cases are sufficiently obscure for normal users (they are basically systems-building use cases) that I think they can overcome the usability-surprise objection. There are some contexts where we want to disallow non-primitive values. This has come up twice in the context of new proposals. One is the callable boundary in the revised Realms proposal, to build membranes on top of: it's essential that it not allow objects, but allowing something that can serve as a WeakMap key and has unforgeable uniqueness enables us to build membranes with good garbage collection. The other one that came up is records and tuples: allowing symbols that can be used as WeakMap keys in there enables the rights-amplification pattern, such that, given a registry, one can from those symbols look up other objects without contaminating the immutable data with the objects themselves. So even though both of those are future proposals, they're sufficiently different from each other that I think they show that there is a systemic issue here.
So my current position is: I am inclined to allow symbols as WeakMap keys to go forward either way, because the systemic issue shows that the extra ability is general enough. I'm okay allowing them including registered symbols and well-known symbols, but I'm uncomfortable with that, and I believe that the obscurity of those use cases should overcome JHD’s historical objection.

JHD: This has come up in plenary before, obviously not in any way that prevents anyone from having their objections now. Mark's recollection of history is accurate. The rationale for trying to not distinguish between the kinds of symbols is also accurate. Essentially there are three kinds of symbols: regular ones; well-known ones, which are cross-realm but not in the registry; and then registry symbols. If we are trying to ensure that the only things you can put in a WeakMap are things that could at some point be collected, then this proposed alternative would achieve that. But it's really easy to create things that in practice aren't collectible and put them in a weak collection, like globalThis, or WeakMap itself (the constructor), or anything that you store as a property on a WeakMap. You can make them be collectible, but in practice, if you do one of those things, they'll probably remain uncollectible, and that's just not an issue. I think it would be really weird, weirder than not having this proposal at all, to allow some kinds of symbols but not others, and I think it would make it a bad practice to use all three of those kinds of symbols. I don't know in which direction that would move things, but I expect a lot of folks would say: just don't use the registry, make a regular symbol and pass it between realms, because then you can use it in a WeakMap. I think it's just a usability issue, and Justin's point next on the queue I think more eloquently explains it.
So I think I'd pass to that, but I really think that this is an unacceptable alternative.

SYG: I'll respond to JHD's analogy first, which I think falls down a little bit. I don't think it's accurate to make an analogy with things that are global, like globalThis or whatever else that you can put in a WeakMap today. From an implementation point of view, the problem isn't that `Symbol.for` symbols are registered. The issue is that `Symbol.for` symbols are collectible today, unlike things that are global. Things that are global are already not collectible because we keep them alive on the global. But because there's no way to observe the identity of `Symbol.for` symbols, they can, in fact, be collected when the last reference goes away. But with this proposal, if `Symbol.for` symbols are allowed in WeakMaps, then the minute they are put into a WeakMap, from that point on they become uncollectible, and that's the implementation issue. I think that's the distinction that's important to make, not that they are already uncollectible and, so, what's the problem with putting an already uncollectible thing in. It's very strange that a collection named WeakMap would cause something to live forever. I don't think that's great. I would like to hear what Justin has to say for the other part; I won't respond to the user-burden part of it. And then, finally, I think there's agreement from the champion group here: I do think it is important to draw the line at putting in things either with identity or in this class of values that can be garbage collected once the last reference goes away; to draw the line there for what can be put in as keys. For collections it would be highly problematic to allow other primitives like numbers, but I don't think there's actually any disagreement there, so that sounds fine.
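[Editor's note: SYG's liveness point can be sketched with a plain `Map`, which runs in any engine; the registry key `'session/42'` and the payload are made up for illustration.]

```javascript
// A registered symbol can be recreated at any time from its string key,
// so an entry keyed by one can never become unreachable. A WeakMap that
// accepted such keys would be "weak" in name only for those entries.
const cache = new Map();

(function producer() {
  const key = Symbol.for('session/42'); // registered: recreatable anywhere
  cache.set(key, 'large payload');
})(); // the local `key` binding is gone here...

// ...but unrelated code can resurrect the key and still reach the entry:
console.log(cache.get(Symbol.for('session/42'))); // 'large payload'
```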
-JRL: So, I actually have a response to Shu or reply to Shu asking about if any implementations currently collect symbol, that for symbols. 
+JRL: So, I actually have a reply to Shu, asking whether any implementations currently collect `Symbol.for` symbols.

JRL: Okay, so your argument here changes my mind, just because it now becomes the same kind of bug that we observed with tagged template literals, where their memory was essentially uncollectible because you could reproduce the tagged template literal at any point in the future. If this is the same bug that could happen with `Symbol.for`, then we've just introduced a new GC bug, and that seems bad. My original response, though, was that banning `Symbol.for` symbols just pushes the responsibility onto the users who are trying to key with a symbol. So instead of just being able to call WeakMap.get or .set with a symbol that I am passed, I now have to verify that this symbol is not registered globally, which just means that if I need to key on it, I'm inserting it into a Map instead. I have to now switch between a Map and a WeakMap, and the fallback that I'm going to have to implement is the exact same uncollectible memory allocation, held until the Map itself becomes collectible.

@@ -661,9 +687,9 @@ WH: My only concern is about symbols which can be resurrected by naming them aga

LEO: The idea was just to put everything in perspective here, to make sure I actually address the concern. So I just wanted to make sure whether we should go with bullet one, or both one and two.

-WH: I strongly do not think that we should go with bullet two, I don't want to get into the business of figuring out what the definition of a well-known symbol is. 
+WH: I strongly do not think that we should go with bullet two, I don't want to get into the business of figuring out what the definition of a well-known symbol is.

-MM: I'm gonna introduce a quick clarifying. 
Question the well known symbols. Are they? Do they have a unique identity / realm are they the same or know? They are the same across every realm in there knocking registered. 
+MM: I'm going to introduce a quick clarifying question about the well-known symbols: do they have a unique identity per realm, or are they the same? Are they the same across every realm, and not registered?

MM: If they're the same across realms then they are recreated when you create a new realm.

@@ -680,6 +706,5 @@ LEO: Symbol itself can be removed.

BN: I like the consistency of being able to put all kinds of symbols into the WeakMap, but I think these objections from WH and others do have merit and I just want to point out that what we're talking about is the cases in which WeakMap#set will throw when you try to put something into it. If we go with the version of the proposal where Symbol.for symbols are disallowed, that means WeakMap#set will throw if you pass such a symbol. That's not great for consistency, but in terms of web compatibility, it's a whole lot easier to undo that, to stop throwing that exception in the future, than it would be to enable Symbol.for symbols and then later disallow them. And, you know, because this is going to be something that people are using all over the world, I think if we ship that more conservative version of this (throwing for Symbol.for symbols), the community will let us know if that was such a painful mistake that we need to revisit it… and then we can do that, right? We can just make it stop throwing and say, “yeah, these symbols aren't going to be collected, but it's better for consistency to allow them in WeakMaps.” So I think I like the strategy/conservatism of that staged approach, of initially disallowing Symbol.for symbols, but considering enabling them at a later date.

### Conclusion/resolution

-- LEO to have a thread on Github to discuss allowing Symbol.for symbols and well-known symbols. 
- +- LEO to have a thread on Github to discuss allowing Symbol.for symbols and well-known symbols. diff --git a/meetings/2021-05/may-26.md b/meetings/2021-05/may-26.md index 24477f04..e26003a6 100644 --- a/meetings/2021-05/may-26.md +++ b/meetings/2021-05/may-26.md @@ -1,7 +1,8 @@ # 26 May, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Mathias Bynens | MB | Google | @@ -18,9 +19,10 @@ | Rob Palmer | RPR | Bloomberg | ## Discussion of globals and guidance for future proposals + Presenter: Shu-yu Guo (SYG) -SYG: This is not a proposal. It is meant for discussion with the folks who are here. It was brought up in the context of the resizable buffers proposal at the last meeting specifically from Moddable (PHE) and JHD that it is undesirable to add new global constructors. Specifically, for Moddable, there is an implementation difficulty with adding new globals in that their engine being in the embedded space, would need to put globals into RAM instead of ROM, which is more expensive and comes dear for embedded devices, I suppose. So that restriction, if we follow it, significantly affects all future proposals as well. So I want to open the floor to discussion here. Should we limit global constructors in the future? What should we do? What are people's thoughts? I want to start with with asking the Moddable folks — did they have any cycles between last meeting and this one to see if they can overcome the technical difficulty first before we move on to the discussion. +SYG: This is not a proposal. It is meant for discussion with the folks who are here. It was brought up in the context of the resizable buffers proposal at the last meeting specifically from Moddable (PHE) and JHD that it is undesirable to add new global constructors. 
Specifically, for Moddable, there is an implementation difficulty with adding new globals, in that their engine, being in the embedded space, would need to put globals into RAM instead of ROM, which is more expensive and comes dear for embedded devices, I suppose. So that restriction, if we follow it, significantly affects all future proposals as well. So I want to open the floor to discussion here. Should we limit global constructors in the future? What should we do? What are people's thoughts? I want to start with asking the Moddable folks — did they have any cycles between last meeting and this one to see if they can overcome the technical difficulty first, before we move on to the discussion.

PHE: I'll take a few minutes to answer the technical question first, and for comments more globally, I'll get to them later in the time. At the last meeting, just as a refresher, we had raised the issue of global pressure on our engine and SYG had asked for some time to investigate what we might be able to do to mitigate that. We talked about that. Patrick took some time to do some implementation work. That was largely successful. I want to walk you through a couple of details of that so you understand.

@@ -36,9 +38,9 @@ SYG: Indeed. Thank you very much for taking the time to the prototype and to fin

JHD: I think that we have to be able to create new globals or we've unnecessarily hamstrung ourselves for what we can add to the language. I'm very glad to hear that Moddable has found an approach that relieves that burden, but still there are lots of reasons I think why we should still avoid new globals when it makes sense. The implementation concern may no longer be there, but then there's the SES-style concern of enumerating globals, and there are usability and linting concerns: configuring your project's linting config so that it knows which things are global in your target environment and which things are not is some complexity that the user ecosystem has to bear.
The guidance I would hope to see is that a new global is fine, but if there's a way to make something not be global and it doesn't violate other constraints that we have, then I think that should be preferred. -SYG: Thanks. +SYG: Thanks. -MM: I just wanted to enumerate the other ways of addressing the equivalent of global pressure. The other options, other than just adding a new global variable. I want to first of all recapitulate the history. We originally raised built-in modules explicitly. One of the major motivations for that was to avoid having every new proposal add to the global namespace. So when we took built-in modules off the table, when we rejected it as a thing that could withstand TC39, we explicitly rejected a solution that was motivated to solve this problem. The other thing is a namespace object, like `Math`, like `Reflect`, is often where multiple globals will go together to be part of a conceptual unit or have something else in common and introducing a namespace object to aggregate. Those introduce a global namespace for the namespace object and then it's basically your traditional hierarchical namespace. Same reason we have directories rather just flat files in the space. With regard to the implementation pressure, Peter already talked about the bookkeeping that you can do for relieving the implementation pressure. But the implementation pressure is not really the main concern here. The main concern here is the usability problem of polluting, the growth of the global namespace. And the fact that the global namespace is shared with hosts and is shared with applications. So, as we introduce new global names, there's always the hazard that we're either incompatible with some host that had expanded into that same name or incompatible with some global variable of some application that is just simply of that name. These are all issues. +MM: I just wanted to enumerate the other ways of addressing the equivalent of global pressure. 
The other options, that is, other than just adding a new global variable. I want to first recapitulate the history. We originally raised built-in modules explicitly; one of the major motivations for that was to avoid having every new proposal add to the global namespace. So when we took built-in modules off the table, when we rejected it as a thing TC39 could sustain, we explicitly rejected a solution that was motivated to solve this problem. The other thing is a namespace object, like `Math`, like `Reflect`: often multiple globals will go together as part of a conceptual unit, or have something else in common, and introducing a namespace object to aggregate those introduces a single global name for the namespace object, and then it's basically your traditional hierarchical namespace, for the same reason we have directories rather than just flat files. With regard to the implementation pressure, Peter already talked about the bookkeeping that you can do to relieve the implementation pressure. But the implementation pressure is not really the main concern here. The main concern here is the usability problem of pollution, the growth of the global namespace, and the fact that the global namespace is shared with hosts and is shared with applications. So as we introduce new global names, there's always the hazard that we're either incompatible with some host that had expanded into that same name, or incompatible with some global variable of some application that simply has that name. These are all issues.

SYG: Thanks, MM. I agree that namespace objects have precedent and seem like a good compromise going forward, if a natural grouping is there. We could as a committee discuss carving those out right now; they're mostly ad hoc. `Temporal` makes sense as a collection of objects. `Math` was there from the beginning.
But notably, we didn't put `WeakMap` or `Set` into, say, a `Collections` namespace, given that we have a status quo of being ad hoc. From the web platform's point of view we want to keep using globals. Is the committee interested in carving out namespace objects in this fashion, or should we continue in an ad hoc fashion, adding them as needed?

@@ -60,7 +62,7 @@ MM: Excellent.

SYG: I'm not sure I would interpret it so positively. The mechanism, I agree, is fine. The sticking point hasn't changed: namely, that even if built-in modules were a thing, and TC39 specified things using built-in modules, the agreement from the web platform was that it would continue to use globals and would not expose new things only via built-in modules, and that has not changed.

-YSV: Exactly what SYG said. That is the status there. There is also one issue with built-in modules, I believe the layering with the HTML spec that would have allowed polyfilling them is no longer possible, but my memory is failing a little bit there. 
+YSV: Exactly what SYG said. That is the status there. There is also one issue with built-in modules: I believe the layering with the HTML spec that would have allowed polyfilling them is no longer possible, but my memory is failing a little bit there.

MM: Okay, so I'm going to walk away from this taking all of this as a very positive indication compared to what I remember of previous discussion.

@@ -84,18 +86,17 @@ JHD: That seems like a reasonable stage 3 requirement for certain, but I'm not s

MM: I don't think I would be happy to see that as a stage 2 issue. It's okay if it turns out to be an inaccurate view, but impact on the primordials is definitely something we're going to argue about in stage 2, so it might as well be explicitly required in stage 2.

-
JHD: Seems like that's a worthy PR to the process document that we could then discuss after some async iteration on it.
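[Editor's note: the `Collections` namespace SYG mentions above is hypothetical, but the namespace-object pattern he and MM describe can be sketched in a few lines; nothing here is a real or proposed global.]

```javascript
// Namespace-object pattern: group related constructors under ONE global
// name instead of adding several top-level globals. `Math` and `Reflect`
// are the existing precedents; `Collections` is invented for illustration.
const Collections = Object.freeze({
  Map,
  Set,
  WeakMap,
  WeakSet,
});

// Only the single name `Collections` would land on the global namespace:
console.log(Object.keys(Collections)); // ['Map', 'Set', 'WeakMap', 'WeakSet']
console.log(new Collections.Map([['a', 1]]).get('a')); // 1
```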
DE: I want to suggest that these things be placed in how-we-work, which has a lot of guidance for proposal champions, rather than in the process document, because people currently, just as we saw in this proposal, could already raise these things as concerns. In how-we-work, we can be a lot more descriptive about best practices that aren't quite binding. I think this would be a funny thing to make binding. There are really a lot of conventions for JavaScript, which extend in quite a detailed way beyond whether something is a global or not, about how built-in objects are made, that would be great to describe in how-we-work. So I think this would be a good effort to follow up on offline.

-AKI: This is actually a good point, of the distinction between our prescriptive document and our descriptive document. The process document we'll keep pared down because it's only the most explicit, and in how-we-work we can be a lot more descriptive and a lot more wordy, frankly, documenting things. 
+AKI: This is actually a good point about the distinction between our prescriptive document and our descriptive document. The process document we'll keep pared down, because it's only the most explicit material, and in how-we-work we can be a lot more descriptive and, frankly, a lot more wordy in documenting things.

JHD: To clarify, what I was saying is SYG had already suggested adding to how-we-work. And I think the majority of all the guidance should go there for the reasons stated, but then someone suggested having an explicit requirement like enumerating expected impact on globals, and that's the space. That one line is the part that I was thinking might be useful to put in the process document. The rest, I completely agree with you, would go into how-we-work.

DE: I don't support that addition to the process document because I don't think we should privilege this one thing over all the other kinds of invariants that people are going for.
This really sounds like one of those kinds of invariants. Documenting and agreeing on invariants is its own effort; I don't think we should rush to put this in the process document. This would be the first invariant in the process document, right?

-JHD: So again to clarify, I'm not suggesting it be an invariant across the stack and I'm suggesting simply that they be explicitly mentioned, not that there be any constraint on what they are or how they're organized. That's how-we-work stuff. Just a suggestion. 
+JHD: So again, to clarify, I'm not suggesting it be an invariant across the stack; I'm suggesting simply that they be explicitly mentioned, not that there be any constraint on what they are or how they're organized. That's how-we-work stuff. Just a suggestion.

DE: There's really a lot that goes into making a good explainer, and I don't think we should be trying to encode all that in the process document. I don't think we'll do a good job that way.

@@ -104,23 +105,26 @@ MM: To clarify two things about my previous comments. Whether this conversation

YSV: I also support the suggestion of adding this to how-we-work. I can also imagine that this might resolve an issue that I opened recently about how to communicate the ideal process for interacting with host specification integration, which was a hot-button issue from last week. Moving it into how-we-work rather than the process document would be a good resolution, if people are open to that. Additionally, I think that when it comes to invariants we'll soon have a process for that, to make that clear, and that'll be a stronger guarantee later on.

SYG: For what it's worth, as for my original topic that I wanted to discuss, I'm satisfied with the discussion of the outcome here and the concrete action item of doing some work in the how-we-work repo. So there's nothing more from me.

+
### Conclusion/Resolution
+
New globals are ok.
Discussion of guidance to continue and hopefully be documented in how-we-work.

-
## Resizable Buffers (Continuation)
+
Presenter: Shu-yu Guo (SYG)

-SYG: This is just a clarification from the discussion yesterday with Waldemar to disallow spurious failures in compare-and-swap when growing. It turns out I just don't understand the memory model even though I wrote it, and that the spec already prohibits spurious failures, so nothing needs to change. I misunderstood what I wrote myself and the desired behavior is already there. I just wanted to give an update to plenary, 
+SYG: This is just a clarification from the discussion yesterday with Waldemar about disallowing spurious failures in compare-and-swap when growing. It turns out I just didn't understand the memory model, even though I wrote it: the spec already prohibits spurious failures, so nothing needs to change. I misunderstood what I wrote myself, and the desired behavior is already there. I just wanted to give an update to plenary.

-WH: I'm glad to hear that. 
+WH: I'm glad to hear that.

YSV: I misunderstood what you wrote in the chat. I thought you said that the SharedArrayBuffer spec already had spurious failures, and it integrates (?)

SYG: No, spurious failures are already not allowed.

### Conclusion/Resolution
+
Proposal is unconditionally stage 3

## Housekeeping

@@ -135,7 +139,6 @@ AKI: There's nothing on the queue.

RPR: So please consider yourselves encouraged.

-
## Admin: Realtime Chat Networks

AKI: Next up, our formally endorsed chat solution. I was planning on waiting until the end of day today because I wanted to give everyone a little bit more time to adjust and get used to new things. But since we have the time, now seems like a good time to talk about it. Freenode is, to quote SYG, up to some cartoon villainy. Such ridiculous behavior that there's no real good reason to tacitly endorse their existence by having ourselves based there.
And given all of the really great and thorough work that the inclusion group has already done, I would love it if we could talk about using Matrix going forward. If I remember correctly, it was Shu who commented on a Reflector issue many months ago talking about, when it comes to inclusion, there are two different types of inclusion. We talked about ways that we can be inclusive to the community and we also talk about ways we can be inclusive to new delegates. I think this can cover both. We can have Matrix rooms that are accessible to the community that you don't have to have been using real-time chat for 25 years in order to understand (because for lots and lots of people IRC is just incomprehensible). Also for new delegates, it can be intimidating for a new delegate who, going through their onboarding email, sees where our work is done: GitHub — great, I use GitHub; Discourse — click that, okay, I see what that is; IRC — oh, that's that thing that people who have been doing this for eternity use and they will make fun of me if I don't understand it. That's a pretty common response to seeing IRC these days. So I think it would be a great idea to formally endorse Matrix and move forward. What do y'all think?

@@ -154,9 +157,9 @@ CDA: We'd still be able to have the IRC Bridge within Matrix right?

JHD: If that's something we wanted we certainly could, I'm sure.

-AKI: Yes, right. I hear it's not quite as solid as we want to believe it is but but it is an option. 
+AKI: Yes, right. I hear it's not quite as solid as we want to believe it is, but it is an option.

-CDA: it's just nice to be able to deduplicate. 
+CDA: It's just nice to be able to deduplicate.

AKI: Sure.

@@ -168,9 +171,9 @@ KG: I wanted to say that I have been using IRC forever, and I will continue usin

WH: So one thing, as I mentioned yesterday, the one thing that's still missing from Matrix — or maybe is there but I haven't figured out how to do it — is downloading logs.
I'd like us to figure that out before we officially endorse Matrix.

-AKI: Sure, I don't disagree especially given the history we've had with US trade law and whatnot. It's pretty vital. I know there are ways to access logs. There's probably a script we could write to grab the logs that are—and look, Tierney's on it (TCN pasted https://github.com/rubo77/matrix-dl into the call’s chat). But additionally, WH, I do believe we are looking into if there is a better way for us to be logging. 
+AKI: Sure, I don't disagree, especially given the history we've had with US trade law and whatnot. It's pretty vital. I know there are ways to access logs. There's probably a script we could write to grab the logs, and look, Tierney's on it (TCN pasted https://github.com/rubo77/matrix-dl into the call’s chat). But additionally, WH, I do believe we are looking into whether there is a better way for us to be logging.

-WH: Yes I understand. And I would like us to figure that out before we officially endorse Matrix. 
+WH: Yes, I understand. And I would like us to figure that out before we officially endorse Matrix.

KG: WH, are you okay with just saying we are confident that we will have a solution within the next few weeks?

@@ -180,25 +183,25 @@ KG: Sure, I can commit to having a tool by the next meeting.

WH: That would work.

-TCN: To JHD's point about carefully handling the exodus I would definitely say we probably just leave the channel and not say anything and to remove references. But never say we've left. Again, cartoon villainy. It's real bad, real bad, and I would prefer that we retain that control. 
+TCN: To JHD's point about carefully handling the exodus: I would definitely say we probably just leave the channel quietly, not say anything, and remove references, but never say we've left. Again, cartoon villainy. It's real bad, real bad, and I would prefer that we retain that control.
And I'm happy, I'm going to be sitting on Freenode for a while for similar reasons, regardless. -JHD: I'm happy to be a signpost there to quietly nudge people in the right direction for as long as necessary. +JHD: I'm happy to be a signpost there to quietly nudge people in the right direction for as long as necessary. -PFC: I definitely support not having official things on IRC that the delegates should be recommended to keep up with if they want to participate. Because another problem I've had with IRC is the spam floods. I don't know if they occur in the TC39 channels but especially with the recent controversy around Freenode, I've seen a large uptick of spam floods with offensive and racist language. I don't think that's something that delegates should be subjected to if they want to participate in TC39. So I'd rather move sooner rather than later. +PFC: I definitely support not having official things on IRC that the delegates should be recommended to keep up with if they want to participate. Because another problem I've had with IRC is the spam floods. I don't know if they occur in the TC39 channels but especially with the recent controversy around Freenode, I've seen a large uptick of spam floods with offensive and racist language. I don't think that's something that delegates should be subjected to if they want to participate in TC39. So I'd rather move sooner rather than later. -TCN: To expand on that, they [Freenode, not Libera] explicitly removed their prohibition on that kind of language. I would agree. +TCN: To expand on that, they [Freenode, not Libera] explicitly removed their prohibition on that kind of language. I would agree. MLS: There's a little bit of inertia to make it happen and so that's slightly an accessibility issue. -AKI: I don't disagree, I think for people coming to the committee now people who are newer and people who like I mentioned earlier haven't been using IRC for years. 
It's one or the other for those people; you know, IRC is not super common anymore. There is a web client if people do not wish to install another native chat client.
+AKI: I don't disagree. I think for people coming to the committee now, people who are newer and people who, like I mentioned earlier, haven't been using IRC for years. It's one or the other for those people; you know, IRC is not super common anymore. There is a web client if people do not wish to install another native chat client.

-MLS: Yeah, I'm not denying that. I may be an anachronism, it's just that there may be some inertia.
+MLS: Yeah, I'm not denying that. I may be an anachronism, it's just that there may be some inertia.

MPC: A quick reply to be careful about the term accessibility and explicitly say accessible for whom. For example, are you talking about accessibility for users who haven't used a certain platform before? That's what the inclusion group has been thinking about, mostly, when we've been, like, piloting Matrix and thinking about chat platforms, you know, is it common in the web space? Accessibility for users with a visual impairment or a hearing impairment or something like that, or is it accessibility in terms of, like, convenience to oneself, which is a perfectly valid thing. But I think it's very worth specifying that explicitly for the sake of clarity.

MLS: My kind of who is related to what (?) was that to begin this conversation as far as accessibility, you have to get everybody that you expect and want to be part of it to be switched over to using new tools, new servers, new methods and all that other stuff.

-MPC: I think RPR's reply can speak to the efficacy of what we've observed with Matrix.
+MPC: I think RPR's reply can speak to the efficacy of what we've observed with Matrix.

JRL: To MLS's point here, this is the only group that I'm in that uses IRC as a requirement in order to participate.
So I'm essentially going to be trading my IRC client, for which I have to pay a yearly subscription to have a decent service, over to Element app for Matrix. So, trading one app for another. And now with this new app, I no longer have a paid subscription, which is a nice plus. In general, I'm trading one app for another, so it's neutral. I'm not adding another app to my setup. @@ -220,15 +223,15 @@ AKI: We didn't exactly expect to make this transition as suddenly as we found ou SYG: I also don't hate Matrix, so fine with me. I was interested in asking the room about JHD's point, that if we keep a community channel where the community is, including IRC, who would actually stay in that channel? As the temperature check I seem to be getting from the room, is everyone so eager to uninstall their IRC client? -AKI: I will have. I'll have IRC Cloud running for eternity, because when will I ever not be on IRC? I don't know. And I think JHD said as well. +AKI: I will have. I'll have IRC Cloud running for eternity, because when will I ever not be on IRC? I don't know. And I think JHD said as well. JHD: I have one to two dozen channels I’m in daily, and that's now doubled because they're on both networks. So I'll never be uninstalling my IRC client, so it's no burden for me to continue to be a presence there. -SYG: I would prefer there to be fewer official venues to engage the community, quote unquote. But I guess provided we already have a bunch, like we have the discourse, we have the current IRC, we will have Matrix. We will have another IRC then? +SYG: I would prefer there to be fewer official venues to engage the community, quote unquote. But I guess provided we already have a bunch, like we have the discourse, we have the current IRC, we will have Matrix. We will have another IRC then? AKI: A lot of the Freenode and Libera interactions will likely be gently leading people toward Matrix. 
If they're particularly disinterested in that move, we can entertain a brief conversation. Ideally our more involved conversations wind up on Discourse, a less ephemeral forum. I'm not super concerned about staffing, but I get your point—having more and more “official” venues is not great. I'm not super concerned because I think we can gently guide people toward Matrix most of the time. And when we can't, then we can have an individual conversation.

-JHD: And we can have all of our IRC documentation pointing to our preferred official venues and not even necessarily mentioning the others. People that are still in Freenode will be gently nudged, we hope, over time, in a different direction. And then the people, if there are people, who happen to be on Libera, who want to be on IRC and who are thus there on Libera, we can freely make them aware of the other venues if they still prefer IRC. That's fine. That's on them. But you know, I expect that the majority of people will discover whatever our documentation points to that's on the website and how-we-work.
+JHD: And we can have all of our IRC documentation pointing to our preferred official venues and not even necessarily mentioning the others. People that are still in Freenode will be gently nudged, we hope, over time, in a different direction. And then the people, if there are people, who happen to be on Libera, who want to be on IRC and who are thus there on Libera, we can freely make them aware of the other venues if they still prefer IRC. That's fine. That's on them. But you know, I expect that the majority of people will discover whatever our documentation points to that's on the website and how-we-work.

SYG: Yeah, that sounds reasonable to me. So I wanted to make sure that there was more than a single volunteer for these other venues to staff them. It sounds like we do have that, so it's fine.
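On the logging question raised earlier (WH's requirement, and the export tool KG committed to delivering by the next meeting), a purely illustrative sketch of what the formatting half of such a tool might look like. It assumes the raw room events have already been fetched, for example by paginating `GET /_matrix/client/v3/rooms/{roomId}/messages` in the Matrix client-server API, or via a wrapper such as the `matrix-dl` script TCN linked. The `events_to_log` helper and the sample event shapes are this sketch's own assumptions, not the committee's actual tooling.

```python
# Hypothetical sketch only: turn already-fetched Matrix room events into
# IRC-style log lines. Fetching (auth, pagination) is out of scope here.
from datetime import datetime, timezone


def events_to_log(events):
    """Format m.room.message events as "[date time] <sender> body" lines."""
    lines = []
    for ev in events:
        if ev.get("type") != "m.room.message":
            continue  # skip membership changes, reactions, etc.
        ts = datetime.fromtimestamp(ev["origin_server_ts"] / 1000, tz=timezone.utc)
        body = ev.get("content", {}).get("body", "")
        lines.append(f"[{ts:%Y-%m-%d %H:%M}] <{ev['sender']}> {body}")
    return lines
```

A real exporter would also need to handle edits, redactions, and media events, which is part of why a dedicated tool was worth committing to.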
@@ -236,7 +239,7 @@ SFC: I had a response to what SYG was saying about multiple channels communicati

YSV: I think this experiment has gone really well. It seems like we had a pretty seamless transition.

-SFC: My next comment was just a PSA about SSO. I like to use SSO (single sign-on) as much as possible for all these accounts because it's fewer passwords to remember and it's more secure. The mozilla.org server for Matrix has SSO, and it has both Google and GitHub SSO, and that's a great option for you to use. That's how I have my Matrix set up now, via the Mozilla server as opposed to the Element server, but since it's federated, I can still join all the same channels. But if you make your home account on the Mozilla server, then you can use SSO, which is kind of convenient. Which is another perk that is in Matrix that's not in IRC.
+SFC: My next comment was just a PSA about SSO. I like to use SSO (single sign-on) as much as possible for all these accounts because it's fewer passwords to remember and it's more secure. The mozilla.org server for Matrix has SSO, and it has both Google and GitHub SSO, and that's a great option for you to use. That's how I have my Matrix set up now, via the Mozilla server as opposed to the Element server, but since it's federated, I can still join all the same channels. But if you make your home account on the Mozilla server, then you can use SSO, which is kind of convenient. Which is another perk that is in Matrix that's not in IRC.

AKI: Good call.

@@ -246,14 +249,16 @@ JHD: It works on the Matrix homeserver, I signed in with my GitHub account so I

RBU: I just want to mention that as we this transition good. (?) It makes sense to pare down the number of channels, we have an IRC and we talked about keeping people in one, but there's no reason to ask the resident TC39 delegates to the you to existing kirino, (?) The thing I would mention is, I would probably wait on that, because they might consider that.
If we send a request for that, now, they might get spooked and nuke all the channels. We probably. (?) -JHD: That's something I think we can safely do. We can put like if we word(?) all the channels to the main TC39 one and allow people to chat. +JHD: That's something I think we can safely do. We can put like if we word(?) all the channels to the main TC39 one and allow people to chat. -RBU: Oh, that's also good. Okay, we should just do it. +RBU: Oh, that's also good. Okay, we should just do it. USA: I was just going to say that we could make it opt-in for delegates to make sure that doesn't pose a problem. AKI: All right. So I think this sounds like consensus, it sounds like we have this. There's going to be some growing pains, some adjustments and change is hard, but it sounds like this is going to be a good direction forward for us, which I'm really pleased about. I agree with YSV that the transition has gone pretty smoothly. Thank you, everyone for being willing to give it a try. + ### Conclusion/Resolution + - We have moved from Freenode to Matrix - Docs to be updated. - Be careful with announcements on Freenode to prevent losing control of the channel. @@ -272,7 +277,7 @@ MLS: I agree that the cadence is too frequent. I'm constantly having to check my DE: As one of the people who initially proposed the more frequent meetings, I have to say that my experiences have been pretty similar in terms of, you know, feeling more stressed, that they're more frequent. Even though I'm, you know, coordinating among fewer people than someone like SYG is, at some level. So, I also, you know, in a model like the WebAssembly CG where they have most of their work happening in these weekly meetings, that are bi-weekly meetings of one hour. I think that that works pretty well, but I imagine that it would be difficult for us to switch to that because we've been having these gaps on the agenda. 
I mean, there's been pressure for a long time to have the plenary be fewer hours, and I pushed back on that because we always go into overtime on the agendas. Since we do have gaps these days, I do like the idea of reducing the number of hours per year of meetings until we see that there's a need to go the other way. I think we're able to do this because we've established good practices like the incubator calls and the champion group meetings that give us space to have good discussions, and coming to plenary with more polished proposals. I think that's why we're seeing the agendas being less packed with many items, keeping to quite short time boxes. So, it's a success for us. In the near term, maybe we should think about shaving days off of the meetings that we have coming up in the immediate future. It may be hard to shift around the schedule for the next meetings because people probably have things planned around them. But what if we try to make the longer meeting be three days or the shorter meeting be one day? Maybe not both. We might need to be a little bit dynamic about this depending on how many items come up. I think that this is the thing that's more likely to be easy to implement in the near term. Or we could make the days shorter; we could say it's two days and it's just two hours each day. At the same time, this doesn't reduce the fixed amount of work that you have before each meeting, reviewing with other stakeholders. Like, I'm reviewing with stakeholders in Igalia and within the tools call, with some companies that we work with, and yes, there is a fixed amount of work. So, I agree with others.
-SFC: I just wanted to say that I've been actually quite happy with the increased meeting cadence because it has made my life quite a bit less stressful, because no longer is there a situation where I really want to get this proposal finished by this meeting and, if it misses this meeting, it's going to be delayed by multiple months. Now it's only delayed by six weeks. So I find that it actually serves as a nicer cadence because I can bring the proposal to TC39 when it's ready, as opposed to really trying to press against these arbitrary deadlines or else delay the whole project by multiple months. And I know that this has happened, for example, with the Intl Enumeration API proposal that Frank was working on, as well as Intl.DisplayNames, where in the old model, with the meetings very infrequent, we may have pushed harder to be like, we should get this proposal adopted in this meeting because if we don't, we are going to just delay our work by a really long time. But instead with the more frequent meetings, it actually is quite nice because we can gather feedback from the committee, make the changes, and come back in only six weeks and advance. So I think, at least from my perspective, the increased frequency has been really, really helpful with the rapid iteration of proposals. And I've been really happy with that from my perspective.
+SFC: I just wanted to say that I've been actually quite happy with the increased meeting cadence because it has made my life quite a bit less stressful, because no longer is there a situation where I really want to get this proposal finished by this meeting and, if it misses this meeting, it's going to be delayed by multiple months. Now it's only delayed by six weeks. So I find that it actually serves as a nicer cadence because I can bring the proposal to TC39 when it's ready, as opposed to really trying to press against these arbitrary deadlines or else delay the whole project by multiple months.
And I know that this has happened, for example, with the Intl Enumeration API proposal that Frank was working on, as well as Intl.DisplayNames, where in the old model, with the meetings very infrequent, we may have pushed harder to be like, we should get this proposal adopted in this meeting because if we don't, we are going to just delay our work by a really long time. But instead with the more frequent meetings, it actually is quite nice because we can gather feedback from the committee, make the changes, and come back in only six weeks and advance. So I think, at least from my perspective, the increased frequency has been really, really helpful with the rapid iteration of proposals. And I've been really happy with that from my perspective.

SFC: And I have another agenda item after this, I'll just take that one now. So we've had exactly two meetings in a row where we've been undersubscribed. We've had meetings in the past where we've been undersubscribed as well. I remember the last meeting we had at Apple was also undersubscribed. We had an extra few hours at the end for an unconference. That just happens sometimes. Just based on the pace of work, you know. It's also been only since last November that we had a meeting that was oversubscribed; if you remember, it hasn't been that long since we had one that was oversubscribed. So I think that we don't have enough data, we don't have a big enough sample right now to really say that, oh, we don't need this extra meeting time. I think it's really premature. Case in point: this meeting could have been oversubscribed, except that because we have the more frequent meetings I decided to pull my proposal from this meeting and put it on the next one instead so that I could have it more polished to present. I definitely don't want to say that, oh, we don't need this meeting time, because we just don't have enough samples for that.
Now, maybe at the end of the year, after we've tried this current cadence for a whole year, maybe we could make that claim, but I think it's way too early to make that claim after just two samples. A sample size of two is too small.

@@ -284,29 +289,29 @@ YSV: I'm going to speak a little to what MM just said, and a little to what SFC

DE: I really like YSV's idea. If we want to do the same number of hours as we have right now, I guess it would be like our current four times a year meetings, plus weekly smaller meetings. But if we're okay with reducing the number of hours per year, it seems like we have latitude to do that. Every two weeks would probably work and I agree that this could be a really great way to bring more focus into the topics, because with a long meeting, it can be hard to retain focus. I do want to directly disagree with MM, that blocking is the most important thing. I honestly don't think that blocking is the most effective way to control for quality. In the time that I've been on committee, we made mistakes that I objected to at the time and then we refer to them later, like, with the RegExp.matchAll g semantics. We talk things through and we have different points of view and we make compromises, and I just don't think the option of blocking on the table is this decisive thing that causes us to produce output that is of substantially better quality. I think the important thing is that we, you know, examine this space very well and listen to each other, and sometimes we end up doing that better than others. Anyway, I think if we had these more frequent meetings but we published the agenda well in advance, then we would have ample time to let people show up or object to the timing if they have a conflict when the topics that they're interested in will be under discussion.
We've been able to work around people's vacations in the past and move topics around to when they'll be available and I think we would be able to do that if we pre-plan agendas for more frequent shorter meetings. -WH: I'm strongly opposed to having bi-weekly meetings. This will just become a scheduling nightmare. When I schedule a trip someplace on vacation, I usually schedule it many months or a year in advance. I have no idea what will be discussed on that day. Other people will do likewise, so we'll just have a nightmare of constraints. I'm in the model where I want to attend all the meetings, and the more scattered they are, the worse it gets. So I want them to be combined into significant chunks of time on predictable days which I know a year in advance so I can schedule vacations and other events around them. Other people have echoed this desire for larger chunks because it takes a while to prepare for one of these things. I'm not going to repeat those points. Scheduling when you're going to be available for meetings is very important. And I want to have frequency be as low as we can get away with. Can we do that? +WH: I'm strongly opposed to having bi-weekly meetings. This will just become a scheduling nightmare. When I schedule a trip someplace on vacation, I usually schedule it many months or a year in advance. I have no idea what will be discussed on that day. Other people will do likewise, so we'll just have a nightmare of constraints. I'm in the model where I want to attend all the meetings, and the more scattered they are, the worse it gets. So I want them to be combined into significant chunks of time on predictable days which I know a year in advance so I can schedule vacations and other events around them. Other people have echoed this desire for larger chunks because it takes a while to prepare for one of these things. I'm not going to repeat those points. Scheduling when you're going to be available for meetings is very important. 
And I want to have frequency be as low as we can get away with. Can we do that?

-YSV: The vacation argument is very compelling for me. I totally agree. One question I have for you, WH: what if we had chunks, where we had bi-weekly meetings and then long periods without bi-weekly meetings? For example, one month where we have two meetings, and then, when the plenary is coming around, two months on either side that are free, something like that. I don't know whether the timing would work out with something like that, or whether it would be more preferable.
+YSV: The vacation argument is very compelling for me. I totally agree. One question I have for you, WH: what if we had chunks, where we had bi-weekly meetings and then long periods without bi-weekly meetings? For example, one month where we have two meetings, and then, when the plenary is coming around, two months on either side that are free, something like that. I don't know whether the timing would work out with something like that, or whether it would be more preferable.

WH: We already have incubator meetings. Let's use those. For the plenary, last year they were every eight weeks and the big change for this year is that they're every six weeks. Every two weeks is far beyond what I would find acceptable.

YSV: Okay, and so the clumped structure wouldn't work either, okay?

-WH: I want them all together.
+WH: I want them all together.

-PFC: I'd really like to push back on the idea of the main function of the plenary being to block mistakes. I think as a delegate in TC39, it's part of our job to prevent mistakes from happening, but we do that by keeping current on all the proposals that are in the air, and working with the authors of those proposals to resolve things that we think are mistakes. It shouldn't be the case that if you see a mistake in a proposal that you want to prevent, that we go to the plenary and then dramatically reveal that mistake and send the author back to the drawing board for another two months.
I really disagree with that model of collaboration. I don't think it's accurate to say that the main function of the plenary is to block. The main function of the plenary is to move things forward together.
+PFC: I'd really like to push back on the idea of the main function of the plenary being to block mistakes. I think as a delegate in TC39, it's part of our job to prevent mistakes from happening, but we do that by keeping current on all the proposals that are in the air, and working with the authors of those proposals to resolve things that we think are mistakes. It shouldn't be the case that if you see a mistake in a proposal that you want to prevent, that we go to the plenary and then dramatically reveal that mistake and send the author back to the drawing board for another two months. I really disagree with that model of collaboration. I don't think it's accurate to say that the main function of the plenary is to block. The main function of the plenary is to move things forward together.

-RPR: Thank you. Before we move on to the next item we can go 20 minutes, or perhaps go another 10 minutes, before we need to move on to incubation chartering, and the queue is long. From experience, we found that we can just keep on talking about this topic. It's the topic that doesn't stop giving. So we will call a stop to it in 10 minutes.
+RPR: Thank you. Before we move on to the next item we can go 20 minutes, or perhaps go another 10 minutes, before we need to move on to incubation chartering, and the queue is long. From experience, we found that we can just keep on talking about this topic. It's the topic that doesn't stop giving. So we will call a stop to it in 10 minutes.

USA: I agreed with the point SFC was raising and wanted to say that it goes counter to the whole idea that we need to be super duper careful about all the proposals and long longer the cadence insurer(?)
or maybe not introspect lets introspect(?), it lets people have sloppier design and proposals at times for advancement, and that can even exaggerate the situation.

-MLS: I think that we can get counterproductive here with too many meetings and also discussion structure. I think it makes things kind of counterproductive as I and others, it sounds like, prepare for meetings. We look at what's on the agenda.
We have the 10-day notice, which effectively gives us 10 days to look at the items presented and understand what stakeholders inside our organization think about those, prepare (usually in conversations offline beforehand), and then show up at a plenary meeting ready to discuss them. I think also about the structure of the meeting itself: I do think it's a good thing that we time box, where the presenter needs to give an estimate of how long they think things need to take, to help our scheduling. But also the schedule of meetings themselves: before COVID, I was eager to find out in, say, September of a particular year what the meeting schedule was for the following year. I, like others, would make personal scheduling decisions based on meetings, because I think it's important for me and others from my organization and every organization to participate. Now that we're going to eight meetings a year, that actually reduces attendance in some cases, and I don't think people are asking for that, from what I see in chat. I agree that after the meeting you are a little fried and you need some time to regroup. The last thing I'll add is that we all have other jobs besides attending these meetings, and at a certain point, it could be too frequent to just meet and not get to the other things in our organization, and we will feel that pain.

TCN: I'm not sure whether this is a reply or not to what was proposed earlier. But I've been doing open source community meetings for years, running them, restructuring them, doing that, and also internal meetings at Microsoft and stuff. One of the patterns I've seen people try is non-required, more frequent meetings, and very often that— an example is my organization's all-hands at Microsoft went from once a month doing a big meeting, to every week.
We do an all-hands where you can show up if you want, and that's happened in the other open source projects I participated in as well, and the same thing always happens, where, if it's optional, fewer and fewer people start showing up. In the case of TC39, if the goal there is working on proposals, you'll get less engagement on those proposals overall. It ends up leading to more challenges for those who are starting, or more challenges for those who are less familiar with the process and don't have it baked into their brain already. The concern I would have is less direct interaction on proposals if we do go down that path.

-KG: I said this in the chat so I guess I will just summarize briefly. I disagree with MM that blocking is the most important thing. But I do think that being able to review every proposal before it can advance is very important to me and I think to a lot of people. That's not exactly for blocking. I do need to do a careful review of things so I can bring up my concerns and I don't have the bandwidth to be doing that literally all of the time.
So the way that happens is that before meetings I look at what's on the agenda and I review very carefully. I can do that at a frequency of once every couple of months, and I cannot do that at the frequency of every two weeks. Again, I'm not discussing blocking, just having the opportunity to review things carefully. I would like not to have to do that more frequently than every six weeks.

-SYG: I have a question to that point. I thought the bi-weekly thing was together with the proposal that the bi-weekly meetings are single proposal slots, so there will be one thing to review at most. Does that change your opinion?
+SYG: I have a question to that point. I thought the bi-weekly thing was together with the proposal that the bi-weekly meetings are single proposal slots, so there will be one thing to review at most. Does that change your opinion?

KG: That's still a lot. Having to look every week at something to review carefully is just a lot to spread out with the rest of my work.

@@ -314,27 +319,29 @@ SYG: Thanks for clarifying.

TAB: Michael and up reflecting some of my comments from chatting here. (?) My issue with the meetings this year has been that after a big several-day meeting like this, I want to not think about JavaScript for a week or so, and think about other things. And then I have to prepare for at least a week in advance to do anything because of the 10-day proposal deadline, which ends up being like a week and a half each side of the thing. And when you're on a five-week cadence that leaves you about four weeks in between, and with those time periods, eating from both sides, you have maybe a week or so in the middle when you suddenly realize, oh shit, I should do something for this next meeting. That feels incredibly rushed. I've really not appreciated the way that's worked out here.
It's always just felt like “oh shit, there's a big meeting about to happen,” and I don't have time to prep for it; and then because I didn't have time to prep, the next meeting comes up and the same thing happens again. That's just how my brain works; it does not work great for me, and I would much prefer going back to six or even fewer. I've always felt that for TC39 even six meetings a year was a little strange. People have often pointed to the CSS working group's cadence: we do three, sometimes four, large meetings a year, and then a weekly one-hour meeting where we just have small topics to talk about. Attendance isn't mandatory, but there's an agenda posted ahead of time, so if you can't attend a certain meeting, it's fine; your topics just bump to the next week. That has worked really well for us for years. That doesn't necessarily mean it will translate over to a new group, but I'm just wary of high-cadence, very short meetings. They feel different from high-cadence, long meetings; even high-cadence single-day meetings are significantly different from high-cadence one-hour meetings, mentally, socially, in every aspect. It's a different beast entirely. So go in with an open mind: if these meetings are too frequent, that doesn't necessarily mean we need all meetings to be less frequent. Different types of meetings can work well on a different cadence than what we're doing here.
What are you trying to scale? That's more participation, more proposals getting in for consideration into the committee. (?) I don't really see this increasing. I think that the rate at which proposals get in is, if anything, already high, and we should reduce it. Sorry, I didn't mean the rate of stage advancement to stage 4; I meant the rate at which new proposals are proposed for consideration.
I know we've got four people on the queue we haven't got to, so I'll capture this, and I know that there's discussion going on in the TC39 delegates channel, so please feel free to continue it there. It's kind of an unstoppable topic.
Can we invite non-delegates to the incubator call or should it be restricted to delegates and invited experts? SYG: So far one of the points of the incubator calls was that people who would be the room in the plenary would also be the ones in the incubator call such that there was time for them to surface any concerns to the champion group ahead of time to work through, subject to the same IP stuff as normal delegates and invited experts. if non-delegates and invited experts want to come participate, I would prioritize surfacing the concerns of delegates in the discussion. That's the thing we're trying to do, to streamline here. If there's time, I don't see any reason, outside of IP reasons that I don't quite grok that we would disallow non-delegates, and experts. @@ -362,30 +369,30 @@ DE: And I want to suggest that we have a call to discuss the interaction between SYG: Pattern matching would dovetail nicely with pipeline so that sounds good to me. So the charter for before the next meeting is pipeline and then pattern matching. Since we didn't discuss pattern matching at this meeting, I don't really know who the stakeholders are. I will add the champion group that I find on the proposal, but please add yourself to the next charter when it goes up. -RPR: All right, thanks. +RPR: All right, thanks. ### Conclusion/Resolution + - Pipeline remains chartered - Array copying methods to be discussed in the record and tuple monthly call - Adding pattern matching to the charter for before the next meeting - ## Realms + Presenter: Leo Balter (LEO) - [proposal](https://github.com/tc39/proposal-realms) - [slides](https://docs.google.com/presentation/d/1c-7nsjAUkdWYie5n1NlEr7_FxMXHyXjRFzsReLTm8S8/edit) +RPR: so let's begin with Realms callable boundary. -RPR: so let's begin with Realms callable boundary. - -LEO: Alright, thank you. And Thanks everyone for Being here for this presentation. Yeah, here to talk again about Realms. Now the API callable boundary, okay? 
So the primary goals are still the same. We still want, for Realms, a new global object, module graph separation, synchronous communication between both realms, and a proper mechanism to control the execution of a program. The interface is the same as at the last meeting. One of the interesting things about this callable boundary in Realms is that we can still transfer primitive values, and they are not limited to strings and numbers. This was discussed at the last meeting, so this is a quick recap: we allow transferring primitive values, including symbols, not just strings, but we cannot transfer objects from one realm to another. When we transfer callable values, we instead get a wrapped function, a cross-realm wrapped function created from the original object. As discussed at the last meeting, we are still subject to some CSP directives such as unsafe-eval, and importValue is also subject to other CSP directives such as default-src.
But when we transfer callable values, we do get a wrapped function: a new cross-realm wrapped function created from the original object. As discussed at the last meeting, we are still subject to some CSP directives such as unsafe-eval, and importValue is also subject to other CSP directives such as default-src.
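[Editor's note] The transfer rule described here — primitives (including symbols) pass through, callables are re-wrapped with a fresh identity, and all other objects are rejected — can be sketched as a userland model in plain JavaScript. This is illustrative only, not the proposal's actual API or spec text; `passBoundary` is a hypothetical name standing in for the proposal's internal wrapping operation:

```javascript
// Illustrative sketch only -- NOT the proposed Realm API. It models the
// boundary rule described above: primitives cross as-is, callable values
// are re-wrapped, and any other object throws a TypeError.
function passBoundary(value) {
  const type = typeof value;
  if (value === null || (type !== "object" && type !== "function")) {
    return value; // primitives (string, number, symbol, etc.) cross as-is
  }
  if (type === "function") {
    // Models a wrapped function exotic: new identity, and its arguments
    // and return value must themselves cross the boundary.
    const wrapped = (...args) =>
      passBoundary(value(...args.map(passBoundary)));
    return wrapped;
  }
  throw new TypeError("objects cannot cross the callable boundary");
}
```

Note that the wrapper here is an arrow function, so, like the proposal's wrapped function exotics, it has no [[Construct]] and a different identity from the original.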
The API does not provide any cross-realm object access, but it enables a virtualization mechanism that we can work with to address many of our use cases. The API provides enough tools to implement membranes on top, like the membrane frameworks used for virtualization, and we have wrapped function exotics enabling cross-realm callbacks in the other direction. This is actually important for creating this communication channel.
Then let's add things to the queue. The first topic will be the HTML integration issue on the module map/graph separation; then we're going to talk about the web globals that would be added during the host/HTML integration; and then I'm going to report some of the feedback requesting the previous Realm API to be discussed. On the HTML integration, today we have a concern that was raised by the Chrome team asking us not to introduce a disjoint module map. Copying and pasting their words: in the Web's implementations, module maps are tied to window or worker global scopes and their associated memory caches, HTTP caches, etc. This is a discussion that we've been having on GitHub.
Meaning they cannot perform I/O or create side effects that mutate state in the parent realm. This has been one of the points that has drawn some discussion.
If you want to do a fetch across the realm, you will have to implement something that transfers those options into some form the other realm can use to make the fetch over there. You have two options: you do it yourself manually, which is a little bit tricky, where you create a function and pass the values as arguments to that function instead of an options bag, and then you construct the options on the other side; or you use, as LEO mentioned, a membrane that does that for you, but that's a little bit bigger: you have to bring in a library and do initialization. So we understand that this is the case; these are the cons of going with a callable boundary.
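[Editor's note] The manual workaround described here — passing the fields of an options bag as individual primitive arguments and rebuilding the bag on the far side — might look like the following sketch. All names (`makePrimitiveFetch`, `fakeFetch`) are hypothetical stand-ins; no real Realm API or real fetch is involved:

```javascript
// Hypothetical sketch: since an options bag (an object) cannot cross the
// callable boundary, expose a function that takes only primitives and
// rebuilds the options bag on the other side.
function makePrimitiveFetch(doFetch) {
  // Takes (url, method, bodyText) as primitives instead of (url, options).
  return (url, method, bodyText) =>
    doFetch(url, { method, body: bodyText });
}

// Stand-in for the "other side" implementation, recording what it receives.
const calls = [];
const fakeFetch = (url, options) => {
  calls.push({ url, ...options });
  // A real fetch would return a Promise of a Response, which also could
  // not cross the boundary; we return a string to keep the sketch simple.
  return "ok";
};

const primitiveFetch = makePrimitiveFetch(fakeFetch);
const result = primitiveFetch("/data", "POST", "hello");
```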
Even if we ignore the layering issues with structured clone itself, I think what we can do is start with an initial minimal Realm callable boundary proposal, which is quite expressive. Later we can add features, either by generalizing the cases that currently throw a TypeError to not throw, or by making it an opt-in mode as part of an options bag that you pass to the constructor. In terms of starting with a minimal piece of functionality that can later be extended, this API does provide the kind of core base that most other things can be implemented on top of. Maybe not everything, but in terms of ergonomics you could achieve most of it through a library. So I think this makes sense as a starting point; it's best to start with something simple, given that the space of copying things is quite large.
So in a similar vein, having a very minimal Realm API is definitely within our wheelhouse, and this Realm API provides that expressively. I just wanted to say: maximally minimal, min-max, max-min, something like that. That's my comment.
It is the case, from the MDN surveys, that what you said, JHD, is backed up: developers certainly do not understand, or want to understand, which groups standardized which APIs and where they come from, and we think it's not desirable for them to have to understand that. So to that end, I have the opposite position, which is that I think it'd be more harmful than not for us to specify something that is portable by virtue of TC39 having specified it. I don't think that is the right thing to do for developers. For this question of having an interoperable set of globals, I don't know what to say; the environments are different, and the individual hosts have taken note, especially to try to get more web APIs where the users have been asking for them. So I have some optimism there, I guess.
Those libraries can set up those functions.
That's the kind of thing you have to do. If you don't have fetch inside the realm, you have to bring in a membrane, put it on top of the realm, and then you can expose whatever globals you want inside the realm from the other realm. But you have to bring the library. I think that's an okay solution for now as we progress on this, and I think the process will be more organic. As the spec stands today we're not dictating hosts' work; just like for the web and Node today, we don't specify all of their globals, we only say what the language requires, and that's it. As hosts evolve we can engage with different groups, try to add things that are useful for Realms in different environments, and do it in an organic way. And you always have the membrane solution, or a virtualization solution, that you can build on top of this very minimal API.
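[Editor's note] As a rough illustration of the approach described here, a wrapper library can install a global inside the inner realm that forwards across the boundary using only primitives. This is a sketch with stand-in names (`exposeFetch`, `realmGlobal`, `outerFetch`), not the actual membrane machinery or Realm API:

```javascript
// Hypothetical sketch: a library defines a forwarding function on the
// inner realm's global object. Only primitives flow through it, so it
// respects the callable boundary described above.
function exposeFetch(realmGlobal, outerFetch) {
  realmGlobal.fetch = (url, method = "GET") => outerFetch(url, method);
}

// Stand-in for the outer realm's real fetch, recording what it was asked.
const log = [];
const outerFetch = (url, method) => {
  log.push(`${method} ${url}`);
  return "response-text"; // a real fetch would return a Promise instead
};

const realmGlobal = {}; // stand-in for the inner realm's global object
exposeFetch(realmGlobal, outerFetch);
const res = realmGlobal.fetch("/api");
```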
Yes, that's something that I was going to add to our goals.
But the point of the hard boundary is what I needed there, right? Fetch wouldn't work anyway. Basically, I'm fine with the no-I/O requirement; I am not fine with a prohibition saying that the only things that can ever be in a user-created realm are the things in 262. So I'm okay with it.
That wouldn't make very much sense, right? -JHD: right, because they depend on 262 to function, but none of we don't - 262 doesn't depend on any of the HTML APIs to function for the majority of it. +JHD: Right, because they depend on 262 to function, but 262 doesn't depend on any of the HTML APIs to function, for the majority of it. MM: What I'm suggesting explicitly is that we can depend on them. In particular, I don't know the WebIDL `exposed` mechanism well, so I'm not going to take a position on that; I'm going to take this position only on the ones that I know to be safe, like URL and TextEncoder. Those are both safe and host-independent, things that make sense across all hosts. So we can decide to depend on them by citing them, and say that these are standard global variables whose behavior is standardized by another spec. JHD: That sounds great to me in general: that's decreasing the deviation between environments regardless of where the actual source of truth for the specification lives. But unless we're planning to do that for everything, the same use cases that I have of wanting a subset still exist. -CP: Oh, let me this is not a stage 3 blocker, Right. +CP: But this is not a stage 3 blocker, right? LEO: Yeah. That's one of the things I've been trying to think through; we have more to discuss here. I think most of these can be stage 3 discussions, but what we're proposing here is just to draw the line for what we want in terms of things being added. The idea of having every group of properties be configurable is to allow someone to configure that realm and shape it back to what they want. We still do host integration to add more things, but we still allow the user to clean up what the host has just added.
This is what we do today with iframes. People might not love just removing things to shape the global back, but this is what is done in virtualization today, and the experience would be so much better than what we have now. I think we can draw this line of "let's not add non-configurable properties at all" and then discuss in follow-ups the extensions for Realms that are being cut off for now, most of the host-added globals, etc. I really hope we can keep this line, knowing the limits of configuration there, where configuration is necessary. -RPR: We've got an hour left and we're not really progressing down the queue because people keep coming in. LEO, it's yours to choose what you want to go to next, but Dan is next in the queue. So why don't you let him ask his question. -LEO: I wonder if JHD could accept, even though it's definitely not what JHD wants, shaping the created Realms into the state that JHD wants. -JHD: So I have a longer thing to say, which I won't say now because we need to move through the queue, but I don't agree that that means it's not a stage 3 blocker. So I'd love to talk more about that later. CP: I want to point out to JHD that this is something you could do in userland; it's just a script that removes anything that you don't recognize.
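CP's "script that removes anything you don't recognize" can be sketched roughly as follows. This is a hypothetical illustration only: a plain object stands in for a new realm's global object, and `ALLOWED` and `pruneGlobals` are invented names, not part of any proposal.

```javascript
// Hypothetical userland sketch: prune a realm's global down to an allowlist.
const ALLOWED = new Set(["Object", "Array", "JSON", "Math"]);

function pruneGlobals(realmGlobal) {
  for (const name of Reflect.ownKeys(realmGlobal)) {
    // Deleting only works if the host-added property is configurable,
    // which is exactly the guarantee the champions are asking for.
    if (typeof name === "string" && !ALLOWED.has(name)) {
      delete realmGlobal[name];
    }
  }
}

// A host might have added e.g. `atob`; after pruning it is gone.
const fakeRealmGlobal = { Object, Array, JSON, Math, atob: (s) => s };
pruneGlobals(fakeRealmGlobal);
console.log("atob" in fakeRealmGlobal); // false
console.log("JSON" in fakeRealmGlobal); // true
```

If a host added a non-configurable global, the `delete` would silently fail, which is why the champions want the "everything host-added is configurable" rule.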
littledan: I could write a library at some point exposing the mechanics, but the idea would be that the web specifications state which things are exposed, and then WebIDL does the plumbing to expose those onto the Realms. Overall, I think this matches up really well with what Leo's proposing, and I'm in favor of the contents of this slide and of the current proposal, which matches that. -LEO: That was mostly it also like reflect some, some feedback from from my team back at Salesforce as they saying here minimal and John David Dalton or check here. What I want is actually, like, is still provide around, there is configurable But yeah. +LEO: That was mostly it. It also reflects some feedback from my team back at Salesforce, and from John-David Dalton: what I want is to still provide a realm where everything is configurable. But yeah. -WH: Clarifying question for Mark. Would the ECMAScript spec also require the same globals from other standards that you want in Realms to be present for the global script in all ECMAScript implementations? +WH: Clarifying question for Mark. Would the ECMAScript spec also require the same globals from other standards that you want in Realms to be present in the global scope in all ECMAScript implementations? -MM: For the globals that I'm talking about, yes. I think that ecmascript should standardize the URL Constructor, a text encoding Decoder, possibly a few others, all of which satisfy the harmlessness criteria. That's on the current slide. all of which have a host independent semantics. And many of which are already implemented across multiple non browser hosts. So, yes, I think these should become standard Ecmascript period, independent of realms and then having become standard globals. They realms Creating the Chrome API, creating new Realms with the JavaScript standard globals would would thereby include them because they're big of a script.
+MM: For the globals that I'm talking about, yes. I think that ECMAScript should standardize the URL constructor, TextEncoder and TextDecoder, and possibly a few others, all of which satisfy the harmlessness criteria on the current slide, all of which have host-independent semantics, and many of which are already implemented across multiple non-browser hosts. So yes, I think these should become standard ECMAScript, period, independent of Realms. Then, having become standard globals, the Realm API creating new Realms with the JavaScript standard globals would thereby include them, because they are ECMAScript. -RW: Mark, when you say "standardize", can I ask you to clarify what you mean? We'd standardize the name of the thing, with a pointer to the definition, right? MM: A pointer to the definition as of a particular version; you don't give the committee free license to make revisions that get implicitly incorporated. So yes, it would be by citing a particular version of a particular external spec, and standardizing the name with that pointer. RW: Okay. I just wanted to make sure that that was still the point, great. @@ -540,39 +547,39 @@ LEO: Yeah, and yeah, this is why we're trying to make this a stage three concern SYG: Okay, to talk about modules and the module map: some people asked in the chat that they didn't quite get the rationale for why the module map thing is problematic for some web platform folks, so I can try to explain that here, hopefully in a clear way. We need to take a step back to the idea of the callable boundary. Remember the pushback calling for a hard boundary.
The push for a hard boundary started because we saw a lot of evidence, both in the wild on social media and GitHub and even internally from some teams at Google, of people interested in using Realms as a sandboxed, isolated execution context. They thought it would give them isolation in a very lightweight way: why not, sounds great! The problem, of course, is that if you can in fact intermingle the object graphs, then it is very difficult to secure the boundary, and you would need something like a full membrane implementation to make sure that your object graphs don't touch. And as we know, it's easy to accidentally pass an object that you thought conferred no authority but in fact confers some authority, because you can chase its prototype chain to get to some function constructor or other. That's the inherent difficulty. So we thought it was a footgun, given the enthusiasm people were displaying for wanting to use the old version of Realms for something that it didn't provide a language-level guarantee for. That was the starting point. The callable boundary idea entirely solves that concern: because of how the callable boundary works, the callable boundary version does not have the same footgun at all. It's great. How this leads to the module map concern is that the thinking behind asking for the callable boundary in the first place was this footgun: the user realm must not be able to touch and mutate the state of the outer realm; it has to be a hard boundary. In some folks' mental models, the module map is mutable state, and if you import modules in the user realm, that directly mutates the module map in the outer realm, if we go with the "let's not duplicate module maps" approach. Duplicating module maps would avoid that mutation of the outer realm.
But duplicating the module map has its own host of complexity issues; at least, there was pushback from the HTML editors that it has a host of integration complexity issues: where does the cache live, who does the fetch, that kind of thing. So the thinking with the module map is that it would be nice if the mental model that the inner realm cannot programmatically touch the outer realm's mutable state extended even to the module map. That's why there's this split, not quite worked out yet, where all the mutation still happens by virtue of code being run in the outer realm, and the only thing that happens in the inner realm is that it instantiates something that has already been fetched. That would side-step the concern. That said, I don't know how much of a blocking concern this is; it would be nice if the mental model extended all the way down to the module map as well. I think it is an exception that the Realms concept has to explain: no I/O is allowed, except modules; no mutation is allowed, except modules. Because you can observe that the inner realm fetched something and caused a mutation to happen in the module map. So that is the rationale for why the module map mutation was considered problematic. Does that hopefully clear up some of the confusion? -CP: I wanted to mention that these obviously is has no effect on the spec, we will not change the spec in any significant way because it's not part of the 262 where we are specified at the spec. That's on the HTML integration, I guess. And and we have multiple solutions that we explore our. So I don't think this is stage 3. Blocker, I don't know if you have any different opinion on that, but I believe this this should not be a blocker.
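The "single fetch, multiple instantiations" shape that this discussion converges on can be sketched in plain JavaScript. This is purely illustrative, not the actual HTML integration: `fetchCache`, `makeRealmModuleMap`, and the object shapes are all invented names for this sketch.

```javascript
// Illustrative sketch: a shared fetch cache plus per-realm module maps,
// so an inner realm's import never mutates the outer realm's map and
// nothing is ever fetched twice.
const fetchCache = new Map(); // specifier -> source text (fetched once)

function makeRealmModuleMap() {
  const instances = new Map(); // per-realm: specifier -> module instance
  return {
    import(specifier, fetchFn) {
      if (!instances.has(specifier)) {
        if (!fetchCache.has(specifier)) {
          // Only the first realm to ask triggers the (shared) fetch.
          fetchCache.set(specifier, fetchFn(specifier));
        }
        // Every realm instantiates its own copy from the shared source.
        instances.set(specifier, { source: fetchCache.get(specifier) });
      }
      return instances.get(specifier);
    },
  };
}

let fetches = 0;
const fetchFn = (s) => { fetches += 1; return `// source of ${s}`; };
const outer = makeRealmModuleMap();
const inner = makeRealmModuleMap();
const a = outer.import("mod.js", fetchFn);
const b = inner.import("mod.js", fetchFn);
console.log(fetches); // 1: a single fetch
console.log(a === b); // false: separate instantiations
```

The point of the split is visible in the two maps: the observable "mutation" of the shared `fetchCache` is the timing-level coupling discussed below, while each realm's `instances` map stays private to it.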
+CP: I wanted to mention that this obviously has no effect on the spec; we will not change the spec in any significant way, because it's not part of 262, where the proposal is specified. That's in the HTML integration, I guess, and we have multiple solutions to explore there. So I don't think this is a stage 3 blocker. I don't know if you have a different opinion on that, but I believe this should not be a blocker. -DE: So I think I understand the point of view that inside of a module, you shouldn't shouldn't have any Ability to cause this I/O to be done. I think within the Champion group, this is understood to be a pretty acceptable exception. If you look at some really basic use cases, if you're going to run any code inside a realm at all, while respecting the CSP no-eval rule. You just need to load a module and those the code that you run will often be different from the code that you want to run outside the grill. So I think it just makes sense to let Realms load modules. I don't think there's any really substantial interference from one to the other. Sure, it affects This Global Network cache. There was a large amount of discussion about an additional caching layer at the module. Map level. That doesn't seem harmful, but it doesn't seem very very helpful either, because real interference is at the Network cache level. So it, so the current proposal does allow relevance to Inner Realms to cause something to be loaded into the network cache which could be observed by the by the outer realm, but this can be mitigated by if you want to run code in a realm where you want to restrict it, you just don't run code in the realm, that Imports the module because the outer realm that decides which code to run in the inner realm. I just don't think the appearance of it. A module is significant mutation because it its identity and it's not it's not sending any particular. Different given piece of data.
And when, if something's cash, then you're going to continue to get that copy of it. so, I don't agree that this is a significant. This is significant problem. I think it would be it would just be quite unfortunate if we made this severe restriction, that that realm is cannot load modules, which were previously unloaded and the other things in the middle which are kind of splitting the difference don't seem to accomplish anything. +DE: I think I understand the point of view that, inside a realm, you shouldn't have any ability to cause this I/O to be done. Within the champion group, this is understood to be a pretty acceptable exception. Look at some really basic use cases: if you're going to run any code inside a realm at all while respecting the CSP no-eval rule, you just need to load a module, and the code that you run inside will often be different from the code that you run outside the realm. So I think it just makes sense to let Realms load modules. I don't think there's any really substantial interference from one to the other. Sure, it affects the global network cache. There was a large amount of discussion about an additional caching layer at the module map level; that doesn't seem harmful, but it doesn't seem very helpful either, because the real interference is at the network cache level. So the current proposal does allow inner realms to cause something to be loaded into the network cache, which could be observed by the outer realm, but this can be mitigated: if you want to restrict a realm, you just don't run code in that realm that imports the module, because the outer realm decides which code to run in the inner realm. I just don't think the appearance of a module is significant mutation, because it's just identity; it's not transmitting any particular piece of data.
And if something's cached, then you're going to continue to get that copy of it. So I don't agree that this is a significant problem. I think it would just be quite unfortunate if we made the severe restriction that realms cannot load modules which were not previously loaded, and the other options in the middle, which kind of split the difference, don't seem to accomplish anything. RPR: Is everyone okay if we extend this by another 15 minutes? Yes. Okay. All right, we'll do that. So Rick, did Dan cover your item? -RW: Yeah, I think Dan did cover most of my item. Mostly what I wanted to say is that, you know, the issues with the shared module map and mutating the outer, let's say the parent realm's, graph aren't observable.
You know, I think Dan nailed it down when he said that's not our problem; the HTML spec can figure that out. Furthermore, I'll go as far as to say that implementers can figure out where the modules get loaded in either case, and an implementation could very easily just create a module; it wouldn't make a difference, because that's not observable. And if the argument is that there's timing observability, then I would say that's a bad implementation, if it allowed timing observability between a cached module and one that has to make a network request; you can mask that with whatever you need to do, you know, by introducing some non-determinism. So again, I don't think that this is a stage three blocker. I don't think it's really a relevant blocker for this proposal at stage 3 or any stage; it just doesn't really apply to the specification. -CP: the timing issue is even more difficult to or economically impossible, because you're using async Anyways, they're using game board, board, which is async, right? +CP: The timing issue is even harder to exploit, or practically impossible, because you're using dynamic import, which is async anyway, right? RW: And even for static imports that happen inside the realm: if one imports something that was already imported by the parent realm, it's a little bit quicker because it's cached locally, but implementers can split the difference and make it indeterminate whether it hit a cache or had to make a network request. There's no way for the user to know whether it did or did not previously exist in the parent or in a sibling, right? -RW: I'm going jump to the next thing because my I did have one of those after and just respond to Shu and say, you know, aside from using different words, it is not different.
"positional flakiness" is just another way of saying order of statements matter. So if you to cancel out that that argument from us, that's fine if if you if that's the view it still makes a difference when somebody can change the order of static important to file without it making a difference and it might be a little bit strange, if changing the order like this in that example, that was on a slide would make difference that's all trying to say but if you don't think that that holds up on its own, then I would gladly say we retract that argument. +RW: I'm going jump to the next thing because my I did have one of those after and just respond to Shu and say, you know, aside from using different words, it is not different. "positional flakiness" is just another way of saying order of statements matter. So if you to cancel out that that argument from us, that's fine if if you if that's the view it still makes a difference when somebody can change the order of static important to file without it making a difference and it might be a little bit strange, if changing the order like this in that example, that was on a slide would make difference that's all trying to say but if you don't think that that holds up on its own, then I would gladly say we retract that argument. -MM: I need to respond to Rick statement about timing because I completely disagree. Once you allow code to measure duration, there's all and there's lots of sneaky ways code can measure duration indirectly. So it's not necessarily that you gave it a timer, but once you want to code is able to measure duration there are tremendous numbers of side channels that they can then use and trying to delays to hide those side channels, is just a not going to be practical and being not something that implementations will actually do. 
Because we actually because the additional delays are too onerous and they're not the succeeded, avoiding the side Channel anyway, +MM: I need to respond to Rick's statement about timing, because I completely disagree. Once you allow code to measure duration, there are lots of sneaky ways code can measure duration indirectly. It's not necessarily that you gave it a timer; once code is able to measure duration, there are tremendous numbers of side channels it can then use, and trying to add delays to hide those side channels is just not going to be practical, and not something that implementations will actually do, because the additional delays are too onerous and they don't succeed in avoiding the side channel anyway. -RW: Wasn't that actually the solution or one of the solutions to like timing issues with settimeout all the implementations like basically, she like made it Non deterministic, what the minimum amount of time? A set timeout will actually take +RW: Wasn't that actually one of the solutions to the timing issues with setTimeout? The implementations basically made it non-deterministic what minimum amount of time a setTimeout will actually take. -MM: We're talking about is network round trip. Yeah. There's, +MM: What we're talking about here is a network round trip. -RW: I don't necessarily think we disagree, but what I'm saying is, is that How about this Mark? I see the argument you're trying to make.
So I'll I will actually just go ahead and retract my personal statement it and still I will still say that I think that timing is not an item that this spec needs to address. +RW: I don't necessarily think we disagree, but how about this, Mark: I see the argument you're trying to make, so I will just go ahead and retract my personal statement, and still say that I think timing is not an item that this spec needs to address. -MM: I agree, completely. Okay. I think I'm in favor of the shared mutable parent, table map. The key thing is it's not overtly observable, meaning that there is no deterministic observation, according to semantics in the spec, observation through timing is a completely different. Kind of Of matter. +MM: I agree completely. Okay. I think I'm in favor of the shared mutable parent module map. The key thing is that it's not overtly observable, meaning there is no deterministic observation according to the semantics in the spec; observation through timing is a completely different kind of matter. -RW: I will concede that to you right now, Okay. And I also apologize for making odious claims. Yeah, +RW: I will concede that to you right now, okay. And I also apologize for making dubious claims. -DE:I want to agree with with Mark here. That this, this thing stands on its on own even though this timing is observable because of the basic identity of loading modules. +DE: I want to agree with Mark here that this stands on its own even though the timing is observable, because of the basic identity of loading modules. MM: The further point I would make is that the stateful coupling we're talking about between the child and the parent is consistent with the theory of coupling we already have through the callable boundary. We can retroactively explain it in that theory as if there were communication over a callable boundary between the child's importer and the parent's importer. The key thing is that no object references are exposed across the boundary. It's only a coupling of side effects, and a coupling of side effects, or passing of data, can happen over a call boundary.
So I think all of this together argues that this is just fine. CP: Does Shu have anything to say about Mark's explanation? -SYG: I think it's okay to move forward with the with the With the version as is to allow modules to be imported. We will of course do need to iterate on the mechanics of how that works, As I think, it's still like, what were you would still like to do? Is that there be a single fetch not multiple fetches? If you import the same thing, we get multiple instantiations as you have said, but not multiple fetches. We have voiced, the concern I have said, it's not a blocking concern, this particular point. What I want to push back on here. I something that you said that this is not a stage three concerned because it doesn't touch the 262 spec text, I would argue that it is a stage three concern to the extent that without this being figured out, I can't really go back and start implementing this. Even if we reach stage 3 here and so insofar as stage 3, being a signal for the implementations to start implementing. If it's not figured out, you're still not getting to the point where yes, let's start implementing it because a large part of this will be integration with the host. But that said, I'm okay with the proposal. +SYG: I think it's okay to move forward with the version as is, allowing modules to be imported. We will of course need to iterate on the mechanics of how that works. What we would still like is that there be a single fetch, not multiple fetches: if you import the same thing, we get multiple instantiations, as you have said, but not multiple fetches. We have voiced the concern, and I have said it's not a blocking concern, on this particular point. What I want to push back on here.
It's something that you said: that this is not a stage three concern because it doesn't touch the 262 spec text. I would argue that it is a stage three concern to the extent that, without this being figured out, I can't really go back and start implementing this even if we reach stage 3 here, insofar as stage 3 is a signal for implementations to start implementing. If it's not figured out, you're still not getting to the point of "yes, let's start implementing it", because a large part of this will be integration with the host. But that said, I'm okay with the proposal. DE: If I could just jump in briefly: I wrote the current HTML integration PR, and I would be happy to upgrade it to these other semantics, to avoid the redundant fetch, if this is the final decision. We were getting a lot of conflicting signals about what the preferred semantics were, so that's part of why I didn't write it up so far, but I can do it within about a week. I would even be happy to make stage 3 conditional on that being written up. Does this resolve that particular issue, Shu? @@ -590,7 +597,7 @@ LEO: This would be exploration for the work, too much complexity to break what t LEO: I would like to hear what are the remaining Stage 3 blockers. -JHD: I have Stage 3 concerns that we ran out of time to discuss. The tldr is that there are 2 points. (1), when we're talking about adding something later, that's when we're subtracting something later, like making something more ergonomic later. It doesn't make sense to ship out now with every global and subtract later. (2), It's fine in our process to split things up, but that usually only works better when the second thing seems viable to be added later. But if it has insurmountable objections, it doesn't make sense to say that we ship something now and the other thing later. +JHD: I have Stage 3 concerns that we ran out of time to discuss. The tldr is that there are 2 points.
(1), when we're talking about adding something later, that's when we're subtracting something later, like making something more ergonomic later. It doesn't make sense to ship out now with every global and subtract later. (2), It's fine in our process to split things up, but that usually only works better when the second thing seems viable to be added later. But if it has insurmountable objections, it doesn't make sense to say that we ship something now and the other thing later. LEO: We still have strong use cases for having full access to Realms and Globals. We just believe that there is a hard objection from one side or another. But in this case, your objection is opposed to another strong objection. There might not be a best solution for all, but the currently proposed solution is workable. We still have strong use cases to push for providing access to this. We could have a strategy showing how access to objects would be useful in the future. But we're still working on a cloudy perspective because we don't have all the artifacts we're providing. So that's why we like a step-by-step approach. But we still have strong arguments to continue exploring this. @@ -598,8 +605,8 @@ RPR: I hope JHD and LEO can work together. WH: I don’t have specific concerns myself but get the sense from the discussion that different people have different ideas about what this proposal means for implementations. So I echo SYG's point that we should make it clear what implementations should implement before stage 3. - ### Conclusion/Resolution + - Proposal does not reach Stage 3 - will continue to be discussed. @@ -620,7 +627,7 @@ WH: We agreed to use the `v` flag. It implies `u` in the sense that everywhere w MB: I certainly can think of examples, e.g. Babel or any ECMAScript parser implementation in ECMAScript would need to be updated to not only check for the `u` flag, but also for the new `v` flag if they change their behavior for parsing accordingly. 
This is equivalent to the changes that parsers in JavaScript engines would need to implement as they implement this proposal. -WH: A potential alternative might be, rather than gating spec logic on either Unicode or UnicodeSet, that supplying the `v` flag automatically sets both the `u` and the `v` flags. But that may have other undesirable consequences. +WH: A potential alternative might be, rather than gating spec logic on either Unicode or UnicodeSet, that supplying the `v` flag automatically sets both the `u` and the `v` flags. But that may have other undesirable consequences. MWS: We intend for it to be possible and not forbidden to have both `u` and `v`, but that is the same as just specifying the `v` flag. @@ -630,11 +637,11 @@ JHD: My queue item is right after this. And I have a few questions about the spe WH: Okay, next item. We added a whole bunch of other characters with special meanings in some contexts, such as `&`, `@`, and whatnot. Thus we will allow backslash escapes of those so users don’t get stuck: you can write `\&`, `\@`, etc. inside the new character classes. Would it be useful to retrofit those backslash escapes to also apply inside regular character classes so users don't have to worry about where they can escape those characters with backslashes? Currently we forbid those in regular character classes in Unicode mode. -MB: That's an interesting proposal. I like it. I would like to hear what other people in the committee think about this. +MB: That's an interesting proposal. I like it. I would like to hear what other people in the committee think about this. KG: I'm in favor of it. I hate having to remember which things I need to escape. I just aggressively escape punctuation that might do things. And when I escape a `-` in Unicode mode, that doesn't work, and that's always annoying. So I am in favor of porting these identity escape sequences to Unicode mode. We're not going to give them other meanings, right? 
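KG's complaint about `-` can be seen in current engines: in Unicode (`u`) mode, `\-` is a valid identity escape inside a character class but a syntax error outside one. A small illustration (using the `RegExp` constructor so the parse error surfaces at runtime):

```javascript
// In `u` mode, `\-` is only a valid escape *inside* a character class.
const inClass = new RegExp("[a\\-z]", "u"); // OK: matches "a", "-", or "z"
console.log(inClass.test("-")); // true

// Outside a character class, `u` mode restricts identity escapes to
// syntax characters and "/", so `\-` is rejected.
let threw = false;
try {
  new RegExp("\\-", "u");
} catch (e) {
  threw = e instanceof SyntaxError;
}
console.log(threw); // true
```

WH's suggestion would extend the set of characters escapable this way so users don't have to remember which positions accept which escapes.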
-WH: `-` can be escaped as `\-`, but only inside a character class in Unicode mode. So it's weird. +WH: `-` can be escaped as `\-`, but only inside a character class in Unicode mode. So it's weird. WH: Next item. This thing will use both character sets and string sets and I’m iterating about how to spec that with the champions of the proposal. One thing that comes up is that, with the addition of string sets, matching choices become important. When matching against character sets, either the next character is in the set or not. There can be at most one element within a character set that matches. With string sets you can have multiple elements that match, so the question is which one to pick. I am assuming we want the longest one. I just wanted to raise this as an issue for others to think about. For now I think we'll go with the longest one. @@ -660,9 +667,9 @@ WH: I hear what you're saying. I don't understand the rationale. MWS: Because implementation is just a set of strings, which is barely more than sets of characters. Because what lots of people think of as characters needs to be encoded sometimes as multi-character strings, whereas the other is really a matcher spec. And if you look at Unicode, as I said, one is defined as a file with a few thousand lines, or a few hundred lines of strings, that are the set of strings, and the other one is defined as a regular expression. Those two are very, very different things and they behave differently, and they are implemented differently, and people think of them as different things. -WH: Let's go on to the rest of the queue. +WH: Let's go on to the rest of the queue. -JHD: Yeah, so I just had a few specific questions. So in this case, you've put the pipe at the Belgian flag in the parentheses in the first example. +JHD: Yeah, so I just had a few specific questions. So in this case, you've put the pipe at the Belgian flag in the parentheses in the first example. 
MWS: Yes, the left color bar of the Belgian flag is black on black background so it doesn't show up so well. @@ -670,21 +677,21 @@ JHD: So is there always supposed to be a pipe there or is that just happened? It MWS: That's not a pipe. That's just the one flag. -JHD: What happens if, instead of that flag, I put a character that isn't in the RGI Emoji set? +JHD: What happens if, instead of that flag, I put a character that isn't in the RGI Emoji set? MWS: The same thing that happens when you do set operations in other places: if you subtract something that's not in the left set then nothing happens. -JHD: Okay, so it's like a no-op. Might that perhaps mask a bug? +JHD: Okay, so it's like a no-op. Might that perhaps mask a bug? MWS: Well, that's the same in math. If you have a mathematical set of A and B and you subtract a mathematical set of B and C, you get the mathematical set that is A. -JHD: Sure. Yeah, I guess. Okay. So I mean, this is something we can figure out within stage 2, but to me this isn't math, and so, like, if it isn't a no-op, +JHD: Sure. Yeah, I guess. Okay. So I mean, this is something we can figure out within stage 2, but to me this isn't math, and so, like, if it isn't a no-op, MWS: But this is how people implement sets in general in computers as well. MB: If I may, there's actually a use case here. It can be that it masks a bug in some cases, but it can also be a valid use case to have a user-defined thing on the left hand side, and you want to make sure that whatever the subtraction produces definitely does not include this particular string or this particular set of strings; this feature lets you do that. This is an important use case, and I don't think we should make it throw an exception or anything like that just because it might be used incorrectly. -JHD: So, from the discussion with Waldemar, it sounds like if I do `/.../uv`, that is equivalent to `/.../v` in the way the regex is parsed and whatnot. If I do `/u` or `/v`, does `.unicode` return true and does `.flags` contain the string `u`? +JHD: So, from the discussion with Waldemar, it sounds like if I do `/.../uv`, that is equivalent to `/.../v` in the way the regex is parsed and whatnot. If I do `/u` or `/v`, does `.unicode` return true and does `.flags` contain the string `u`? MWS: No, Mathias was careful enough to point that out and define the spec such that that doesn't happen. @@ -692,31 +699,31 @@ JHD: Okay, so the intention then, is that, if I want to check for, you know, see MWS: Yeah. -JHD: Okay. +JHD: Okay. -MB: It seemed important for flags to continue mapping to only their one corresponding getter, and for a flag not to suddenly influence other getters all at once. So `v` and `u` are really two separate modes you can think of: one is the Unicode mode, and the other is the UnicodeSet mode. There is overlap in their behavior, since Unicode set mode is a superset of Unicode mode, building on the existing Unicode mode behavior. But as far as the flags in the getters are concerned, they are really two separate modes. +MB: It seemed important for flags to continue mapping to only their one corresponding getter, and for a flag not to suddenly influence other getters all at once. So `v` and `u` are really two separate modes you can think of: one is the Unicode mode, and the other is the UnicodeSet mode. There is overlap in their behavior, since Unicode set mode is a superset of Unicode mode, building on the existing Unicode mode behavior. But as far as the flags in the getters are concerned, they are really two separate modes. JHD: Is there any use case for providing both `u` and `v` other than perhaps wanting to dictate the output of flags and the getters? MWS: I don't know if there's a use case; it seems unfriendly to throw an exception if you have both, when it's not necessary, but if you think that would be better, yeah, that is also something to consider. -JHD: I think these can be discussed within stage 2.
I just posted those mostly to get clarification. +JHD: I think these can be discussed within stage 2. I just posted those mostly to get clarification. -MWS: Okay. Okay, thank you. +MWS: Okay. Okay, thank you. -MM: I just want to verify that this does not change the lexing rules of JavaScript, that anything that lexes tokens to tokenize JavaScript correctly today will continue to lex correctly tomorrow. +MM: I just want to verify that this does not change the lexing rules of JavaScript, that anything that lexes tokens to tokenize JavaScript correctly today will continue to lex correctly tomorrow. WH: Correct. -MB: Yes, that is correct. Mark, we made that an explicit goal. It's listed in the readme of this proposal. I know we've talked about this last time the proposal was brought up and the time before that. Rest assured it is an explicit goal of this proposal to not change that invariant. +MB: Yes, that is correct. Mark, we made that an explicit goal. It's listed in the readme of this proposal. I know we've talked about this last time the proposal was brought up and the time before that. Rest assured it is an explicit goal of this proposal to not change that invariant. MM: Okay. And also, I want to say, I appreciate the humor of including flags in the regex pattern content, as well as on the (?). -MLS: On this page here, in the second example you show a character class that also contains some strings. Earlier we were both talking about this: you suggest that you want to match the longest string first when we have a set of strings. It seems to me that this is going to be performance sensitive. If we match the longest string first, especially in the example here, where you have a range of lowercase ASCII letters, a range of uppercase ASCII letters, and a bunch of strings, the ranges are easily matched, while the strings, especially if there are a lot of them, are more expensive to match. And I also think that the semantics, for those that are using these regular expressions, may not be clear if there is an implied matching order when we have, effectively, a character class or set that contains properties of strings. +MLS: On this page here, in the second example you show a character class that also contains some strings. Earlier we were both talking about this: you suggest that you want to match the longest string first when we have a set of strings. It seems to me that this is going to be performance sensitive. If we match the longest string first, especially in the example here, where you have a range of lowercase ASCII letters, a range of uppercase ASCII letters, and a bunch of strings, the ranges are easily matched, while the strings, especially if there are a lot of them, are more expensive to match. And I also think that the semantics, for those that are using these regular expressions, may not be clear if there is an implied matching order when we have, effectively, a character class or set that contains properties of strings. MWS: So, in terms of performance, I can readily believe that this could be slower than not having strings, but we are not changing the matching behavior for sets that do not have strings. And so, if someone really wants this and does this, then they have a use case for it. And if that comes at some cost, I think that's the cost that it has, because that's the feature that they need. On the other hand, you could also readily optimize this. I mean, what we are doing is specifying behavior, but implementations are free to optimize on top of that. So the typical thing that we do, for example in a collation implementation, is that you would do a fast lookup on a character. And then there is a flag on the lookup result.
It says whether the character on its own is all that you need to look at, or you need to look at matching a suffix from there. So you have a data structure where you find a lookup result for C in this case, and C could tell you that if it matches by itself, then it's a match, but also there could be a suffix H that you have to match only if you get to C, depending on whether the next character is an H, and you don't have to check for multi-character strings when the first character is A or B. So there are common techniques that are widely known in the industry that can make that reasonably fast if desired. In terms of confusing semantics, I think people are used to not having an order or anything like that implied when there are characters in the character class, and I think it would be very confusing if there was an order implied, if the strings in a character class had to be matched in the order that they occur – in particular, if they occur in subtractions and intersections and nested classes and things like that. -MLS: Well, in a character class, you only care whether a character appears in that class. It's a unique set: does the character match what we’re currently looking at? Is it in that set or not? That's a whole lot different than when you start including properties and strings in a set, and I think regex users understand that. +MLS: Well, in a character class, you only care whether a character appears in that class. It's a unique set: does the character match what we’re currently looking at? Is it in that set or not? That's a whole lot different than when you start including properties and strings in a set, and I think regex users understand that. MWS: We have been using strings in the equivalent ICU and CLDR construct for the last 19 years and I'm not aware of that kind of an issue. @@ -726,25 +733,25 @@ MB: I believe that's correct, yes. SYG: Okay, this is not any kind of blocking concern.
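For reference, the per-character lookup with a continuation flag that MWS describes above can be sketched as a small trie that remembers the longest terminal seen. This is an illustrative sketch only (code-unit based, for simplicity), not ICU's implementation or the spec algorithm:

```javascript
// Build a trie over the set of strings; each node records whether a
// string ends here ('terminal') and which characters may continue it.
function buildTrie(strings) {
  const root = { children: new Map(), terminal: false };
  for (const s of strings) {
    let node = root;
    for (let i = 0; i < s.length; i++) {
      if (!node.children.has(s[i])) {
        node.children.set(s[i], { children: new Map(), terminal: false });
      }
      node = node.children.get(s[i]);
    }
    node.terminal = true;
  }
  return root;
}

// Longest element of the set matching at `pos`, or null if none matches.
function longestMatch(trie, input, pos) {
  let node = trie;
  let best = null;
  for (let i = pos; i < input.length; i++) {
    node = node.children.get(input[i]);
    if (!node) break;                        // no string continues with input[i]
    if (node.terminal) best = input.slice(pos, i + 1);
  }
  return best;
}

console.log(longestMatch(buildTrie(['c', 'ch']), 'chat', 0)); // "ch" – longest wins
console.log(longestMatch(buildTrie(['c', 'ch']), 'cat', 0));  // "c"
```

Only characters that actually begin some string in the set trigger suffix checks, which is the optimization MWS alludes to.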
I'm just trying to figure out in my head how we ought to think about these implicit implications; maybe we would want them to be more explicit. Do you get a sense that it would be significantly harder to use? I think I saw in chat that WH had suggested the idea of having `u` and `v` always be together if you want to use `v`. What do you think of that? -WH: If you set the `v` flag it would automatically set the `u` flag. +WH: If you set the `v` flag it would automatically set the `u` flag. -SYG: Okay, not that you explicitly have to type `u`, but that `v` implicitly implies `u`? +SYG: Okay, not that you explicitly have to type `u`, but that `v` implicitly implies `u`? WH: Okay… -SYG: I think I have some issues that I want to explore to see. Yeah, okay, that sounds good. I think I would feel more comfortable with that than with the implicit information. It's a gut reaction; I haven't thought it through. +SYG: I think I have some issues that I want to explore to see. Yeah, okay, that sounds good. I think I would feel more comfortable with that than with the implicit information. It's a gut reaction; I haven't thought it through. MB: I think in response to what WH said earlier, what was compelling to me is the backwards compatibility aspect. Existing code that handles a RegExp object and goes down different code paths for `unicode` vs. non-`unicode` mode might want to do the same for this new flag. (Then again, they might need a new code path, since `unicode !== unicodeSet`.) That's an argument that would lead me in the direction of: if we make a change to the current proposal, maybe we should require both the `u` and the `v` flag to be explicitly set. Everything is explicit that way. -WH: I don't like it as much because it's just more typing and I would still want `v` to become standard usage. And I don’t want to have to deal with what happens if someone wrote just `v` when that doesn’t imply `u`.
It would be annoying for that to throw an exception. It would be equally annoying to have to define a non-Unicode `v` mode. +WH: I don't like it as much because it's just more typing and I would still want `v` to become standard usage. And I don’t want to have to deal with what happens if someone wrote just `v` when that doesn’t imply `u`. It would be annoying for that to throw an exception. It would be equally annoying to have to define a non-Unicode `v` mode. -SYG: The high-order bit for me is just the implicitness. I would feel more comfortable if it were explicit, whether that means you have to manually type it or that it's testable. But the fact that it's an implication seems okay to me. +SYG: The high-order bit for me is just the implicitness. I would feel more comfortable if it were explicit, whether that means you have to manually type it or that it's testable. But the fact that it's an implication seems okay to me. MLS: So, just following up on Shu's comment: if I set `v` but I don't set `u`, what does the `unicode` getter return? There are different semantics with `v` vs. just `u`. WH talked about whether `v` implies `u`, or whether it should be a syntax error if they are not both present, or if they are both present. Since there are semantic differences, it seems like we almost have to have it so that `v` is a superset of the `u` behavior, but they cannot be used together, because the `u` semantics are different than the `v` semantics, as we discussed. I think there's some confusion as to how this should work and how a regular expression should be constructed, not just in the engine but also for the people that use it. MWS: So, in my mind, if I might respond to that quickly: these questions need to be resolved, but it's not very near and dear to Mathias's and my hearts exactly which way we go. So I don't know if we require that `u` is given when someone does `v`, or we forbid that `u` is given when someone requires `v`.
I'm perfectly happy to collect arguments and have a vote or something like that at some point. -MLS: So I think these are stage 2 concerns: how this works, what is implied, what the getters do, whether it is a syntax error, whether they are both required. +MLS: So I think these are stage 2 concerns: how this works, what is implied, what the getters do, whether it is a syntax error, whether they are both required. RPR: To clarify Michael, you're saying that these are things that could be worked through during stage two. @@ -754,11 +761,11 @@ SYG: explicitly, not a blocking mechanism. MLS: Well, I think we do have a spec that says what happens. But we can change that. -WH: What currently happens is that `v` implies the behavior of `u`, but it does not show up as `u` in reflection, it only shows up as `v`. +WH: What currently happens is that `v` implies the behavior of `u`, but it does not show up as `u` in reflection, it only shows up as `v`. SFC: I just want to say I'm really happy with this proposal moving to Stage 2. I support Stage 2, and I'm really happy with Mathias and Markus keeping me in the loop and resolving my concerns with the previous set of strings proposal. -RPR: I'm really happy to see this progressing to stage 2. Let's see if the champions would like to ask. +RPR: I'm really happy to see this progressing to stage 2. Let's see if the champions would like to ask. MWS: I would like to ask for advancement to stage 2. @@ -766,7 +773,7 @@ WH: Enthusiastic yes from me. RPR: Any objections to advancing? -MS: So I don't know if we are advancing the set operations in the flag, or properties of strings as well. Are you merging these into one proposal at this point? +MS: So I don't know if we are advancing the set operations in the flag, or properties of strings as well. Are you merging these into one proposal at this point? MWS: Yes. @@ -780,7 +787,7 @@ RGN: I would like to participate in the review. Thank you.
MB: MLS I don't want to put you on the spot, but I know you've already reviewed the properties of strings proposal as a stage 3 reviewer. So if you want to review this proposal (which subsumes properties of strings) as well, that would definitely be welcome. -MLS: You can add me as a reviewer. +MLS: You can add me as a reviewer. MB: Nice, thank you. @@ -789,7 +796,6 @@ RPR: So you have your three reviewers. MB: Perfect. Thank you very much everyone. ### Conclusion/Resolution + - Stage 2 - reviewers: WH, RGN, MLS - - diff --git a/meetings/2021-07/july-13.md b/meetings/2021-07/july-13.md index f203e2fb..80b0ebba 100644 --- a/meetings/2021-07/july-13.md +++ b/meetings/2021-07/july-13.md @@ -1,7 +1,8 @@ # 13 July, 2021 Meeting Notes ------ -**Remote attendees:** +----- + +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Bradford C. Smith | BSH | Google | @@ -21,13 +22,12 @@ | Frank Yung-Fong Tang | FYT | Google | | Richard Gibson | RGN | OpenJS Foundation | - ## Secretary's Report + Presenter: Istvan Sebestyen (IS) - [slides](https://github.com/tc39/agendas/blob/master/2021/tc39-2021-038.pdf) - IS: So this is the usual report from the secretariat. So first of all, I have to apologize. It is 30 minutes after 3 o'clock in the morning here, so it is very very early in the morning. So I will give my presentation then I will go to bed and if there is anything for which you need me, then you, well, it is better to do it at the end of the session because then I would wake up according to the normal time. So this the introduction… from the TC39 Secretariat. And I go to the next point. You know what has happened lately? So these are the usual points. So list of the relevant TC39 documents. So I will just show you - you know - and the GA documents that are the relevant documents during this time since the last meeting, which was in May. So we have in the meantime, also a couple of new TC39 members. 
So, obviously the TC39 Management Group knows that. And, also, those who are following the github they too. So the next one is to show the numbers of the TC39 meeting participation very quickly - because the trend is the same as in the past re. the latest standards’ download and access statistics. And then the very important information that ES2021 has been approved by the June GA. ECMA-262 2021 has been approved by the Ecma general assembly. so that everything went as it was planned, so that's good. Then I report back about the results of the Ecma Recognition Award, awarded by the General Assembly to them, Then there is a longer topic which is not presented here. So, no detailed information here. I have a small summary of that but I will basically then point out to the documentation that we have, because the time that is allocated to this is much too short here, this is one of the reasons. The second reason is that it is really - that TC39 people who are interested in the subject are not that many in TC39, I think, and those who are interested subject, should rather join the general assembly discussion on that and the discussion between the GA, the Execom, but you will get here, at least some entry points to the topic. And what is the current status? And then as usual, the dates of the next TC39, the next ExeCom meeting, we have an update based on the results of the general assembly and last but not least ECMA became 60 years old and that was on June 17, and normally that we have a larger anniversary celebration at the GA with a nice dinner, etc. Etc. Of course this year we don't have anything. So they are going to try to have this social event at the next general assembly meeting in December 2021, which is planned in Switzerland in the Lake Geneva area or in the Rhone-River area. And if because of covid that's not possible yet, then they would like to postpone it to the next June but then it would be the “Ecma 61st anniversary”. 
IS: So it is a pretty much very similar type of presentation, but there is really nothing new. I will try to be very fast. Okay, these are here, the latest ECMA TC39 documents that you have not seen in the list of this type of presentation yet. So, first of all, the minutes of the 83rd TC39 meeting in May. It was a virtual meeting, we have just approved that. So this is already done now. The next document one is what we always do is that we are extracting from the GitHub, all the slides, all the presentations, which were delivered to the 83rd TC39 meeting, and this is not so interesting for you who are participating actively in TC39, but it is a very important aspect for the long-term archival process that we are making here in ECMA. We are transferring the information from GitHub into archival information that can also be read e.g. after 20 years or 30 years or later. And this is really needed for Ecma as a classical standardization organization, that practically the information which we are preparing here together for the development of the different versions of ECMAScript standards and internationalization standards, etc. That such information can be searched and found also after 30, 40, 50 years, Etc. So practically “forever”. So this is a typical long-term archival functionality that we have to do as an SDO. But as I mentioned for your working purposes, this is not so interesting because TC39 has its own tools, etc. You know that we are also using now one of the tools with TCQ, but we have also the long-term archival duty that any SDO requires, and we do that together with Patrick Charollais. So this is about this document and then of course, here is the venue of this meeting, and also the meeting agenda. Again, this is important for the information of the other TCs and GA participants who are not accessing the TC39 GitHub, This is not that important for you but it is for them. So this is also what you have. @@ -44,30 +44,31 @@ IS: Well, life goes on... 
we have received for TC39 participation a new Ecma app IS: So this is now our summary of the new TC39 members. I already mentioned it here, there were a couple of members who were approved formally as ECMA members in the June 21st meeting. So the first one is from China, Beijing Bytedance Network Technology as Associate Member.. They have already participated in TC39 meetings if I remember correctly. They are also very welcome, not just as TC39 members, but now as formal ECMA members too, The same goes also to the people from the Zalari Gmbh as SPC members. They have also done administrative work. Then for Rome Tools the same - as the last one. So these are also formal ECMA GA members now. -IS: TC39 meeting participation. Well, here you can see the list of the recent TC39 meeting participation starting two years ago. So from July 19, in Redmond, one can see 60 people participated total locally, 30 remote participants in the meeting in Redmond and 24 companies. Now, I'm not going to go through all the other meetings. Well, I go to the next slide, which lists the last meeting. It was a remote meeting. In total we had 74 participants, 0 locally because it was remote, of course. The number of companies was 27. So both the number of meeting participation, and then also the number of company participation - this includes also, by way, the Ecma Secretariat's Etc - the new companies, it is about the same. So the absolute record so far in the first slide - basically before the approval of the 2021 specification. In the January 2021 remote meeting, 95 participants, and also 27 companies, so not more than that now. I'm extremely proud of it because this represents a significant part of the entire Ecma work. According to my judgment, at least 60% of all Ecma work goes only to the account of TC39. But also the other open source projects are doing rather well. So TC53 has approved its first standards at the GA meeting. Congratulations also to them. They are also doing very well. 
And also TC 49, other programming languages C sharp, CLI, etc. TC49 presents their stuff over Github too, +IS: TC39 meeting participation. Well, here you can see the list of the recent TC39 meeting participation starting two years ago. So from July 19, in Redmond, one can see 60 people participated total locally, 30 remote participants in the meeting in Redmond and 24 companies. Now, I'm not going to go through all the other meetings. Well, I go to the next slide, which lists the last meeting. It was a remote meeting. In total we had 74 participants, 0 locally because it was remote, of course. The number of companies was 27. So both the number of meeting participation, and then also the number of company participation - this includes also, by way, the Ecma Secretariat's Etc - the new companies, it is about the same. So the absolute record so far in the first slide - basically before the approval of the 2021 specification. In the January 2021 remote meeting, 95 participants, and also 27 companies, so not more than that now. I'm extremely proud of it because this represents a significant part of the entire Ecma work. According to my judgment, at least 60% of all Ecma work goes only to the account of TC39. But also the other open source projects are doing rather well. So TC53 has approved its first standards at the GA meeting. Congratulations also to them. They are also doing very well. And also TC 49, other programming languages C sharp, CLI, etc. TC49 presents their stuff over Github too, IS: So now the downloads of the ECMA standards So these are not TC39 standards only, what I have shown you in earlier meetings. Now, since Isabelle Walch - who is in our secretariat doing these statistics and she helps me out with the figures - is on vacation, therefore I have taken the download statistics from the last GA meeting. So these are originally for those who are following the GA meeting download figures for all the Ecma standards. 
Now, the “red” ones are coming from TC39 and here are the “blue” ones which is the next to larger downloads of the Ecma. This is the figure of the OOXML (open xml document) Ecma standard. So that's an old standard from 2008 Etc. Then the “green” ones. This is the CLI and C# standard by TC49. And the “black” ones, which are for all the different “rest” other Ecma standards. But here for us, the message is that TC39 is extremely in the top. All together, thirty-two thousand downloads of Ecma standards were carried out during almost the first half of 2021. Now, this is the more important part for us. And this is with the status of the end of May. So, these are the access statistics regarding the HTML version of the different ECMA-262 editions. So you can see they are almost 200,000, and this one is for the ECMA-402 for the internationalisation standard. So it is almost 10,000. This figure you already know as the figures that I have presented at the May meeting are very close to this. And here - download of the Ecma Technical Reports - the Ecma TR/104, Ecmascript, Test Suite, second edition. The figure is only 87, which is very, very low and you will be wondering why is it so low…. Because this is sort of “cheating”, this TR/104 contains only the “readme file” of Test Suite. And actually the entire 30k+ small tests programs, that people are downloading or accessing those are separate (outside of this TR). They are not really included into this document, because they are added without a full TC39 approval as they become available - usually after donation by the authors. The two-page document contains only a link to the repository of the 35,000 test modules. Well this is a special TC39 product. -IS: I will go back to the most important results. Congratulations, everything was formally approved by the Ecma GA #121 and it went without any problems. Also the work of the preparation, like document publication Etc. 
And without problems “start” and “ending” of the “opt-out” period to satisfy the royalty-free, the patent policy requirements, according to those special rules. Well, no “opt-out” remarks were submitted by anybody. So this is exactly like in the past 10 years. Etc, no change on that. But so it went through. So we have passed that Etc. And then Documentation which you can see is the relevant documentation, the announcement of the opt-out period And then here these were the official drafts. Presentation, publication TC39 depository, Etc. So we have done that two months before the GA meeting, also exactly according to the WTO TBT rules, how this is how it should be, so that was done and congratulations again. +IS: I will go back to the most important results. Congratulations, everything was formally approved by the Ecma GA #121 and it went without any problems. Also the work of the preparation, like document publication Etc. And without problems “start” and “ending” of the “opt-out” period to satisfy the royalty-free, the patent policy requirements, according to those special rules. Well, no “opt-out” remarks were submitted by anybody. So this is exactly like in the past 10 years. Etc, no change on that. But so it went through. So we have passed that Etc. And then Documentation which you can see is the relevant documentation, the announcement of the opt-out period And then here these were the official drafts. Presentation, publication TC39 depository, Etc. So we have done that two months before the GA meeting, also exactly according to the WTO TBT rules, how this is how it should be, so that was done and congratulations again. -IS: Now the Ecma Recognition Awards. I have already reported to you in May that unfortunately, from our proposal to the ExeCom / GA, not everybody came through, but okay, let me start with a good news. So I would like to congratulate Jordan Harband. He is one of the oldest active co-workers in TC39. 
And he's also the “champion” for bringing in several new Ecma member companies as he changed his companies; he has been, if I remember, at Airbnb, etc. So here is his present new member company, Coinbase. So I would like to recognize Jordan’s excellent activities for TC39, and this has also been reflected by the General Assembly. So let's go to the next one. [applause] +IS: Now the Ecma Recognition Awards. I have already reported to you in May that unfortunately, from our proposal to the ExeCom / GA, not everybody came through, but okay, let me start with the good news. So I would like to congratulate Jordan Harband. He is one of the oldest active co-workers in TC39. And he's also the “champion” for bringing in several new Ecma member companies as he changed his companies; he has been, if I remember, at Airbnb, etc. So here is his present new member company, Coinbase. So I would like to recognize Jordan’s excellent activities for TC39, and this has also been reflected by the General Assembly. So let's go to the next one. [applause] IS: Next Meeting Schedule: The next few meetings are remote as Aki has already announced, in alternation between 4-day and 2-day long meetings until the end of the year. No change this year. But we have to continue the discussion about what the next year's schedule should look like. According to the first presentation, maybe we are going back to the 6-meeting plan. I'm sure this is a separate discussion again, so I'm not going to touch it. But we need sooner or later, or rather sooner, the schedule for 2022. IS: Then the next meeting is the official one. It will be 8th and 9th of December 2021, so it will be either the first face-to-face meeting, or it will be still remote. It is not decided yet. It is also not decided whether it's going to be in Geneva. Geneva in December, it is not too nice.
If it will be in Montreux - Montreux is much nicer, with a nice Xmas market - that is just one hour by train from the Geneva airport. Then there is Crans-Montana, which is listed too. This is up in the mountains, in the Rhone Valley, so that's about two or maybe three hours by train from the Geneva airport, passing the Lake Geneva area. Nice trip. You go into the Rhone Valley and then it will be on the left side; it's a village 1300 or 1400 meters up. So this is a snow resort mountain and the skiing is usually very good; in early December you would already have snow. If it will be a remote meeting then the Summer meeting will be face-to-face in the Geneva-lake area. One word about the ECMA 60th anniversary, which will be celebrated in the first face-to-face GA meeting, depending on whether that will be in December 2021 or in June 2022.

## ECMA262 Editors' Status Update
+
Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1doR1uDcWAsepZ8Rp8OWftgJqZTgr96A8sz24_G1Mqqo/edit)
-
+
KG: We've made a few non-trivial editorial changes. One update that you might notice is that the equality operations were these sort of abstract-operation-like things that had their own calling convention and syntax and that was weird. Now they are just regular abstract operations and they have been renamed to make them clearer when using the regular spec internal calling syntax. So that's 2378.

-KG: 2413 is, I rewrote all of the machinery for async generators because I have looked at it several times and been confused by it every single time I have looked at it, there is at its base, a relatively simple state machine, but the state machine was all routed through this message pump that did not make it at all clear what was going on. I have re-written it into a much simpler, not recursive state machine, that just does the transitions and actions. 
Hopefully that is clear, but if you are wondering why your implementation comments that have the specs steps don't match up to what is currently in the specification, it's because of this change. I think it was worth it on balance to make the machinery clearer.
+KG: 2413 is, I rewrote all of the machinery for async generators because I have looked at it several times and been confused by it every single time. There is, at its base, a relatively simple state machine, but the state machine was all routed through this message pump that did not make it at all clear what was going on. I have re-written it into a much simpler, non-recursive state machine that just does the transitions and actions. Hopefully that is clear, but if you are wondering why your implementation comments that quote the spec steps don't match up to what is currently in the specification, it's because of this change. I think it was worth it on balance to make the machinery clearer.

KG: And then this last item is, we introduced a mechanism for defining spec-internal closures a while ago. We also introduced, a few months ago, a mechanism to use those closures to create built-in functions, rather than the prevailing style, which was to have a separate clause that listed all of the steps and coordinated state via internal slots. Now we can use abstract closures inline, which is much more concise and I think easier to follow, and we have done that in the places where it made sense to. Incidentally, this closed one of the older issues on the spec, because it has been something we, and other consumers of the spec, wanted to do for a long time.

-KG: Also a few meta changes. So this first one: as you are hopefully aware, we have a multi-page build of the spec, which is useful primarily to people on Chrome because Chrome chokes on large pages. Now I've added a shortcut to toggle between the single and multi page version. 
You can just press `m` and it will bring you to the corresponding section on the other version of the page. And then the second one was a contribution from ms2ger (thank you ms2ger), to make it so that when you click on multiple variables they highlight in different colors. There is a screenshot. So you can see rather than all of the things being highlighted with the same color, the different variables have different colors. That's why there's different colors now if you're confused, it’s because there's different variables. I love this contribution.
+KG: Also a few meta changes. So this first one: as you are hopefully aware, we have a multi-page build of the spec, which is useful primarily to people on Chrome because Chrome chokes on large pages. Now I've added a shortcut to toggle between the single and multi-page versions. You can just press `m` and it will bring you to the corresponding section on the other version of the page. And then the second one was a contribution from ms2ger (thank you ms2ger), to make it so that when you click on multiple variables they highlight in different colors. There is a screenshot. So you can see that rather than all of the things being highlighted with the same color, the different variables have different colors. That's why there are different colors now: if you're confused, it's because those are different variables. I love this contribution.

KG: And then the next thing, which hopefully you shouldn't notice, is that we have migrated from Travis to GitHub Actions because, as I am sure you are aware if you do any open source work, Travis has basically kicked everyone off. So if you notice builds not working or something, this is what's going on; please ping the editor group and/or Jordan and we'll try to get things working again. 
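KG's description of the async-generator rewrite in 2413 can be sketched in plain JavaScript. This is illustrative only: the spec text is not JavaScript, the state names here are invented, and the real machinery is more involved. The point is the shape of the change: a flat table of explicit transitions rather than a recursive "message pump".

```javascript
// Illustrative sketch only (invented state names): a flat, non-recursive
// state machine that "just does the transitions and actions".
function makeMachine() {
  const transitions = {
    "suspended-start": { resume: "executing" },
    "executing": { yield: "suspended-yield", complete: "completed" },
    "suspended-yield": { resume: "executing" },
    "completed": {},
  };
  let state = "suspended-start";
  return {
    get state() { return state; },
    step(action) {
      const next = transitions[state][action];
      if (next === undefined) {
        // Invalid transitions are rejected explicitly instead of being
        // threaded through a recursive driver.
        throw new TypeError(`cannot ${action} while ${state}`);
      }
      state = next;
      return state;
    },
  };
}

const m = makeMachine();
m.step("resume"); // "executing"
m.step("yield");  // "suspended-yield"
```

Because every legal transition is visible in one table, a reader (or implementer) can check the whole lifecycle at a glance, which is the clarity win KG describes.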
@@ -77,26 +78,26 @@ KG: And then normative changes, there's only one that has landed since the last

KG: And then a similar list of upcoming stuff, which, I'm not going to talk about all of these in detail because we have talked about all of them before. We haven't added any new ones except this last one, which is, now that we have or almost have a syntax for structured data about abstract operations, we want to introduce new data. And one example of the data we want to introduce is whether a given abstract operation can, in general or at any particular call site, invoke user code, which is something that is extremely useful for an implementation to know: whether this is something you have to worry about potentially having arbitrary side effects, or if this is strictly internal. So hopefully someday in the relatively near future we will come with annotations that tell you that. And that's it.

-MM: You mentioned the `__proto__` accessor being marked legacy. Is that also normative optional and his legacy employment of optional optional. I don't remember what we agreed the marking of Legacy means
+MM: You mentioned the `__proto__` accessor being marked legacy. Is that also normative optional? And does legacy imply normative optional? I don't remember what we agreed the marking of Legacy means.

-KG: Legacy does not imply normative optional but in this case I believe this was also marked as normative optional. 
+KG: Legacy does not imply normative optional, but in this case I believe this was also marked as normative optional.

MM: Okay, and the syntax is neither normative optional nor legacy, correct?

-KG: Let me confirm that very quickly - the syntax is neither normative optional or Legacy that is correct. 
+KG: Let me confirm that very quickly - the syntax is neither normative optional nor Legacy, that is correct.

-MM: Okay, thank you. 
+MM: Okay, thank you.

-KG: Sorry, I should have mentioned this. 
This PR also includes the definegetter and definesetter accessors, and the lookup versions, which are also Legacy and normative optional. Just like the dunder proto accessor.
+KG: Sorry, I should have mentioned this. This PR also includes the `__defineGetter__` and `__defineSetter__` accessors, and the lookup versions, which are also Legacy and normative optional, just like the dunder proto accessor.

-JH: To clarify, something can have Legacy or normative optional or both or neither. So like we can decide what what fits each thing, 
+JH: To clarify, something can have Legacy or normative optional or both or neither. So, like, we can decide what fits each thing.

## ECMA-402 Status Update
+
Presenter: Ujjwal Sharma (USA)

- [slides](https://notes.ryzokuken.dev/p/EAc7lufKN#/)
-
USA: So hello. I'm Ujjwal and this is the Ecma 402 status update for the Japan meeting, which looks awfully like my home these days. So what is Ecma 402? For the uninitiated, ECMA 402 is the specification that contains the API for JavaScript's built-in internationalization library. So let's say you have a date, specifically this date: you can format it for English (US) and get the interesting US format, or the British format, or the Japanese format, which is so much more efficient; why don't we all use that?

USA: So how is Ecma 402 developed? Ecma 402 is, as I just mentioned, a separate specification, and it is developed by another subgroup: the group we're in right now is TC39 TG1, and that specification is developed specifically by TC39 TG2. Proposals follow the standard TC39 process to move through the stages. We have monthly calls to discuss details, and if you want to join and you're interested, shoot an email at this email, and there's more information at the Repository.

@@ -109,18 +110,18 @@ USA: Okay. Perfect. Thank you. Just going through the different stage three prop

USA: So yeah, intl locale info is next. This is championed by Frank. And if you run V8 in the harmony mode, this is already implemented.
It's not shipping though, so it's behind a flag, and JSC and SpiderMonkey are still implementing this. This one gives you additional information on the Intl.Locale object. So if I create an Intl.Locale object with the locale Japanese, it gives me the calendars, that is, gregory (which stands for the Gregorian calendar) and japanese, and then the collations for the Japanese locale, the hour cycles, and so on. And if I go with English (US), the set is different and has tons of time zones, many hundreds of time zones, which I guess is good for Japan, that they only have one time zone.

-USA: Yeah, so next up, we have Intl display names V2. So this improves the existing Intl display names object. This is also championed by Frank and this is implemented in V8 Harmony again also in Spidermonkey nightly so 91 to be more precise. JSC is still pending. And for this one we need tests, so if you like writing tests in JavaScript, feel free to help us, that would be really appreciated. So, this one's quite interesting and straightforward. You can create a display names object using whatever locale you like. So let's say I want to see what the names of different calendars are in English. there is the RC coming And I can do that. calendar (?). So on and I can find the names of all these calendars in Japanese as well. So that's been helping me, learn the language.:A
+USA: Yeah, so next up, we have Intl display names V2. So this improves the existing Intl display names object. This is also championed by Frank, and this is implemented in V8 Harmony, and again also in SpiderMonkey Nightly (91, to be more precise). JSC is still pending. And for this one we need tests, so if you like writing tests in JavaScript, feel free to help us; that would be really appreciated. So, this one's quite interesting and straightforward. You can create a display names object using whatever locale you like. So let's say I want to see what the names of different calendars are in English. 
there is the roc calendar(?), and I can do that, and so on. And I can find the names of all these calendars in Japanese as well. So that's been helping me learn the language.

USA: And next up, we have extended time zone name. So there is a timeZoneName option in date-time format, and this proposal expands that option to accept more values. This is also written by Frank. Thank you Frank for championing so much stuff. And this is also implemented in V8 Harmony and in SpiderMonkey, and JSC is pending, so implementations and tests are also wanted for this one. And if you see my time zone in English - or actually not my time zone but Pacific time, so maybe your zone - these are all the different ways you can now express the time zone: you've got Pacific Standard Time, Pacific Time, PT, everything. And then you can also do that in, say, Japanese.

-USA: Regarding stage two and one proposals. There's Intl duration format that is stage two and I'm the . It's not for stage advancement this time unfortunately, but hopefully next one. There's Intl number format, we stage 2, and championed by Shane, and it's going for stage advancement, this meeting. So keep an eye for that. There's internationalisation enumeration API by Frank. Also going for advancement, and then there's a bunch of really interesting stage one proposals that we're still working through and they're not going for stage advancement this time.
+USA: Regarding stage two and stage one proposals: there's Intl duration format, which is at stage two and which I'm championing. It's not up for stage advancement this time unfortunately, but hopefully next time. There's Intl number format, at stage 2 and championed by Shane, and it's going for stage advancement this meeting, so keep an eye out for that. There's the internationalisation enumeration API by Frank, also going for advancement. And then there's a bunch of really interesting stage one proposals that we're still working through, and they're not going for stage advancement this time.
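The demos USA describes can be sketched as follows. Hedged: at the time of this meeting these proposals were behind flags in V8/SpiderMonkey, so availability and the exact strings returned depend on the engine and its ICU data.

```javascript
// Intl.DisplayNames v2 adds type "calendar": human-readable calendar names
// per locale. Exact strings depend on the engine's ICU data.
const calNames = new Intl.DisplayNames(["en"], { type: "calendar" });
console.log(calNames.of("gregory")); // e.g. "Gregorian Calendar"

const calNamesJa = new Intl.DisplayNames(["ja"], { type: "calendar" });
console.log(calNamesJa.of("japanese")); // the Japanese name of that calendar

// Extended time zone name: the timeZoneName option accepts more values,
// e.g. "shortGeneric", which renders Pacific Time as something like "PT".
const fmt = new Intl.DateTimeFormat("en", {
  timeZone: "America/Los_Angeles",
  timeZoneName: "shortGeneric",
});
console.log(fmt.format(new Date(0)));
```

Requesting an unsupported `timeZoneName` value throws a RangeError, so this also doubles as a feature check in engines that predate the proposal.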
USA: If you like this and if you like the general idea that we're working on, how do you get involved? Well, as I mentioned, here is the repo, TC39/ecma402. So feel free to drop by and open an issue. You can also give us feedback on open issues. You can help us write MDN documentation - we have a specific repo, ecma402-mdn, to track and follow progress on MDN stuff - and you can implement the different proposals that we talked about in JS engines and polyfills. You can also help us write test262 tests and add the plumbing to ICU, which is a native library that enables browsers to do most of these interesting things. To join our monthly calls, email us; we'd be glad to have you. And thank you, arigato.

-WH: I'm curious where the “h23” vs “h24” hourCycle nomenclature came from. The intuitive thing would be to write “h24” for a 24-hour clock but that's almost always wrong. 
+WH: I'm curious where the “h23” vs “h24” hourCycle nomenclature came from. The intuitive thing would be to write “h24” for a 24-hour clock but that's almost always wrong.

USA: Yeah, I believe the hour cycle preferences come from the Locale itself. So each Locale has a default hour cycle that is sort of preferred for it. So for example, English (US) would have “h12” if I'm correct, but many others would have “h24”, for example.
-
+
WH: Actually, I think anything which uses “h24” is probably wrong. What you want is “h23”. I’m curious where these confusing names came from.

USA: yeah, I think the list of hour cycles by Unicode also follows this convention so maybe that's one of the places where they drew inspiration from, but I think Shane or somebody else from TG2 might be able to answer that better. I can raise this later offline.

@@ -128,21 +129,20 @@ USA: yeah, I think the list of hour cycles by Unicode also follows this conventi

AKI: Thank you. 
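For context on WH's point, the four hourCycle values differ in how they number the hours (this follows the UTS 35 definitions that ECMA-402 uses; `h24` is the one that renders midnight as 24:00, which is why it is almost always the wrong choice):

```javascript
// hourCycle semantics (from UTS 35, which ECMA-402 follows):
//   h11: hours 0-11 (midnight is 0 AM)    h12: hours 1-12 (midnight is 12 AM)
//   h23: hours 0-23 (midnight is 00:xx)   h24: hours 1-24 (midnight is 24:xx)
const midnight = new Date(2021, 0, 1, 0, 30);
for (const hourCycle of ["h11", "h12", "h23", "h24"]) {
  const fmt = new Intl.DateTimeFormat("en", {
    hour: "numeric",
    minute: "numeric",
    hourCycle,
  });
  console.log(hourCycle, fmt.format(midnight));
}
```

A locale's default comes from its data (as USA says), but the option lets you override it, which is how the h23/h24 distinction becomes observable at all.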
## ECMA-404 Status Update
+
Presenter: Chip Morningstar (CM)

-CM:
-ECMA 404
-The standard rests unchanging
-As we meet again
+CM:
+ECMA 404
+The standard rests unchanging
+As we meet again

AKI: Thank you very much.

- ## Code of Conduct

-JHD: I don't think there's any updates at this time.

## Election of TG3 Security Task Group Chair
+
GRS: Hey everyone, I'll just do a quick introduction because I haven't had the opportunity to meet everyone yet. I'm Granville Schmidt. I work at F5 in the office of the CTO, where I work on innovations in the future of applications, and I have a background in security, application development, and startups. And I have spent a lot of time in open source across different technologies and languages. And for me, what's really exciting about this opportunity is that in security everything is built on different layers, and they all build upon each other, and one of the most foundational and critical layers is the underlying language, right? And JS is a language which is used by many applications. So getting the opportunity to work with each of you, to ensure that one of the most critical layers has many eyes on it and that we review it and tackle security issues early on, is really exciting to me. So thank you.

AKI: Do we have consensus for Granville to be chair?

@@ -156,6 +156,7 @@ IS: This is IS. So the first one will be, you know, that from Ecma Secretariat. 

GRS: Thank you, I look forward to working with you.

## ECMA Proposal
+
Presenter: Istvan Sebestyen (IS)

- [proposal](https://github.com/tc39/Reflector/issues/386)

(notes from this section are on the Reflector)

## Remove "Designed to be subclassable"
+
Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/tc39/ecma262/pull/2360)
- [slides](https://docs.google.com/presentation/d/1WDLS4tBiAbEJQeBYRJwjut_yfseGBKocTHpUlM4dpJM/)

+KG: Okay. 
So the title slide here is perhaps slightly more provocative than it needs to be. Okay. So this is a normative pull request. Well, arguably normative, arguably editorial. The content of the pull request, is - you see this highlighted bit of text here? This highlighted bit of text says that Boolean, like capital B Boolean, was designed to be subclassable. This is technically true. It was technically designed to be subclassable. I think this is a very strange thing to call out. The intended reading of that clause according to Allen is just that it may be used as the value of an `extends` clause and things will work as you expect, in that this will create an instance of Boolean with all the correct internal slots and so on. People read this to imply more than that. In particular, some people read this to imply that it is intended to be subclassed -- that we are actively recommending that you subclass built-ins. Of course, this is not just true of Boolean. This is true of every single constructor. We do not differentiate among them. We say Boolean is designed to be subclassable, Function is designed to be subclassable (though good luck with that if you are in a CSP environment), we say Array and Map and Set are designed to be subclassable. Now arguably for some of these it may make more sense to do so. But like I said, we don't differentiate among them and I think at least for Boolean, it is extremely silly to call this out. So proposal, let's remove that, specifically this highlighted part [highlighted text in slide 3]. This is not an endorsement of subclassing any particular thing, this is not expressing opposition to subclassing any particular thing. I'm not trying to say that you should or shouldn't subclass Boolean, or Map, or Array. I am just trying not to suggest that you definitely should subclass every built-in, which the current wording suggests to some people. 
If you are interested in calling out a particular subset as being a good idea to subclass you're welcome to do so, as a separate effort. -KG: Okay. So the title slide here is perhaps slightly more provocative than it needs to be. Okay. So this is a normative pull request. Well, arguably normative, arguably editorial. The content of the pull request, is - you see this highlighted bit of text here? This highlighted bit of text says that Boolean, like capital B. Boolean, was designed to be subclassable. This is technically true. It was technically designed to be subclassable. I think this is a very strange thing to call out. The intended reading of that clause according to Allen is just that it may be used as the value of an `extends` clause and things will work as you expect, in that this will create an instance of Boolean with all the correct internal slots and so on. People read this to imply more than that. In particular, some people read this to imply that it is intended to be subclassed -- that we are actively recommending that you subclass built-ins. Of course, this is not just true of Boolean. This is true of every single constructor. We do not differentiate among them. We say Boolean is a designed subclassable, Function is designed to be subclassable (though good luck with that if you are in a CSP environment), we say Array and Map and Set are designed to be subclassable. Now arguably some of these it may make more sense to do so. But like I said, we don't differentiate among them and I think at least for Boolean, it is extremely silly to call this out. So proposal, let's remove that, specifically this highlighted part [highlighted text in slide 3]. This is not an endorsement of subclassing any particular thing, this is not expressing opposition to subclassing any particular thing. I'm not trying to say that you should or shouldn't subclass Boolean, or Map, or Array. 
I am just trying not to suggest that you definitely should subclass every built in which the current wording suggests to some people. If you are interested in calling out a particular subset as being a good idea to subclass you're welcome to do so, as a separate effort. +KG: I guess I should review the history before I ask for consensus. This came up in the context of Temporal where the operations in Temporal do not defer to Symbol.species because they are sort of a bag of classes and it doesn't really make sense to try to subclass one of them and assume you automatically get the “right” behavior because you will get nonsense, if you convert to a different type and convert back, and so Temporal does not provide particular affordances for subclassing. And so Temporal wanted to remove this wording for their constructors. I think this wording should be removed from every constructor. We can just leave “may be used as the value of an extends clause of a class definition”, which is a plain statement of fact. Can we have consensus for removing this clause? -KG: I guess I should review the history before I ask for consensus. This came up in the context of Temporal where the operations in Temporal do not defer to Symbol.species because they are sort of a bag of classes and it doesn't really make sense to try to subclass one of them and assume you automatically get the “right” behavior because you will get nonsense, if you convert to a different type and convert back, and so Temporal does not provide particular affordances for subclassing. And so Temporal wanted to remove this wording for their constructors. I think this wording should be removed from every constructor. We can just leave “may be used as the value of an extends clause of a class definition”, which is a plain statement of fact. Can we have consensus for removing this clause? 
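KG's point, that `extends` merely works mechanically because `super()` sets up the internal slots, can be seen directly. A sketch (`CountingMap` and `MyBool` are invented names for illustration):

```javascript
// `extends Map` works mechanically: super() allocates the [[MapData]]
// internal slot, so inherited methods like set/get function correctly.
class CountingMap extends Map {
  set(key, value) {
    this.sets = (this.sets ?? 0) + 1; // invented example behavior
    return super.set(key, value);
  }
}
const m = new CountingMap();
m.set("a", 1);
console.log(m.get("a"), m.sets); // 1 1

// The ES5-era alternative did NOT set up internal slots, which is why the
// "designed to be subclassable" note was added in ES6 in the first place:
const fake = Object.create(Map.prototype);
try {
  fake.set("a", 1); // throws: no [[MapData]] slot on the receiver
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// Technically possible for every constructor, but (as discussed) silly:
class MyBool extends Boolean {}
console.log(new MyBool(true) instanceof Boolean); // true
```

This is exactly the distinction the PR preserves: "may be used as the value of an extends clause" is a plain statement of fact, with no recommendation attached.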
-
-JHD: My initial, the tldr here is I think removing this is a strict Improvement because its presence does signal to many people that you should go ahead and subclass these things with, you know, with class extends and go nuts. The follow-on change I would like to see and that if I have the time I will pursue but I'd love to get thoughts from folks in the committee on like, do we even know what subclassable means? Like the species removal proposal defines, like four types of subclassing. The Set methods proposal has been stuck like in which ways should something be subclasible like in which types of things should be easier than others and things like that. So I think that removal is good. I would love to see us as a community agree on what subclassing means and somehow Define it and then notate which built-in Constructors are reasonable to extend versus, which are simply, like, possible to extend. Because I agree `extends Boolean` is silly. There's no use case for it, but `extends Map` has use cases, for example. So I want to put that out there, but either way I support this change, because it's an incremental Improvement.
+JHD: My initial, the tldr here is: I think removing this is a strict improvement, because its presence does signal to many people that you should go ahead and subclass these things with, you know, class extends, and go nuts. The follow-on change I would like to see, and that if I have the time I will pursue, but I'd love to get thoughts from folks in the committee on: do we even know what subclassable means? Like, the species removal proposal defines, like, four types of subclassing. The Set methods proposal has been stuck on questions like: in which ways should something be subclassable, which kinds of extension should be easier than others, and things like that. So I think that removal is good. I would love to see us as a community agree on what subclassing means and somehow define it, and then notate which built-in constructors are reasonable to extend versus which are simply, like, possible to extend. Because I agree `extends Boolean` is silly. There's no use case for it, but `extends Map` has use cases, for example. So I want to put that out there, but either way I support this change, because it's an incremental improvement.

SYG: Yes, strong support for this. Largely aligned with - well, completely aligned with Kevin here. Maybe a little bit different from Jordan on where I want the editorial direction to go, which is: I want to say the strictly true thing in the spec without additional qualification, without saying things like "reasonable" and "designed to be" at all. But for now I think this is completely the correct starting point, to say exactly the thing that is supported, which is: it is supported in the extends clause. So 100% support.

@@ -189,68 +190,72 @@ MM: And by constructor we just mean anything with a construct Behavior.

KG: Anything with construct behavior which isn't `throw TypeError`, yes.

-MM: Okay yeah. I mean I agree. I completely support the goals of this PR. 
+MM: Okay yeah. I mean I agree. I completely support the goals of this PR.

-KG: Allen clarifies in the thread that the reason this was added was because this was not necessarily true in es5, you would not necessarily get the correct behavior - meaning all of the internal slots setup and so on - which is why this wording was added in es6. 
+KG: Allen clarifies in the thread that the reason this was added was because this was not necessarily true in es5: you would not necessarily get the correct behavior - meaning all of the internal slots set up and so on - which is why this wording was added in es6.

-MM: Well, in es5 there was no extends mechanism. 
+MM: Well, in es5 there was no extends mechanism.

KG: Yeah. 
If you just set up the prototype you would not have the internal slots and ES6 gave us a way to have syntax to set up the internal slots.

-MM: Okay. Yeah. I support this. 
+MM: Okay. Yeah. I support this.

-Aki: All right, the queue is empty. 
+Aki: All right, the queue is empty.

KG: Okay, consensus on this pull request then.

### Conclusion/Resolution

-Consensus for tc39#2360. 
-
+Consensus for tc39#2360.

## Intl Locale Info update
+
Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-locale-info/blob/main/README.md)
- [slides](https://docs.google.com/presentation/d/1rrEaInlUFpYJ3djkRfQHpMBzt0C88WuQeFGis8x9UP8/edit#slide=id.p)

-FYT: Hi everyone, is Frank Tang and I'm going to talk about 4 proposals today. Tthe first three are just 10 minutes slots for three of the stage three proposed. So and I just want to keep update to everyone. So you know what's going on right now. And the fourth one, I have a 30-minute slots us for stage three advancement.
+FYT: Hi everyone, this is Frank Tang and I'm going to talk about 4 proposals today. The first three are just 10-minute slots for three of the stage three proposals, just to give an update to everyone, so you know what's going on right now. And for the fourth one, I have a 30-minute slot for stage three advancement.

FYT: The first one I'll talk about is the Intl Locale Info API, at stage 3. So this is not for any stage advancement; it is just to give you an update. The base motivation of the proposal is to expose locale info such as week data, including the first day of the week, the weekend start and end days, and the minimal days in the first week(?); the hour cycle used in the locale; and the measurement system used in the locale - that last part actually got removed. It was all in the original charter, but we removed it.

-System part in. it was advanced in to Stage 1 in September ofl last year in stage 2, in January and stage 3 in April year. They are some changes and discussion during their(?). 
It was advanced to stage 1 in September of last year, stage 2 in January, and stage 3 in April this year. There have been some changes and discussion since then, and for one of them I would like to ask you for consensus. The TG2 discussion concerned `Intl.Locale.prototype.collations`, which is supposed to return an array of all the collations in use for the locale. We did not have language to exclude any of them, but Andreas from Mozilla pointed out that the Collator API clearly states that the "standard" and "search" collations should be excluded there, because they are invoked not through the localization option but through the usage option. He therefore suggested that we remove them and add a note saying they should not be returned. So later I would like to ask you for consensus on that. In the June TG2 meeting other members looked at it and think that is the right thing to do.

FYT: There are also some additional things brought up during stage 3 by Andreas - he did an awesome review - where we may still need to make changes. Some of them are editorial and some are just clarifications. For one, I think we did not clarify in Appendix A the implementation-dependent behavior of the newly added APIs; something should have been added to the appendix and we somehow forgot, so we need to add that. Another is a suggestion to clarify the order of the items in the returned arrays: the current spec says they should be in preferred order, and Andreas pointed out that the first one is certainly the preferred one, but the rest may not be. That is still under discussion, and I will bring it back to TG2 and to you later if we agree on an action. The other thing is that we will ask for clarification on whether the calendar locale extension has an impact on the week info; that is still under discussion. There is also a request to make it clear whether the returned IDs are canonicalized. When I wrote it I assumed that everything returned would be canonicalized, but the review made clear that we need to state that values are canonicalized before being returned - a good point I somehow missed. The last one is very interesting: our API did not account for a condition I was not aware of before stage 3. In some part of the world - not a country, but a part of a province inside a particular country, with perhaps half Christian and half Muslim population - they chose to have the weekend on Friday and Sunday, but not Saturday. That non-contiguous weekend is not representable in our current model, so we are still trying to figure out how to address it; we do not have a conclusion yet. Unfortunately it was not brought up before stage 3, so we still need to look at that.

FYT: Then we have two editorial requests, which you can take a look at, and there are more editorial issues. On implementation: we have an implementation in Chrome under a flag; the flag is not flipped yet, and maybe we can flip it by the end of Q3 - there is a status check you can look at. In Chrome we are pending on more complete test coverage; USA has contributed a lot, and we still need to see whether anything is missing. Mozilla is also working quickly on their implementation, but I believe it is blocked by the issues I mentioned earlier - they are asking for clarification and a decision, so we are still working on that. It is not clear what the status of Safari is; if anyone from Apple knows, it would be nice to tell me. And we really need more help: we have some tests in test262 under a particular feature flag, but more tests are definitely better, and we still need someone to help with polyfills. In particular I would like to thank all the people listed here: Shane, and ZB from Mozilla, did a great stage 3 review; thanks also to Ujjwal and many others.
FYT: Many people in the discussion contributed, and particularly Andreas, who found a lot of very interesting questions while doing his implementation.

FYT: So I have two asks for the committee. The first is retrospective approval for the change to exclude "standard" and "search" from the returned list. The other is just to ask people to help write more tests. So, may I ask, is there any question on the queue about the update? If not, I would like to ask for consensus on the first one.

USA: There's nothing on the queue. And yeah, I think you have consensus.

FYT: Thank you, I appreciate that. So, if anyone has interest in helping work on the tests, please let me know. Otherwise, I will move to the second presentation.

### Conclusion/Resolution

- Consensus for removing “standard” and “search” from Intl.Locale.prototype.collations.

## Intl.DisplayNames

Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/intl-displaynames-v2)
- [slides](https://docs.google.com/presentation/d/1EUJ8fIBcCN784S_Da5FT8Fxgo1_lVM8InbSUjhuvpkU/edit#slide=id.ge36b9e7bc8_0_1)

FYT: This is another proposal that is already in stage 3; I just want to give an update. This is called Intl.DisplayNames version 2. Version 1 has been in stage 4 for a while, and this proposal sits on top of it. The motivation is to enable developers to get translations of more kinds of display names - languages, regions, and scripts are already covered by version 1, and this adds additional ones. We also try to provide a straightforward API for it, because there are workarounds that can get you these names, but they may fail in certain conditions; we want to encourage developers to call this API instead of those workarounds, which sometimes do not work and cause more problems.

FYT: A little historical background: version 1 reached stage 4 in September last year, and version 2 was introduced in August to capture things we could not accomplish in version 1. It advanced to stage 1 in September last year and to stage 2 in January this year. I gave an update in April - we were supposed to ask for advancement then, but there were some last-minute issues, so we postponed - and after addressing them it advanced to stage 3 in May. The features in this proposal: adding a languageDisplay option, which is accepted whenever the type is "language" and takes either "dialect" or "standard"; and adding two additional types, "calendar", to get the name of a calendar, and "dateTimeField", to get the name of a particular date/time field.

FYT: Let me show you some of the features. This is languageDisplay: the left side shows asking for the language in "dialect" mode, which is the default if the option is omitted. For example, in dialect mode "en-GB" under the English locale will return "British English", but with "standard" it will return "English (United Kingdom)". I probably should have chosen a different language code to show this, but I think you can see it from the example. Similarly for calendar: this returns the name of a calendar - for example the Minguo calendar, in English and in Chinese, for that particular locale. Similarly, this slide shows dateTimeField, giving you the field names in Spanish - I do not know how to pronounce those Spanish words - and so on.

FYT: So, what happened during stage 3? We advanced in May, and in Chrome it is developed and checked in under a flag - the "harmony Intl DisplayNames v2" flag - in Chrome 93. I do not think 93 is stable yet; it is still in the beta or dev channel, but once you get Chrome 93 you can try it. V8 is aiming to flip it to shipping mode in Q3, which means it will still take some time - probably Q4 - to roll out to users. Again, this is just a preliminary plan; we have not gotten all the approvals yet and are trying to work out some quality issues, and you can see the status page here.
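The dialect/standard distinction and the two new types from the slides can be sketched as follows (outputs depend on the engine's CLDR data; on engines without v2, `languageDisplay` is ignored and the new types throw, hence the try/catch):

```javascript
// languageDisplay: "dialect" (the default) vs. "standard".
const dialect = new Intl.DisplayNames(["en"], {
  type: "language",
  languageDisplay: "dialect",
});
const standard = new Intl.DisplayNames(["en"], {
  type: "language",
  languageDisplay: "standard",
});
console.log(dialect.of("en-GB"));  // "British English"
console.log(standard.of("en-GB")); // "English (United Kingdom)" with v2

try {
  // The two types added by v2: calendar names and date/time field names.
  console.log(new Intl.DisplayNames(["en"], { type: "calendar" }).of("roc"));
  console.log(new Intl.DisplayNames(["es"], { type: "dateTimeField" }).of("month"));
} catch (e) {
  // v2 is not supported in this engine; the new types raise a RangeError.
}
```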
Mozilla has landed it in 91 - I am not a hundred percent sure how Mozilla works in terms of rollout, but I think that means they have an implementation, possibly behind another flag; you can look at the bug report ID to figure out the details. For Safari, similarly, it would be nice if someone who works on it would tell me their status. We have test262 tests, with a particular feature flag for this too. It would be really helpful if someone could work on adding more tests for this feature, and contributions to MDN and polyfills would also be appreciated. So basically, that is where we are; there are no additional issues yet, and we are still working on the supporting activities. Also, I would like to thank Shane for his leadership and review, Ujjwal as a stage 3 reviewer, and the other members; thanks also to Andreas, who did some work on the Mozilla side and gave us some feedback, which is pretty nice. Any questions about this particular proposal?

USA: The queue is empty, so I suspect not.

FYT: Then my request to the committee, since there are no questions, is just to help with the development of test262 tests and polyfills, and to tell everybody about this API so other people use it - maybe submit some conference talks. Anyone else?

USA: No, but thank you for the presentation.

### Conclusion/Resolution

No questions were raised.

## Extend timezone name option

Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-extend-timezonename)

FYT: So the third proposal, already in stage 3, is called the extend timezone name option…

USA: The queue is still empty. I think you're super clear.

### Conclusion/Resolution

No questions were raised.
## Intl Enumeration API for Stage 3

Presenter: Frank Yung-Fong Tang (FYT)

- [proposal](https://github.com/tc39/proposal-intl-enumeration)
- [slides](https://docs.google.com/presentation/d/1zL3lb4stb4wrfDlOeMsmW5NqjX_TxTWL5pMjTa1qHVw/edit?usp=sharing)

FYT: This fourth proposal is different from the other three: it is asking for stage 3. This is the Intl Enumeration API. Its charter is to list the supported values of options in pre-existing ECMA-402 APIs. The options are already there, but they are not programmatically discoverable by developers, and we are trying to make them discoverable. One is calendar, one is collation, one is currency, one is numbering system, one is time zone, one is unit. For unit, ECMA-402 says you can support this list of units and only this list. For numbering system, it says there is a table: you must support at least these numbering systems, but you may support more. For the others we do not actually have a clear specification in ECMA-402 of exactly which values an implementation must support. So what happened in the past is that people would call a constructor and then inspect the resolved options to see whether the value resolved to the one they intended - a feature test - which is a lot of work and can be pretty hacky. Here is also a list of what Appendix A of ECMA-402 states is implementation-dependent: the set of calendars per locale; the set of supported collations per locale; the set of supported numbering systems per locale - and numbering systems not listed in the table could be supported; the patterns used for formatting per locale; the calculation of the local time zone; and how many time zones are supported. Those are clearly stated in Appendix A to be implementation-dependent. The problem is that there is no programmatic way to easily figure out what an implementation supports. How can we figure it out today? I think there is a polyfill that feature-tests one by one; it is possible to do that, but it is not very straightforward and becomes very clumsy.

FYT: To give a little background, the proposal was originally discussed and motivated by a Temporal issue about which time zones are supported. The Temporal champions felt this should not be part of Temporal, so they asked TG2 whether we could take care of it; TG2 discussed it, and I signed up to champion it. In the meantime there were discussions that we should cover other things, not just time zones. So in June last year, a year ago, it was put together and advanced to stage 1, and in September it advanced to stage 2. One major concern at that time was fingerprinting, but we also believed that unless it advanced to stage 2 it would be hard to get experts from other standards bodies to take a look and give us feedback. So it got to stage 2, with the intention of staying in stage 2 while we gathered more feedback and studied whether this would expose a fingerprinting issue. To be careful: fingerprinting the user and fingerprinting the browser are two different things; the concern here is fingerprinting the user, because of private-identity concerns.
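The shape the enumeration API ended up with - `Intl.supportedValuesOf(key)` for the six option keys listed above - can be sketched like this (guarded, since this is the very feature whose advancement is being requested here):

```javascript
// Enumerate the implementation's supported values for each ECMA-402 option.
if (typeof Intl.supportedValuesOf === "function") {
  for (const key of ["calendar", "collation", "currency",
                     "numberingSystem", "timeZone", "unit"]) {
    const values = Intl.supportedValuesOf(key);
    console.log(`${key}: ${values.length} values, e.g. ${values.slice(0, 3).join(", ")}`);
  }
}
```

This replaces the one-by-one constructor-and-`resolvedOptions()` feature test described above with a single call per key.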
FYT: We gave an update in November last year. Then in late last year or early this year, TG2 adopted additional guidance on what stage 2 and 3 proposals should look like, because of concerns about data size and prior art - a prior-art requirement and a payload-size requirement for stage 3. That guidance came after this proposal had advanced, so we retroactively discussed it in the March TG2 meeting, and I think TG2 agreed that this proposal meets all the criteria for stage 3. Also, in April, Mozilla published their conclusion about the privacy and fingerprinting concern for users, and Apple took a look and agreed with it. At that point we declared the privacy/fingerprinting concern closed; see issue number 3 for the closure. In the May TG2 meeting several members expressed support for stage 3, and those who were not quite sure said they would come back before the May TC39 with their position. But there was some concern with the proposal - I forget exactly what - so I decided to postpone; I gave an update in May and asked for feedback, and I received several pieces of feedback in the discussion. So I have structured the rest of this presentation around how I responded to that stage 3 feedback.

FYT: So, here is an example of what it looks like after we changed it: you will have…
FYT: Another thing we changed - I think also mentioned by Andreas from Mozilla - is the region option for time zones. Originally there was an option to say the region is "US" and get the time zones for that region. But Andreas said: you already have the Intl.Locale API in stage 3, and this API does not need to duplicate that work, and I agree. You only need this API to tell you which time zones are supported; Intl.Locale tells you which time zones are preferred for a particular locale or region, and the intersection tells you which of those are supported. So we removed this redundant option. And, you know, maybe next year Marco Rubio will invent permanent Daylight Saving Time in Florida, who knows - maybe it passes in the U.S., and we get a new time zone. If IANA adds a new time zone ID through some process, there is no easy way to find out unless we have an API to show it, or users have to do a feature test; this is an easier way to figure that out.

FYT: The second action item came from YSV from Mozilla, about shipping the entire payload - the direct quote from the meeting notes is to "add a requirement that anyone who ships this ships the entire payload". During the meeting I was very confused about what that really means, so let me try to address it now. First, I have some critiques of the request. There is no real definition anywhere of what "all", or the entire set, is. As I mentioned earlier, Appendix A already specifies that the supported values are implementation-dependent; there is no definition in ECMA-402 of what "all" means - no list of all the calendars, all the collations, and so on, except for unit. Therefore there is nothing in ECMA-402 we could base such a requirement on. Time zones have the same problem: they are defined not by ECMA-262 or ECMA-402 but by IANA, which may add entries at any time. We never know what will happen; it is not in our hands, so there is no way to say what "all" is. What about CLDR, or UTS 35? They do not define "all" either, because they expect support for more minority languages and countries to be added over time - by design it is an open set, and the next version may contain more. CLDR also currently defines more than ECMA-402 requires: for example, the "roman" numbering system is defined in CLDR, but ECMA-402 does not require implementations to include it. So we cannot simply require shipping all of CLDR. ECMA-402 requires currency display, but not all the currency data needs to be shipped - many currency codes are historical and nobody uses them any more, yet they are defined - and, for example, V8 in Chrome does not ship some of that data. In practice browsers only support subsets, there is no clear delineation, and there are different ISO versions - which one is "all"? It is very difficult to achieve. But anyway: Shane later asked Yulia whether this issue could be considered a general ECMA-402 issue not tied to this particular proposal, and I think she agreed. This is a general issue that can be addressed separately. I think fingerprinting is a legitimate concern, but it is difficult to define what the "entire payload" is; it is a very abstract concept for the reasons above, because there is nothing we can point to that says what "entire" means. So it is very difficult to take action on.

FYT: The third action item I received - quoting directly - was that the use-case section of the proposal is very limited, and the reviewer would like to see the use cases proven out.
It basically describes the motivation. One use case is detecting missing features, which can then be used to trigger the import of a polyfill. Here is a real use case: one of my colleagues is working on changing the Closure Library that Google ships for JS to use the ECMA-402 APIs as much as possible. But because not every browser supports every calendar feature, they want the bootstrapping code to have a very quick way to detect what is supported and what is not. If features are missing, they download a version of Closure with a polyfill that fills in those features; if they are supported in the browser, they download a different version that calls the ECMA-402 APIs directly. That check has to be a very small amount of bootstrapping code, and this API provides a way - and I am using this word very carefully - to fingerprint the browser's support with a hash. It does not fingerprint the user, because every user on the same browser version gets exactly the same result; you cannot use it to detect a particular user, only the browser. The bootstrap code hashes the enumerated values and uses that key to decide which bundle to load: if the hash matches full support, it loads a Closure build that calls the Intl APIs directly; otherwise it loads a general build with the additional polyfill support. So that's one of the use cases: an easy, fast way to detect missing features.

FYT: The other use case is server-side programming. JS is of course not only designed for the client, although the client is probably its biggest usage; you can also run JS on the server, and server-side JS is already very popular. Say a user wants to write a calendar application purely in JavaScript on the server: it receives requests from the user and outputs HTML back to them, and its UI may offer all the calendars or time zones the runtime supports. The server asks V8 - in short, the VM - what calendars and time zones it knows about, then formats the HTML accordingly and returns it to the client. So this is useful for server-side programming.
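The feature-detection and bundle-selection flow FYT describes might be sketched roughly like this. This is a hedged illustration, not code from the proposal: `Intl.supportedValuesOf` is the proposal's API shape (guarded in case it is absent), and the hash set and bundle file names are invented for the example.

```javascript
// Sketch of the bootstrapping use case: hash the enumerated calendar support
// and pick a bundle. FULL_SUPPORT and the bundle names are hypothetical.
function supportedCalendarFingerprint() {
  const calendars = typeof Intl.supportedValuesOf === "function"
    ? Intl.supportedValuesOf("calendar")
    : [];
  // Simple deterministic FNV-1a string hash. Every user on the same browser
  // build gets the same value, so this identifies the browser, not the user.
  let hash = 0x811c9dc5;
  for (const ch of calendars.join(",")) {
    hash ^= ch.codePointAt(0);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Hypothetical: hashes known to correspond to full calendar support.
const FULL_SUPPORT = new Set(["<hash of the full calendar list>"]);
const bundle = FULL_SUPPORT.has(supportedCalendarFingerprint())
  ? "closure-intl-native.js"    // calls ECMA-402 directly
  : "closure-intl-polyfill.js"; // ships polyfilled data
```

The point of the hash is that the bootstrap code stays tiny: one enumeration call and one table lookup, instead of probing each feature individually.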
FYT: The other use case: early on, people said that if the only use case is a picker, why not just put it in HTML? But what if you want to use JavaScript in a non-HTML environment? Say you are programming a WebGL video game: everything is rendered in 3D WebGL with no HTML inside, your character walks into a particular store, and there is a place to change a calendar or time-zone option. Everything on screen is WebGL, so you cannot address it by adding an HTML widget - unless you also want to build a WebGL time-zone picker. So those are three of the use cases, to show that this is needed: a fast way to do missing-feature detection, which can be combined with a bootstrapping process that downloads polyfill code; server-side programming; and client-side programming in environments that do not use HTML, which an HTML picker cannot address. These are all real, non-fictional use cases. So I addressed that feedback: I put it in the README, including a motivation section - a small text that largely repeats what I just described, but collected in one place. That's my response to the three action items resolving the feedback we received in May, which I hope convinces you that this proposal has fulfilled my burden as champion. So let me come back to it: I'm here to ask for stage 3 advancement. The entrance criteria, as I understand them, are complete spec text and reviewer sign-off: the designated reviewers have taken a look, and we also got support from several members in TG2 who expressed support. Are there any blocking positions, or only non-blocking ones?
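The server-side scenario described above might look like the following sketch. `Intl.supportedValuesOf` is the proposal's API shape (guarded in case it is absent), and `renderCalendarHtml` is a hypothetical helper invented for illustration.

```javascript
// Validate a client-requested time zone against what the server VM actually
// supports, falling back to UTC when it does not.
function pickTimeZone(requested) {
  const known = typeof Intl.supportedValuesOf === "function"
    ? Intl.supportedValuesOf("timeZone")
    : [];
  return known.includes(requested) ? requested : "UTC";
}

// Hypothetical rendering helper: format a date in the chosen zone and
// return an HTML fragment for the client.
function renderCalendarHtml(date, timeZone) {
  const formatted = new Intl.DateTimeFormat("en", {
    dateStyle: "full",
    timeStyle: "short",
    timeZone,
  }).format(date);
  return `<time datetime="${date.toISOString()}">${formatted}</time>`;
}
```

Without the enumeration API, the server would have to probe each zone by constructing a formatter and catching the `RangeError`, which is exactly the slow feature-by-feature detection FYT wants to avoid.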
SFC: I just wanted to say that Frank has had very good attention to detail on this proposal. He has been working on it for quite a long time, and he takes everyone's concerns and resolves them in very careful ways. From a quality point of view I think this is a very good proposal, and it fills a void in ECMA-402 that we've had for a long time; Frank already went over the use cases on the previous slide. I really support this proposal. It's taken a long time to get to this point and I'm really excited to see it hopefully advancing to stage 3 today.

USA: Great, and next up there is me. I just wanted to clarify a little on the "ship all data" class of concerns. I reached out to Yulia after the last meeting and we talked a little about this, and I share her concerns. As you mentioned, it's hard to say what "all" means when you say "ship all data," so a better way to put it would be to ship a consistent set of data. The concern was that it's okay to not support certain currencies; it's just not okay to enumerate a different set of currencies than what NumberFormat supports. But as you pointed out, this is not a problem unique to this proposal - Yulia mentioned this at the last meeting as well - it's something very general to ECMA-402. Even before this proposal came along, there were technically two sets of currencies being supported, one by DisplayNames and one by NumberFormat. So this needs to be addressed at the 402 level, but since that would likely happen before this proposal reaches stage 4, some normative change might bleed into this proposal. Given that people generally already believe that the constructors should agree, it's fine for the committee to give consensus on that; it's just that the current spec text might allow an implementation to support different sets of values across the Intl constructors, which shouldn't happen.
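The consistency invariant USA describes can be spot-checked mechanically. This is a sketch, not spec text: it assumes the proposal's `Intl.supportedValuesOf` (guarded in case it is absent) and uses currencies as the example.

```javascript
// Every currency the enumeration API reports should also be accepted by the
// corresponding constructor; anything rejected is the mismatch USA describes.
function inconsistentCurrencies() {
  const listed = typeof Intl.supportedValuesOf === "function"
    ? Intl.supportedValuesOf("currency")
    : [];
  return listed.filter((currency) => {
    try {
      new Intl.NumberFormat("en", { style: "currency", currency });
      return false; // constructor accepts it: consistent
    } catch {
      return true; // enumerated but rejected: inconsistent
    }
  });
}
```

A conforming implementation, under the editorial fix discussed below by SFC and JHD, would always return an empty array here.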
FYT: So, let's say we don't have this proposal. Would the thing you describe still happen?

USA: Yes, it absolutely would. This proposal just exposes this existing issue in ECMA-402, which is why I mentioned that it needs to be addressed in a normative pull request against 402. However, that normative pull request might, and probably will, induce a normative change in this proposal.

FYT: I see, got it.

SFC: Yeah, I agree with everything Ujjwal said. In general, the basic goal we need to achieve is that if there's an Intl object that supports some list of things - say Intl.DateTimeFormat supports a certain set of calendar systems - then that set needs to be the same one the Intl enumeration API returns. That seems obvious, and if it's not already fully in the spec we just need editorial work to make sure it's the case, because that's the expectation developers have, and it also helps this data-consistency issue.

USA: Perfect. As we're on the same page with that, I think that also addresses Yulia's concern.

JHD: Yeah, I think what Shane is saying is the same thing I put on my queue topic, which is that we should be able to rearrange the spec so that anything the enumeration API reports exactly matches what is supported everywhere.
That way, whatever the full data set is, by whatever definition, and however much of it is shipped, at least everything will agree and be coherent. Does that sound like what Shane was saying?

CM: I think I can speak to Mark's concern; I understand his concern fairly well.

FYT: Thank you, thank you. Okay, so I guess we got stage 3.

### Conclusion/Resolution

Stage 3.

## Realms for Stage 3

Presenter: Caridy Patiño (CP)

- [proposal](https://github.com/tc39/proposal-realms)
- [slides](https://docs.google.com/presentation/d/1MgrUnQH25gDVosKnH10n9n9msvrLkdaHI0taQgOWRcs/edit)

CP: Hi folks, we're back with Realms. We have been talking about Realms for the last few meetings, so we'd like to go over it quickly and open the discussion on some of the topics. Everyone who has participated in the last few meetings is probably already aware of the API; it has a very small API footprint. From the last meeting we had three main observations of things to look into. We wanted to get the stage 3 reviews resolved. We also wanted to make sure that the HTML integration spec is ready, specifically the module mechanics; that was one of the contention points during the last meeting, and the general feedback was that implementers were not yet sure what they were going to do in terms of integration. Finally, the last one was a bit more hand-wavy: some of the delegates, specifically I believe Jack and Jordan, raised concerns about the scope of this proposal. So let's go over these three items individually.

CP: In terms of the review, we have received a considerable amount of editorial review and feedback in general. Thanks to everyone who has contributed in some way or another; we believe we have addressed all of the concerns in terms of review.

JHD:
So the first one I added was about membranes - given that membrane libraries already exist, the question of why a built-in membrane isn't part of this.

CP: Yeah, we have also debated the membranes, and I think Mark Miller has done a good job explaining why we don't have a built-in membrane library yet. I don't know if Mark is around -

MM: I'm around, I can jump in to explain. The main thing is that a membrane right now is a pattern. The one thing we universally agree on, across all uses of membranes, is what a fully transparent - or rather, as-transparent-as-possible - membrane looks like. But the purpose of a membrane is to introduce non-transparency. We'd love to have an agreed membrane abstraction mechanism that you could parameterize in a standard way with a distortion, but that's still research: we've got several different membrane libraries with different perspectives on how one parameterizes the distortion, and no clear dominant approach. So what the Realms proposal does instead, with the callable boundary, is provide a universal, adequate underlying mechanism that enforces the separation. It's a safety measure: it enables any membrane to be built on top of it, so it doesn't preclude any particular membrane that preserves separation, and as a mechanism it ensures that any membrane built on top of it can't get separation wrong. It also, fortunately, ensures that any membrane built on top of it can't purposely violate separation - so it cuts both ways. As we've discussed, if we had neither a membrane nor a callable boundary, we could still build membranes as user libraries on top of Realms, but they would have no enforced separation; and that's not an argument for trying to adopt membranes themselves as the separation mechanism at this time.

CP: Yeah, I want to add one thing to what Mark said: the callable boundary actually helps a lot when it comes to building a membrane between two realms. We have been working on this for quite some time, and before the callable boundary was a thing, it was a common mistake when implementing a membrane to have issues with the separation of objects from the other side - especially with errors thrown across the boundary.
So it actually helps a lot to have that separation in place.

JHD: Okay, just one quick follow-up and then I'll move on to the replies. So you're saying there is no larger feature before you get to the points of disagreement Mark described - no hop between callable Realms and full membranes with a decided distortion mechanism?

MM: Before the invention of the callable boundary, there was no universal underlying mechanism beyond simply proxies and WeakMaps - put another way, proxies and WeakMaps were the universal underlying agreed mechanism, and they were the only ones until the invention of callable boundaries. Other than that, the membrane libraries simply followed a particular pattern and then introduced distortions by varying from the pattern; there was nothing universal about how they vary.

JHD: Thank you.

MM: For anybody who's interested: we'd love to have more research on this - on the idea of a membrane abstraction mechanism rather than a reusable pattern, a mechanism with a universal way to parameterize with distortion functions. It's something that feels tantalisingly close; I think we actually could get there, and I'd love to have more minds on this problem.

SYG: I don't know if Justin Ridgewell is on the call - it may be a bit late for the East Coast - so I'll speak for him, but only with my non-expert understanding, since I don't work on AMP. The AMP worker-DOM use case can readily leverage the callable boundary: whatever it needs to do with the virtual DOM for running AMP scripts can be a synchronous thing, which is what they wanted all along, without the need for a full membrane. So there is a concrete use case that doesn't require the membrane pattern but can still use the callable boundary's kind of separation. The point is that this is useful as a building block beyond membranes.

CP: I believe you are right. We studied this extensively, and there are many, many use cases that do not require a membrane.
A membrane is only needed when you want to really preserve identity across the two realms, which is an edge case. In many other use cases you simply evaluate code, get the result back, and send it over serialized - with JSON.stringify, or in whatever shape or form you want to share the information, if you need to share information at all.

JRL: AMP wants the callable boundary. If we don't have it, all we're going to do is implement it in user code, because we don't trust ourselves to use the object-passing form of Realms correctly. Somehow we would leak an object across the graphs, and that would allow the semi-trusted code to access the AMP global. Our reason for using Realms in the first place is to disallow that.

JWK: So, I have a question about callable boundaries. Is a hypergraph membrane still possible? It seems the hypergraph membrane needs to track identity across different realms.

MM: The hypergraph membrane is the thing that Alex Vincent invented, where you have several different subgraphs.
Not just two but several, all separated, with shared identity tracking between all N subgraphs; furthermore, the number of subgraphs being separated can be dynamic. Alex Vincent's hypergraph membrane mechanism did all of that rather elegantly. With the normal membrane mechanism - the simplest two-sided membrane, the way we originally did it - you have a blue-to-yellow WeakMap and a yellow-to-blue WeakMap: you're separating two graphs and mapping corresponding identities. To do a hypergraph instead, what Alex came up with, which works very well, is that you've got yellow-to-shared-record, blue-to-shared-record, and green-to-shared-record WeakMaps, all of which map to a common shared record; the shared record then has a symbol-named property for each of the subgraphs, mapping back to the identity for that subgraph. So you can compactly map from any color to any other color by going from the representative of one color to the shared record, and then from the shared record to the representative of the other color.

CP: I don't know all the details of this, but it sounds very familiar. I've implemented many different membranes over the years and haven't found anything specific that can't be done, and the mechanics are really simple. If you take a look at the implementation that we did on top of this API, it's very straightforward: you need a WeakMap, and the identity of the object you want to share goes through the callable boundary as a function on the other side, and it works - pretty simple. As Mark said, this is just the foundation, the capability to implement any kind of membrane on top of it. We can look into more details, but I'm pretty sure it supports this; I just don't have enough detail about this type of membrane specifically, but it looks a lot like what we do anyway: the marshal shares the identity across all the different sandboxes, and whenever something happens you have to go to the marshal to do the operation there and share it with the other sandbox, or the other realm. So it looks like we're doing essentially the same thing.
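MM's shared-record scheme can be sketched in a few lines. This is an illustrative reconstruction with invented names (`HypergraphIdentityMap`, the color keys), not code from Alex Vincent's actual library.

```javascript
// Per-subgraph WeakMaps point at one shared record; the shared record points
// back at each subgraph's representative via a symbol-named property.
const graphKey = (color) => Symbol.for(`membrane.${color}`);

class HypergraphIdentityMap {
  constructor() {
    this.perGraph = new Map(); // color -> WeakMap(representative -> shared record)
  }
  link(color, representative, shared) {
    if (!this.perGraph.has(color)) this.perGraph.set(color, new WeakMap());
    this.perGraph.get(color).set(representative, shared);
    shared[graphKey(color)] = representative;
  }
  // Map any color's representative to any other color's representative:
  // representative -> shared record -> other representative.
  translate(fromColor, toColor, obj) {
    const shared = this.perGraph.get(fromColor)?.get(obj);
    return shared ? shared[graphKey(toColor)] : undefined;
  }
}
```

With N subgraphs this needs only N WeakMaps rather than N×(N-1) pairwise maps, which is the compactness MM is pointing at.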
MM: It's not as clear to me. The thing with the Alex Vincent representation is that the record you're mapping to represents the shared identity, and it's not within any one of the boundaries - it's a shared thing between all the boundaries.

CP: Well, the record belongs to a realm - it has to be in some realm - and if it has to be in some realm, then it should not be a problem to get to that realm.

JWK: Okay, so I guess the answer is yes. My next question is: if we have to use membranes, what is the debugging experience like?

CM: Yeah, debugging a membrane is hard. The good news is that we at Salesforce are funding some work at Igalia to help improve some of that. But in general, when you're using proxies, the debugging experience is more difficult than with regular objects.

MM: I wasn't aware of the Igalia work on helping to debug this.

CP: In general those are the debugging problems, and again, only if you need to use a membrane - you don't need to use a membrane. This is not different from debugging an iframe today: you have different identities depending on where you are, and tools have to have accommodations to support Realms and allow you to identify which realm you are in, and so on.
JHD: I'm glad you're here, because I was hoping for some clarification. Both Realms and AMP have been around for many years, and I hadn't heard about AMP having any use case related to Realms until the callable-boundary version came out; it's been only lightly mentioned a couple of times, so I'm not super clear. I hear you that the callable boundary solves the problem for you and that AMP wants it. So I'm curious: is it a problem that could not be solved with the previous iteration of the Realms proposal, or would not be solved with membranes? Maybe this is the smaller solution, but do they all solve your problem or not?
JRL: Yes, every Realms proposal that's been presented solves AMP's use case. The difference with the old proposal, where we were allowed to share objects directly across the graphs, is that it's easier to shoot yourself in the foot. What AMP currently has is essentially the exact same thing as a callable boundary, except we're talking across web workers: we serialize the entire graph into JSON and post that across, then deserialize it, and that separation guarantees we can never share an object across the graphs - someone who's unprivileged can never access the AMP global object. With the old Realms proposal, that's not a guarantee. So what I was going to have to implement is essentially a callable boundary where you're not allowed to talk to the other side at all: all you're allowed to do is call my postMessage helper, which serializes and deserializes on the other side, except now it's sync instead of async. We would be implementing the exact same feature as the callable boundary - it would just be in userland.

CP: Yeah, and that's what we conveyed to Jordan: the term we use for that is the integrity-preserving semantics of the callable boundary, since you just cannot share the object.

JHD: So just to clarify again: the current callable-boundary API does not permit any way to get direct object access, and you're saying that's safer because it just does what you want already and handles all of the separation for you - is that accurate?
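The userland "callable boundary" JRL describes can be sketched as follows. This is an illustrative reduction, not AMP's actual implementation: only structured data crosses, and object identity is deliberately destroyed by serialization.

```javascript
// Wrap a privileged function so that callers only ever exchange deep copies:
// no object reference can leak across the boundary in either direction.
function exposeAcrossBoundary(fn) {
  return (...args) => {
    const wireArgs = JSON.parse(JSON.stringify(args)); // deep copy, no shared refs
    const result = fn(...wireArgs);
    return result === undefined ? undefined : JSON.parse(JSON.stringify(result));
  };
}
```

This is the same guarantee AMP gets today from `postMessage` across a worker, except synchronous - which is exactly what the proposal's callable boundary provides natively.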
JRL: Correct. We don't want to share objects. We only need the ability to run other code without giving it access to our globals: the integrity boundary.

JHD: So, just as a brief thought experiment to make sure I have the understanding correct: if the old Realms API had some sort of constructor option that opted you into this mode, such that you didn't have to worry about it, would that achieve the same goal for you?

JRL: Yeah, that would be fine with us.

JHD: Okay. Thank you for clarifying.
GCL: I feel like a lot of the time this conversation about object access turns into this membrane thing and the whole set of privileged-access use cases. Those are clearly very important, a lot of people here care about them, and I don't want to say that we shouldn't have that; I think it's totally reasonable that we have that. But there are also lots of use cases that don't have anything to do with that. Realms are not just a security mechanism in the way they're being used here; they can also be used to bootstrap environments for mocking platforms, to get clean versions of objects, all sorts of things. There are a lot of use cases: in Node we have the `vm` module, on the web there's the iframe obviously, and there's a lot of code that uses these primitives for things that are not the membrane call-separation thing that is being forced in this proposal, and I think that's really unfortunate. Just the fact that this code can't be reconciled into code that works anywhere is unfortunate by itself: you have to use iframe code in the browser and `vm` code in Node. And so I think, like Jordan said, a sort of option you could opt in or out of (I don't think it really matters which) to not have this limitation would be a lot more representative of the use cases that this proposal can serve, because I feel like it's very one-sided right now.
CM: Yeah, just to share: in the previous proposal there was a getter on the Realm instance that gave you access to the global object, and that was removed. There's nothing that prevents adding that in the future, if we feel that's really what is needed. It's just that at the moment we believe it's a footgun and very error-prone. And yes, there is precedent around that: iframes and Node's `vm` module come with the identity discontinuity issue.
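The transparent-membrane pattern referred to throughout this item can be sketched as a toy. This is an editorial addition, heavily simplified: every value read through the wrapper is re-wrapped, so raw references from the other side never escape. Real membrane libraries also cache wrappers in `WeakMap`s so the same target always yields the same proxy, which is exactly the identity-discontinuity concern mentioned here.

```javascript
// Toy membrane (editorial sketch): objects and functions crossing the
// boundary are wrapped in Proxies; primitives pass through untouched.
function makeMembrane(root) {
  const wrap = (value) => {
    const isWrappable =
      (typeof value === 'object' && value !== null) || typeof value === 'function';
    if (!isWrappable) return value; // primitives cross as-is
    return new Proxy(value, {
      // Every property read is re-wrapped, so nested objects stay wrapped.
      get: (target, key) => wrap(Reflect.get(target, key)),
      // Calls run against the unwrapped target; the result is re-wrapped.
      // (Simplification: `this` and arguments are not translated here.)
      apply: (target, thisArg, args) => wrap(Reflect.apply(target, undefined, args)),
    });
  };
  return wrap(root);
}
```

Because wrappers are freshly created on each access, `membrane.obj !== target.obj`: the identity discontinuity that the discussion keeps coming back to, and what the `WeakMap` caching in real libraries exists to smooth over.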
DE: I'm kind of having trouble following some of the discussion about use cases, because I get that there are use cases for shared objects, but there are also many use cases presented for callable-boundary problems, and the existence of one class of use cases doesn't negate the other class. So I can see the argument for adding the shared-object capability in another proposal, but I don't see why this one gets tied up in that. This Realms proposal meets the use cases that the champion group presented as motivation and brought to stage 2. So the current path makes sense to me, even if it doesn't solve everybody's problem.

JHD: So, to be clear, at the time it was brought to stage 2 it solved the shared-object use cases also, so callable-boundary Realms is effectively a brand-new proposal that happens to solve some of the use cases of the proposal that went to stage 2. I think it's not really fair to characterize this as "oh well, these are just two overlapping proposals", because essentially the stage 2 proposal was kind of replaced, very recently, with one that does not solve all of the use cases that the stage 2 proposal solved. The fact that the champion group only has a subset of those use cases doesn't negate that the stage 2 presumption we all had was that all of those use cases would be solved, and that's what my next queue item is about.
MM: Yeah, historically JHD is correct: Realms without the callable boundary certainly enables the callable boundary to be built at the user level. The thing that's solved by building the callable boundary in is enforcing isolation. And if the isolation is optional, then it's not something that needed to be enforced by an inescapable mechanism, in which case building it into the platform is no longer solving a fundamental problem, because the only fundamental difference between providing it at the user level and providing it in the platform is that we can make it non-optional by putting it into the platform. So, once again, historically I think the way we ended up here is that some of the people objecting to the nature of the stage 2 proposal wanted to make the isolation mandatory. So I have trouble characterizing this as "if we want to make it optional, we can add an option in the future". I'm uncomfortable with that, because if we wanted to make it optional, then I agree that we shouldn't introduce the callable boundary in the first place. But what we found was that we were deadlocked on moving it forward, whereas the callable boundary meets the use cases that most of us have in mind, so we would be happy either way, and it satisfies those who want to make the isolation mandatory.

USA: Next up we have Shu, who says that he agrees with Mark and that the primary value-add of built-in isolation is that it's not escapable. Next up is Gus.
GCL: The thing about membranes being a solution for this: while it's technically true, it really bothers me that it's phrased that way, because it doesn't really represent the usage of the feature. If someone's trying to set this up in a way that has nothing to do with whatever security model this champion group has thought up... I understand there's a whole thing there, but it isn't anything I'm familiar with, and it has nothing to do with the reality of how I would use this feature. And now I need to come in and understand this whole model to do something completely unrelated. It just doesn't make sense to me, and I don't appreciate the connection there, because it's just not comparable, in my opinion. I'm sorry if I'm not phrasing this well.
MM: Can I ask why it's not comparable? The default membrane, a membrane in the absence of distortions, is trying to be as transparent as possible. So, internals aside, and assuming that an as-transparent-as-possible membrane library is available, in practice there shouldn't be much observable difference between the callable boundary with a transparent membrane on top and the direct object access that you might prefer. So, given that the membrane library is easily available, what's the difference that you care about between that and direct object access?
CP: Yeah, I want to add that I think Mark is right. The example that we created with the tiny membrane that I mentioned, I call it i-realm instead of iframe, is exactly what you get with an iframe. When you create an iframe, you get the window out of it and you start interacting with it; you're getting objects that have different identities, and if you check identity they do not match. That's exactly what you're going to get if you have a library abstraction that does it for you. Again, this is only if you really need to hold identity across the realm boundary: if you're interested in objects that are in the other realm and you need to hold onto their identity, because you want to share them later on or do something with that identity. If you don't care about identity, like in the case of AMP and many other use cases, you're just fine; you don't need the membrane. Or, if you do need it, it's just a simple abstraction at the user level: a library provides an extension of the Realm that, when you create it, gives you access to a globalThis, and that globalThis happens to be a proxy of the global object from the other side. And it works just fine, just like an iframe; you don't see a difference.

GCL: Yeah, no, I understand that it technically works. But say TC39 had looked at the Symbol polyfills, which use strings with weird prefixes in them, seen that they were doing well, and said "oh well, these work, we don't need to do anything". It would have technically worked, but it's a weird, tacky workaround for something that should just be done a different way. And that's sort of how I'm seeing it. I understand that membranes can technically do this, but it just seems like a very complex, weird workaround for something that should just naturally exist.

MM: We have blocking objections to providing direct object access.

GCL: Yeah. And they object to having an option, too, right?

CP: Apparently, yes.

GCL: Okay, that's alright. I don't understand it, but I'm not going to block this proposal either way. I'm just very perplexed about it.

JHD: I have a clarifying question there, Caridy. If I call another Realm's function and it gives me back a String object, does the object I get back have the internal slots of a String?

CP: Nope, you're getting a proxy, and the proxy does not have the internal slot, right.

JHD: Okay, thank you. As Mark has pointed out, there have been some blocking objections by folks who are represented by people in this room, but who have not debated these objections directly in this room. It's my understanding that this is basically saying the capability (which browser and Node users already have) should not be something we permit in a new API, for "reasons". All the arguments I've heard are very good, compelling arguments for what the default should be. The default should be what Justin and AMP need, which is the safer thing, where you can't escape the abstraction. That's a very compelling argument, but none of those arguments preclude the capability, and there are use cases that require the capability. I specifically have use cases where I need to get objects with internal slots; a proxy is not sufficient for built-ins like that. I could probably write a wrapper that detects an incoming thing with slots and reconstructs a thing with slots on the way out, but that's going to be very difficult and very easy to get wrong. My personal feeling is that something like 90% of the people who want to solve this problem (and that's a ballpark number) will have their use cases met, and will have a much-reduced motivation to advance things, and the people who created this deadlock and forced callable-boundary Realms as an alternative will never have needed to come into this room and argue their points directly; they will only have done it by proxy. That is very frustrating, because it doesn't solely matter what use cases the champion group has: the proposal as it entered stage 2 solved a set of use cases. The champion group may only cover a subset of those, but all of those use cases were met. The stage 2 criteria mean we expect the solution to go into the language; we agreed those are the use cases we want to solve. This approach is effectively magically creating a new proposal that's already at stage 2, somehow, and asking for it to go to stage 3 while simultaneously magically discarding the existing proposal. I'm saying that in the sense that some of those use cases are no longer achievable.

MM: I want to object to Jordan's characterization of the (?).
JHD: And I tried to word it very carefully, because I'm not trying to imply bad faith.

MM: Well, the purpose of the stages is to adjust the proposal in reaction to objections that are raised, and in doing that the proposal changes and the use cases it meets change. I don't think that seeing this as qualitatively different from the normal process of adjusting a proposal in reaction to objections is a fair characterization; I think that is simply what happened here. We can disagree about whether these were the right adjustments, but it is simply adjusting a proposal as it advances through the stages in response to objections, as the stage process was meant to support.

CP: Yeah. And I also want to add that I think that is a mischaracterization as well. I believe Shu has presented Google's concerns, and Yulia has presented concerns; I think they have been represented well, and we have had extended discussions around those in plenary.
JHD: I do not feel like there have been extensive discussions about those viewpoints in plenary. There have been summaries of them and allusions to them, but I have been attending every plenary for a long time now, and I don't feel I have had any opportunity to fully debate the belief that the mere existence of a capability, one that already exists and can never be removed, is a problem.

CP: On the second part of your statement: at this point, we do want to continue working on this proposal. We already have ideas that we want to explore, including maybe the possibility of getting out of the deadlock with respect to access to objects from another realm, as I mentioned before. We need to put out the work, and if people start using this API once we get it out there and provide meaningful feedback that it is not enough for the use cases they have, we have a champion group that carries on. There's nothing wrong with that.

Greg: Can we actually formally figure out whether or not we can move forward with stage 3? I do have a concrete question for Jordan, but we only have 10 minutes, so how do we go about doing that? Is Jordan's frustration about the potential addition negating the stage request, or what?
JHD: As far as whether we have more time, that's something we can ask the chairs and the committee, I think. But, to see if this helps: if I believed that there was a good-faith path to direct object access, meaning there are just some things to work out and then we can pursue it, I wouldn't really have the same concern. I'd be unhappy with the current situation, but I would be content. But I have been given no indication that there will ever be budging of any kind by the folks who are blocking direct object access, especially once the majority of the impetus is gone because this solves most of the use cases.

Greg: I have a fundamental issue with a non-technical argument being something that blocks progress of a proposal, because there's nothing precluding us from unblocking; that's essentially the argument against objects.

JHD: Direct object access is also a non-technical one. So that seems to be the situation we're in.
MM: The argument against direct object access, whether you agree with it or not, is a technical argument. Shu, you're the closest that I know of to a representative in the room of the objection, so please correct me if I'm characterizing it wrongly here. The objection, as I understand it, is that if Realms are provided, people will try to use them as, effectively, an isolation mechanism; call it a security mechanism if you wish. Experience, and I'll agree with this from the Agoric experience, shows that trying to do ad hoc isolation on a realm boundary with direct object access is terribly easy to get wrong. It's incredibly easy to make a mistake such that objects leak. That can be solved by having well-debugged, available libraries, like membranes or like your user-level callable boundaries, but the objection is that people will underestimate the problem of preserving isolation with ad hoc libraries at the boundary. People will get it wrong, and we will then have security bugs as a result of not having had an enforced, reliable isolation.

JHD: So we basically switched from the "only has the footgun/capability" version to an "only lacks the footgun/capability" version. The other option, which I would be more interested in, is something that defaults to the safer, more restrictive one and allows opting in to the wider, more powerful, but easier-to-screw-up version. To me, that would solve all of the use cases presented. I know that without having that version available and regathering the evidence that SYG was talking about, I don't know how to confirm this guess. But my guess is that if we named everything appropriately, whatever that means, we would be able to convey the risk and reduce the footgun likelihood. So that's what I'm interested in seeing.

CP: That's something that we can work on. For now the block is real: implementers, specifically Google and Mozilla, were against allowing the direct access, and we have to work on it from the feedback that we get from users. At this point it is all about what people want to do with these APIs, because we need to get there first.
LEO: Yeah, I also want to respond to Jordan, because it's surprising: no one is saying in bad faith that there is no plan to move this forward after this. I remember we talked about this, where I said I have plans; my team is going to be working on it and exploring much more, and we are already collecting feedback from the community. There is so much to work through here. The Chrome team has stated its position, and that's their decision. In order to work through that, I need to be able to gather data to counter their counter-argument and show the usefulness, and I believe we can gather so much more material with this callable boundary, because otherwise it's a hard blocker for us. There are so many use cases this already makes available, but there's so much more; we rely so much today on the usage of Realms. We are definitely invested in more than just this API; there is much, much more to explore from this. This is what I said in our async discussion, but I also want to make it public here.
SYG: Jordan, if I read you correctly, you were directly addressing Chrome on the good-faith path forward, not the champion group.

JHD: That's right.
and that's why I don't quite understand, that's not really an argument for not allowing the other half of the use case to go through. Even if there were no path open to address your use case. +SYG: Yeah, so that's correct. Let me reply to that. I think currently the objection stands that we don't want direct object access and don't want the option, partly because of the point that Mark has already made, that a big part of the value of taking the engineering effort to build in isolation into the platform is that it's not a scale form, and perhaps that could be overcome with educational naming, naming it something like super unsafe expose direct object access or something like that. I don't know. That is a possibility, but at the same time, there's the other half of the reason. So I have two kinds of counter-arguments to your counter-argument. One, I don't personally find the “get Originals” use case very compelling. We've been through the get Originals thing. This is slightly different, but I find that to be an even more niche use case than the membranes. Despite the complexity and the difficulty of using membranes, Salesforce products and AMP have great reach. So weighing it, it's less clear to me, despite some of the delegates' personal use cases here for wanting direct object access, what the true reach of that side of the expressivity is. And the second counter-argument to your argument is, I agree with what Dan has been saying: yes, the scope of this proposal has been reduced due to your disagreement, but it sounds like because the scope was reduced in such a way that this proposal no longer directly addresses your use case, you would rather not see the other half of the use case go through either. And that's why I don't quite understand - that's not really an argument for not allowing the other half of the use case to go through.
Even if there were no path open to address your use case. -JHD: So maybe another day I'll discuss that. +JHD: So maybe another day I'll discuss that. -USA: Okay. Wait, we're at the top of the time box. Thank you everyone for the discussion. Do we want to continue this at a later time as an extension? +USA: Okay. Wait, we're at the top of the time box. Thank you everyone for the discussion. Do we want to continue this at a later time as an extension? CP: Well, first I would like to ask if there is any objection to stage 3 at this point. -JHD: There needs to be discussed out before we get to that question. +JHD: That needs to be discussed before we get to that question. -SYG: Is there anything that we can work through to we re discussed it? And what are the available slots for continue this discussion? +SYG: Is there anything that we can work through before we re-discuss it? And what are the available slots to continue this discussion? -JHD: I think there needs to be more discussion in plenary before we can get to that question. +JHD: I think there needs to be more discussion in plenary before we can get to that question. USA: Yeah, if you figure that out, I think day three would have a lot of free time.
Queue topics from DE: -New Topic: Disclosure of Salesforce/Igalia collaboration: Detached iframe debugging, Proxy profiling/optimization, Realm spec -New Topic: Describe HTML integration, ask for feedback +New Topic: Disclosure of Salesforce/Igalia collaboration: Detached iframe debugging, Proxy profiling/optimization, Realm spec New Topic: Describe HTML integration, ask for feedback ### Conclusion/Resolution + Continue conversation later in the week diff --git a/meetings/2021-07/july-14.md b/meetings/2021-07/july-14.md index 5a2e4ff0..4f3c5d1b 100644 --- a/meetings/2021-07/july-14.md +++ b/meetings/2021-07/july-14.md @@ -1,4 +1,5 @@ # 14 July, 2021 Meeting Notes + ----- **Remote attendees:** @@ -14,8 +15,8 @@ | Philip Chimento | PFC | Igalia | | Jamie Kyle | JK | Rome | - ## Ergonomic Brand Checks for Stage 4 + Presenter: Jordan Harband (JHD) - [proposal](https://github.com/tc39/proposal-private-fields-in-in/) @@ -32,16 +33,16 @@ WH: I approve of this. JHD: Awesome. Thank you, everybody. ### Conclusion/Resolution -Stage 4 +Stage 4 ## Accessible Object hasOwnProperty update + Presenter: Jamie Kyle (JK) - [proposal](https://github.com/tc39/proposal-accessible-object-hasownproperty) - [slides](https://docs.google.com/presentation/d/1UbbNOjNB6XpMGo1GGwl0b8lVsNoCPPPLBByPYc7i5IY/edit#slide=id.p) - JK: This is just a stage three update with the accessible Object.prototype.hasOwnProperty.call, Otherwise known as Object.hasOwn. A super-fast explainer: hasOwnProperty is not reliably accessible due to things like Object.create(null), so stuff like Object.prototype.hasOwnProperty.call is common, but also requires lots of understanding of what would the concepts all at once for new users. So with that in mind, there's a lot of libraries that popped up like, has and low has that make hasOwnProperty easier to use and they have billions of NPM downloads. Object.hasOwn with an object and a key mirrors hasOwnProperty.call with a key to make it accessible. 
and they are identical and besides one minor flip of the ordering of ToObject and ToPropertyKey steps fixing what was supported for legacy in hasOwnProperty. JK: Status, there is a prepared PR, there is the 262 tests that's already been merged. In terms of applications, it is implemented in V8 behind a flag. SpiderMonkey has implemented, but only in nightly builds, and it seems like WebKit is implementing. It has been some notes in the public issue, tracker, not sure what the state of it is and it's also been shipped into other implementations of JavaScript in Serenity and engine262. There's also been a couple of community contributions. There's Object.hasOwn on npm, it's also shipped in core-js. So it's inside of the updated version of Babel polyfill. I also implemented this codemod that helps people migrate from the many different ways, the libraries that are used today and how they can refactor, I've been getting some community feedback from this channel. That was very successful. And yeah, in terms of the feedback that we received, there's a github issue tracking it. But no new problems that would block the proposal have come up, mostly just feedback on what has already been addressed. Overall there’s been a lot of excitement. People have been using the polyfill successfully and lots of people are on the latest version of coreJS shims. just check some dependabot updates and it seems to be working for people. So, in terms of the stage 4 requirements, the pr is ready, tests are ready. It's implemented in two browsers but feature flagged so still waiting on that for a check mark and we're getting more polyfills and the plan is to seek the stage for pending feedback from browsers. And before any questions, I just want to thank everyone who has been involved. This is my First TC39 proposal. So really thank you to people who are very helpful. So thank you, everyone. @@ -51,10 +52,11 @@ SYG: V8 in Chrome has actually already shipped it. 
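JK's explainer above can be sketched concretely. This is an illustrative snippet, not from the proposal itself; the guarded fallback on the last line is only there so the sketch also runs on engines that haven't shipped `Object.hasOwn` yet.

```javascript
// Why Object.prototype.hasOwnProperty.call is common: null-prototype
// objects don't inherit hasOwnProperty at all.
const dict = Object.create(null);
dict.x = 1;

// dict.hasOwnProperty("x") would throw a TypeError, so code does this:
const has = Object.prototype.hasOwnProperty.call(dict, "x"); // true

// Object.hasOwn(dict, "x") mirrors the call above with friendlier
// ergonomics. The fallback branch is a sketch for engines without it.
const hasOwn = Object.hasOwn
  ? Object.hasOwn(dict, "x")
  : Object.prototype.hasOwnProperty.call(dict, "x");
```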
It's just riding the trains r BT: Any other questions or comments? `[silence]` All right, I guess we're all happy with the progress. ### Conclusion/Resolution -No changes sought or made +No changes sought or made ## Import assertions update + Presenter: Dan Clark (DDC) - [proposal](https://github.com/tc39/proposal-import-assertions/) DDC: This proposal allows information aside from a module specifier to be passed in module imports via this new assert syntax, and the purpose of these asserts is not really to affect the host interpretation of the module, but just for the host to decide whether or not to fail the import via some additional checks. The flagship use case for this is with JSON modules, where hosts like the web might import resources externally but don't have control over the contents of the resource, and they don't want any surprises where they think they're importing JSON content but get JS, where that's sort of a privilege escalation. And so we want to allow them to assert the type of a module so that it doesn't change surprisingly. -DDC: and one question that has come up with this is what to do with unsupported module types, like a module type that the host doesn't know what to do with, like, note the typo ("jsonn") in this example. Hosts have free reign to decide what to do with this. Whether it's a fail, it to ignore it or something else. HTML, the web is always going to fail the module graph if there's an unknown type of solution present, basically, for security reasons, around aforementioned like Escalation of privilege type issues. And so there was a question of whether we should standardize that behavior of failing on unrecognized module types, because that would drive further alignment among hosts. For example, it might be nice if a typo, like the one in this example, would always fail rather than, like, being ignored in some hosts and not others.
And so one suggestion of how this might be done is like we could just have hosts provide a static list of the types that they support and ecmascript on the ecmascript side will enforce failure if there's some type assertion present that isn't in like this list of types provided by the host that is supports. And so, this seems to work pretty well for the web because like the web is going to have a static list of types that like we might add to it over the time over time via spec changes but like there's no loader hooks where types can be added dynamically. So it seems pretty straightforward to just ban unknown types. However, the problem with this that came up last time, this was discussed, and during an intermediate SES call, is that this type of (?) restrictions are problematic for hosts like Node.js, for example that have lie loader hooks where a Node author can come in and Define, new module types, and define transformations, between module types. So the list of types supported by the host can change at runtime, kind of arbitrarily in such an environment. It's hard to really say, what an unrecognized type even means because like host might support some default set of types, but at the point where user authored JavaScript can change that. It's hard to have such a restriction on what types are supported and what aren't is kind of hard to be it be a limitation on what those hopes were capable of. And there were concerns about that limitation. And so it's not clear to me. I don't really see a path forward with introducing a restriction like this without breaking these kind of scenarios for these hosts. My preference is kind of just leave the proposal as it stands which is the hosts are just up to able to do whatever they like with module types that they don't support. 
There are some alternatives we could consider, which is like maybe we could try something in prose, that could be a strong enough statement to be useful, but doesn't force problematic restrictions on hosts like node. I've seen other suggestions in the thread along the lines of like environment specific types, where an import would have an additional key that specifies the environment, and there's a set of types that goes along with environment, but I feel like environment specific types kind of gets us further from that goal of having having code that works in multiple environments, to the extent possible, which I think was one of the original goals of introducing such a restriction, like this. And maybe there are - maybe others have ideas for other ways to restrict behavior of unknown types without placing undue limitations on environments with dynamic types of module type systems. But I don't really have anything to suggest there, so leaves me wanting to eventually ask for consensus that we leave, the proposal just as is currently but I suspect that like there may be concerns with that. +DDC: One question that has come up with this is what to do with unsupported module types - a module type that the host doesn't know what to do with; note the typo ("jsonn") in this example. Hosts have free rein to decide what to do with this, whether it's to fail it, to ignore it, or something else. HTML, the web, is always going to fail the module graph if there's an unknown type assertion present, basically for security reasons, around the aforementioned escalation-of-privilege type issues. And so there was a question of whether we should standardize that behavior of failing on unrecognized module types, because that would drive further alignment among hosts. For example, it might be nice if a typo, like the one in this example, would always fail rather than being ignored in some hosts and not others.
And so one suggestion of how this might be done is we could just have hosts provide a static list of the types that they support, and on the ECMAScript side we will enforce failure if there's some type assertion present that isn't in this list of types the host supports. And so, this seems to work pretty well for the web, because the web is going to have a static list of types that we might add to over time via spec changes, but there are no loader hooks where types can be added dynamically. So it seems pretty straightforward to just ban unknown types. However, the problem with this that came up the last time this was discussed, and during an intermediate SES call, is that this type of restriction is problematic for hosts like Node.js, for example, that have loader hooks where a Node author can come in and define new module types, and define transformations between module types. So the list of types supported by the host can change at runtime, kind of arbitrarily, in such an environment. It's hard to really say what an unrecognized type even means, because the host might support some default set of types, but user-authored JavaScript can change that. Such a restriction on what types are supported and what aren't would be a limitation on what those hosts are capable of. And there were concerns about that limitation. And so it's not clear to me - I don't really see a path forward with introducing a restriction like this without breaking these kinds of scenarios for these hosts. My preference is to just leave the proposal as it stands, which is that hosts are able to do whatever they like with module types that they don't support.
There are some alternatives we could consider, like maybe we could try something in prose that could be a strong enough statement to be useful but doesn't force problematic restrictions on hosts like Node. I've seen other suggestions in the thread along the lines of environment-specific types, where an import would have an additional key that specifies the environment, and there's a set of types that goes along with the environment, but I feel like environment-specific types get us further from the goal of having code that works in multiple environments to the extent possible, which I think was one of the original goals of introducing a restriction like this. And maybe others have ideas for other ways to restrict behavior of unknown types without placing undue limitations on environments with dynamic module type systems. But I don't really have anything to suggest there, so that leaves me wanting to eventually ask for consensus that we leave the proposal as is, but I suspect that there may be concerns with that. -DDC: So I think I'd like to go to the queue at this point and get thoughts. Like are there other ideas for having some kind of useful limitation here or is this something that we're okay with? Just going to dropping after learning about these concerns from other hosts. +DDC: So I think I'd like to go to the queue at this point and get thoughts. Are there other ideas for having some kind of useful limitation here, or is this something that we're okay with just dropping after learning about these concerns from other hosts?
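The "static supported-type list" idea DDC describes could be sketched as host-side logic. Everything here is hypothetical illustration - the set contents, the function name, and the error shape are made up, not part of the proposal or any host spec:

```javascript
// Hypothetical sketch of a host enforcing a static supported-type list for
// import assertions. Names and the list contents are illustrative only.
const HOST_SUPPORTED_TYPES = new Set(["json"]); // e.g. a web-style static list

function checkTypeAssertion(type) {
  // A host with a static list fails the module graph on unknown types, so a
  // typo like "jsonn" errors instead of being silently ignored.
  if (!HOST_SUPPORTED_TYPES.has(type)) {
    throw new TypeError(`Unknown module type assertion: "${type}"`);
  }
}
```

DDC's concern is that a Node-style host with loader hooks would have to mutate such a list at runtime, at which point "unknown type" is no longer a static notion.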
GCL: So from, from the requirements that node has, as long as the spec doesn't say something like an implementation must declaratively know what types it supports, you know, something like where you're like putting it at the limitation on the specific way that the implementation determining whether or not it understands what a type is, you could have something that says the host should throw. I don't know exactly what the text for that would look like, but I don't think this is inherently like something that node - like I think it's reasonable to say that within any like VM context you could add new types but also unknown types could still throw. I think we could get to text that does that, but I'm also fine with leaving the proposal as if I just wanted to mention that. @@ -125,10 +127,11 @@ JHD: So My personal intuition here, and obviously I could be wrong, is that if a BT: All right, the queue is empty. ### Conclusion/Resolution -No changes sought or made, general agreement on the status quo +No changes sought or made, general agreement on the status quo ## Decorators update + Presenter: Kristen Hewell Garrett (KHG) - [proposal](https://github.com/tc39/proposal-decorators) @@ -168,7 +171,7 @@ DDC: Yeah, I think it needs to be (?). For every symbol that gets defined on the DDC: So moving on, next up are decorator modifiers. So like I said before, modifiers are the ability to prepend a keyword to the Decorator itself in order to give an additional capability. And this was really the way that we figured out how to solve a common use case, which is the ability to add initializers to class elements and classes themselves. Actually initializers be at in a keyword adds the ability to add initializers to the elements and those initializers run for any. - I'm sorry to interrupted, I think Mark had a question about previous slide. And since you're taking questions in the middle, I might just do the queue now. I know Jordans on the queue as well. 
+I'm sorry to interrupted, I think Mark had a question about previous slide. And since you're taking questions in the middle, I might just do the queue now. I know Jordans on the queue as well. DDC: That's fine. I'm okay with that answering questions. @@ -206,7 +209,7 @@ IID: all right, so there's a common source of bugs in engines where we add the a DDC: I think there are places where user code would execute that it previously was not the case. So, for instance, In the bound decorator case, Let's say you were doing this to a class field the class field would be fully defined and then at defined on the instance and then the initializers would run immediately after that, which gives the user, the ability to do things for instance, like make the field change to readable, or rather, not writable or not configurable, stuff like that which they would not be able to do previously. So I think that would be a new place. Does that sound correct? -IID: That seems plausible. I'd have to look at a little more closely. Would it be possible for the proposal to sort of figure out what the list of places, where new things are happening, just so that it's possible for engines to - this isn't a pressing concern because it's not going to be particularly relevant until we implement. But yeah I think it's an important thing to think about. +IID: That seems plausible. I'd have to look at a little more closely. Would it be possible for the proposal to sort of figure out what the list of places, where new things are happening, just so that it's possible for engines to - this isn't a pressing concern because it's not going to be particularly relevant until we implement. But yeah I think it's an important thing to think about. DDC: Absolutely. It is I guess, implicitly there in the spec we can also add an explicit list and I also think it is - actually, if we have static blocks, I don't know if - no because static blocks only run during the class. Okay. Yeah. The short answer is yes. 
Yes, we can. @@ -273,12 +276,11 @@ DDC: Can everybody who's saying that they're happy to review comments on Github? KG: It will be in the notes. ### Conclusion/Resolution -Stage 3 reviewers: Richard Gibson, Shu-yu Guo, Jordan Harband, Leo Balter - - +Stage 3 reviewers: Richard Gibson, Shu-yu Guo, Jordan Harband, Leo Balter ## Array find-from-last + Presenter: Wenlu Wang (KWL) - [proposal](https://github.com/tc39/proposal-array-find-from-last) @@ -330,19 +332,16 @@ MM: congratulations. BT: Yeah, this is awesome. Excellent. - ### Conclusion/Resolution -Proposal achieves stage 3 - - +Proposal achieves stage 3 ## Guidance for nested namespaces + Presenter: Philip Chimento (PFC) - [slides](https://ptomato.github.io/talks/tc39-2021-07/index.html) - PFC: This is a short last-minute agenda item that SYG suggested that I add. Coincidentally enough from the context of the previous discussion, this is a request for plenary to give guidance and set a precedent for the situation that we have in a proposal, so that future proposals will be consistent with it. Namespace objects, I think nobody disagrees that they should start with a capital letter. We have Math with a capital M since 1995 probably; and Temporal with a T. In the plenary about a year ago we decided that namespace objects should have a @@toStringTag property at least for top level namespace objects, which are the only namespace objects that we have so far that I'm aware of. The Temporal proposal is going to add a nested namespace object, `Temporal.now`, which until now has been spelled with a lowercase n probably because nobody actually thought about it, and it started out life as a function. So we got a request to change this to a capital N. And you know, this also raises the question, should it have a @@toStringTag property and if so, what should that be? Should it be `"now"` or should it be `"Temporal.now"`? 
It seems like this is something that we should provide explicit guidance about so that we don't make an ad hoc decision that's done differently by different proposals. It seems from the thread that was started, that people think in general that nested namespaces should be capitalized. My proposal here that I'm going to ask for a consensus on is, is that, plus having a @@toStringTag property equal to the fully qualified name. So, in the case of `Temporal.Now`, it would be Now with a capital N, and the @@toStringTag property, would have a value of `"Temporal.Now"`. So, after whatever discussion we have, I'd like to ask for consensus on a guidance and consensus on making this change in any current proposals. Temporal is the only one that I'm aware of that is affected by this, but there may be others. So discuss away. MM: I just have a clarifying question first. Last I looked it was just simple. What is currently in the namespace? @@ -412,19 +411,17 @@ MM: I'm glad you brought up the C++ example because that really helps understand SYG: The concrete definition that I put towards this, as a refinement of KG's, is that you have an object, that you create, that has a collection of properties, that is not a constructor, where the identity of this namespace object does not matter for the semantics of anything. [bot mangled] that it has this concept exists at the top level. The salient bit to me, the namespace objects that we have, it's not that they're top level, but the thing I just said, and that's why I feel they already are nestable, that it's not a stretch to nest them. We already have this concept that we treat these objects differently. 
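PFC's proposed convention can be shown with a stand-in object (this is not the real Temporal implementation, just a sketch of the naming and `@@toStringTag` guidance being asked for):

```javascript
// Stand-in sketch: nested namespace capitalized, @@toStringTag set to the
// fully qualified name. Not the real Temporal object.
const Temporal = { Now: {} };
Object.defineProperty(Temporal.Now, Symbol.toStringTag, {
  value: "Temporal.Now",
  writable: false,
  enumerable: false,
  configurable: true,
});

// Object.prototype.toString picks up the fully qualified tag:
const tag = Object.prototype.toString.call(Temporal.Now); // "[object Temporal.Now]"
```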
### Conclusion/Resolution -No conclusion, will revisit later this meeting if there is time - - +No conclusion, will revisit later this meeting if there is time ## Restricting callables to only be able to return normal and throw completions + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/ecma262/pull/2448) - [slides](https://docs.google.com/presentation/d/1BYX6iJqYJSNL0pR-De074hhQceXqNzGHTVyS9UesGZQ/edit?usp=sharing) - -SYG: when doing a review of the resizable buffers proposal, Mike Pennisi noticed that we don't really strictly restrict the completion types that can be returned from host hooks. So as a quick recap of the completion types that we have. The completion records have this type field that describes how control should continue. If the type is normally it's just a value. If the type is throw. We start unwinding stack as with exceptions, if it's break, we're breaking out of the current Loop, with continue we're continuing to the next iteration of the loop, if it's return. We're doing return. We use completions to describe control. I think this is just true. They should only be able to return normal, or throw a complete It's like they should never be able to break you out of the loop. They should never return you from the call site. Basically, it's they should alter control. control they should be, they should act like functions. Does anyone have concerned with this? This is not normatively said right now, I propose that we add a normative restriction but also close tooks must return either normally, or with throw completion. +SYG: when doing a review of the resizable buffers proposal, Mike Pennisi noticed that we don't really strictly restrict the completion types that can be returned from host hooks. So as a quick recap of the completion types that we have. The completion records have this type field that describes how control should continue. If the type is normally it's just a value. If the type is throw. 
We start unwinding the stack, as with exceptions; if it's break, we're breaking out of the current loop; with continue, we're continuing to the next iteration of the loop; if it's return, we're doing a return. We use completions to describe control. I think this is just true: they should only be able to return normal or throw completions. Like, they should never be able to break you out of the loop. They should never return you from the call site. Basically, they shouldn't alter control; they should act like functions. Does anyone have concerns with this? This is not normatively said right now; I propose that we add a normative restriction that host hooks must return either normally or with a throw completion. MM: I enthusiastically support this. @@ -450,16 +447,15@@ SYG: Right. So until somebody proposes call/cc or delimited stuff, we're good to WH: There are two things going on here. One is you want host callables to not be able to do return, break, or continue. That is a normative spec change. Once you've done that, you can then also write the invariant. But until you actually specify that host things cannot do this, you do not have an invariant. -SYG: I guess it depends on if you think the invariant is - you can have an invariant that is true of everything within ecma 262 and maybe also 402, if that's what you mean by invariant. We can't have the environment but if what you mean by invariant is both Ecma 402 and all upstream specs, then you are correct. Then we cannot have it as an invariant until the host also has this restriction. +SYG: I guess it depends on what you think the invariant is - you can have an invariant that is true of everything within Ecma 262 and maybe also 402, if that's what you mean by invariant. But if what you mean by invariant is both Ecma 402 and all upstream specs, then you are correct - we cannot have it as an invariant until the host also has this restriction.
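The restriction SYG proposes can be modeled with a toy predicate. To be clear, completion records are spec-internal artifacts, not JavaScript values; the object shapes and function name below are purely illustrative:

```javascript
// Toy model of spec completion records as plain objects (illustrative only;
// real completion records are internal to the specification).
// The proposed restriction: host hooks may only complete normally or throw -
// never break, continue, or return, which would alter the caller's control
// flow.
function isValidHostHookCompletion(completion) {
  return completion.type === "normal" || completion.type === "throw";
}

isValidHostHookCompletion({ type: "normal", value: 42 }); // true
isValidHostHookCompletion({ type: "throw", value: new TypeError() }); // true
isValidHostHookCompletion({ type: "break" }); // false - would escape a loop
```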
BT: Queue is empty. SYG: Let the record show that there's consensus to adopt these two normative PRs. ### Conclusion/Resolution -Consensus on both PRs - +Consensus on both PRs ## Guidance for nested namespaces again @@ -494,13 +490,12 @@ MM, WH: Yeah, I'm fine with that as well. BT: Are there any objections using the fully qualified name in the to string tag? `[silence]` Sounds like you can go forward with that. ### Conclusion/Resolution + Consensus to use fully qualified name in `@@toStringTag` No outcome on capitalization, we may or may not revisit at this meeting, status quo holds unless revisited - - - ## Renaming Strawperson to Concept or something better + Presenter: Hemanth HM (HHM) - [slides](https://docs.google.com/presentation/d/11PBKeQOGVj3r3F9xBJIKpgftfyeW5lGHHAJrI7Misgc/edit?usp=sharing) @@ -575,7 +570,7 @@ SYG: I want this to be narrowly about removing the name column from the process AKI: Okay, so I'm seeing the strongest feelings about just getting rid of the column. There are some people who are unconvinced. There's some people who are indifferent. Those of you who are unconvinced, do you think that we should not get rid of that column and instead should spend some time bikeshedding what we call things, or - I would like to be done with this actually, I would like to know what people who are unconvinced, how strongly you feel. `[silence]` I think we have consensus to remove the column, that is what it sounds like to me. No one has spoken up to stop that. I will give everybody one brief opportunity to speak up and otherwise let's just get rid of it and move on and we can all use whatever phrasing we want when we're educating people because use the language that matches your audience. -LEO: I am sorry. I just got late to this topic because it was putting my kid to sleep. I totally just saw the slides already mentioned part of my point of view. 
I'm getting late to this train, just I have like there are many problematics with Straw Men, straw person or anything that derives from these terms, like my biggest pet peeve is any wording that derives from that. And also like the original one comes in from like very problematic . I just want to mention like for someone who does have English as a second language even like Straw Men was problematic so many perspectives, not only like by, there's one, there is a most of anyone but like, just a perspective, it doesn't mean anything else, like, for technical for a technical naming - In concept was actually like, dictionary based naming for that, but there's another discussion here that see on removing that column. Sorry. I just wanted to give this perspective as like, if I see a column, if there is anything that we call for stage zero, I'd rather have it with something that it's easier for me to translate. And it's also like legit to what it means. That's that's like the point of view why I support this. There are many other problematics that I also support this change as well but I'm just giving a perspective that that I don't believe everyone shares the same point view and I hope you understand that Thank you Leo for that perspective. +LEO: I am sorry. I just got late to this topic because it was putting my kid to sleep. I totally just saw the slides already mentioned part of my point of view. I'm getting late to this train, just I have like there are many problematics with Straw Men, straw person or anything that derives from these terms, like my biggest pet peeve is any wording that derives from that. And also like the original one comes in from like very problematic . 
I just want to mention like for someone who does have English as a second language even like Straw Men was problematic so many perspectives, not only like by, there's one, there is a most of anyone but like, just a perspective, it doesn't mean anything else, like, for technical for a technical naming - In concept was actually like, dictionary based naming for that, but there's another discussion here that see on removing that column. Sorry. I just wanted to give this perspective as like, if I see a column, if there is anything that we call for stage zero, I'd rather have it with something that it's easier for me to translate. And it's also like legit to what it means. That's that's like the point of view why I support this. There are many other problematics that I also support this change as well but I'm just giving a perspective that that I don't believe everyone shares the same point view and I hope you understand that Thank you Leo for that perspective. SFC: Yeah, I think the mental model is useful, but now that I've thought through this a little more, I think the column called "Acceptance Signifies" is actually more useful than like the single word stage names. That already forms a very good mental model, because, as others have said, trying to use a single word for this is problematic; there are lots of issues with that. So I'll withdraw my negative vote, and move it to a weak positive (for removing the single-word names altogether). @@ -584,12 +579,11 @@ AKI: I think we just go ahead and remove the column and if we decide we want to HHM: Thank you, everybody. 
### Conclusion/Resolution -Remove "name" column from process document - - +Remove "name" column from process document ## ArrayBuffer to/from Base64 + Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/bakkot/proposal-arraybuffer-base64) @@ -605,7 +599,7 @@ KG: Should we also support hex, the other common method of encoding arbitrary, b KG: If we are doing base64, of course there are different variants of base64. There is the URL safe alphabet rather than the default alphabet. I think that we should default to the regular alphabet and provide an options bag option to pick the base64url alphabet instead. And a bunch of others. Should we deal with shared array buffers? Probably. How should we handle padding? This proved to be unexpectedly controversial, so I will come back to it. Should we support just doing a part of the array buffer or a part of the base64 string? I think no. Should we support taking a base64 string and writing it to an existing array buffer? Again I think no. The last two are easy enough to do in user land. They might incur a copy but a copy is pretty fast so I'm not going to worry about it. -KG: Now padding is controversial. The RFC for base64 does not say that decoders are required to verify that the string that they are decoding is correctly padded, it gives them the option of doing so. Almost all base 64 decoders do not enforce that the string that they are given is correctly padded. Note here that I'm speaking of both kinds of padding, the equals signs on the end and the additional bits that might be in the last character. If you don't know what those are, don't worry about it. Just for those who are aware, I want to emphasize I'm talking about both kinds of padding. The fact that decoders don't typically verify means that you end up in the situation where base64 is not actually a canonical encoding. I think that this surprises many people, it surprised me when I learned about it. 
I have a nice collection of screenshots of it surprising other people. And because people are not aware of this, it is very easy to write code which is subtly incorrect, possibly in a way that causes a security vulnerability, that relies on the assumption that it is canonical. For example, you might be checking membership in a list of revoked keys by comparing the base64 encoding of some values and that simply does not work if your decoding is not canonical, meaning to say, if your decoding does not enforce that padding is correct and reject strings that are incorrectly padded. So it is my opinion that we should verify padding by default and have an option that allows you to not verify padding. However, there's disagreement about this point. I don't want to fight that out before stage 1, but do want to fight that out before stage 2. So I also would be interested in hearing opinions on that topic, if people think that the proposal as a whole is reasonable, so that I have something to be going with towards advancing this in the future. So let's go to the queue. +KG: Now padding is controversial. The RFC for base64 does not say that decoders are required to verify that the string that they are decoding is correctly padded, it gives them the option of doing so. Almost all base 64 decoders do not enforce that the string that they are given is correctly padded. Note here that I'm speaking of both kinds of padding, the equals signs on the end and the additional bits that might be in the last character. If you don't know what those are, don't worry about it. Just for those who are aware, I want to emphasize I'm talking about both kinds of padding. The fact that decoders don't typically verify means that you end up in the situation where base64 is not actually a canonical encoding. I think that this surprises many people, it surprised me when I learned about it. I have a nice collection of screenshots of it surprising other people. 
And because people are not aware of this, it is very easy to write code which is subtly incorrect, possibly in a way that causes a security vulnerability, that relies on the assumption that it is canonical. For example, you might be checking membership in a list of revoked keys by comparing the base64 encoding of some values and that simply does not work if your decoding is not canonical, meaning to say, if your decoding does not enforce that padding is correct and reject strings that are incorrectly padded. So it is my opinion that we should verify padding by default and have an option that allows you to not verify padding. However, there's disagreement about this point. I don't want to fight that out before stage 1, but do want to fight that out before stage 2. So I also would be interested in hearing opinions on that topic, if people think that the proposal as a whole is reasonable, so that I have something to be going with towards advancing this in the future. So let's go to the queue. WH: What do you mean by canonical? @@ -619,7 +613,7 @@ WH: Okay. GCL: Yeah, I love this proposal. I think it's great. Something I'd like to express: I noticed for one thing that utf-8 is not mentioned at all here and I assume it is not an accident. That's not mentioned here but I feel like this is something that should be in scope for a proposal like this. -KG: I pretty strongly disagree. This proposal is about the serialization and deserialization of arbitrary binary data. It has nothing at all to do with text and utf-8 is strictly a way of encoding text. It's not particularly related to binary data. +KG: I pretty strongly disagree. This proposal is about the serialization and deserialization of arbitrary binary data. It has nothing at all to do with text and utf-8 is strictly a way of encoding text. It's not particularly related to binary data. 
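KG's canonicality point is easy to demonstrate in user land. A sketch using Node's `Buffer`, one of the typical forgiving decoders he describes (the strings here are illustrative, not from the proposal):

```javascript
// "QQ==" is the canonical base64 encoding of the single byte 0x41 ("A").
// "QR==" differs only in the overflow bits of the final character; a
// forgiving decoder silently discards those bits, so both decode identically.
const a = Buffer.from("QQ==", "base64");
const b = Buffer.from("QR==", "base64");
console.log(a.toString(), b.toString()); // A A
console.log(a.equals(b));                // true

// Padding is not enforced either: the "==" can be dropped entirely.
console.log(Buffer.from("QQ", "base64").toString()); // A

// Re-encoding always produces the canonical form, which is why comparing
// base64 strings (rather than the decoded bytes) is unsafe.
console.log(b.toString("base64")); // QQ==
```

This is exactly the trap described above: `"QQ=="` and `"QR=="` are different strings naming the same bytes, so equality of encodings does not follow from equality of data unless the decoder rejects non-canonical input.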
GCL: I think maybe utf-8 was a poor way to say just like raw strings because I don't think we need to enforce the like well-formedness of Unicode data, but besides hex and base 64. I feel like that would be a very useful thing. That's a thing I run into all the time at least and I'm sure other people do. @@ -717,7 +711,7 @@ JWK: Okay, I'm speaking for septs. They think they prefer the Node style. It use KG: my intention is for this proposal to cover base64 and hex, those two and no others. But not to rule out others in the future just for those to be the scope of this particular proposal. -WH: I just re-read the spec. The forgiving spec removes whitespace from the string before parsing. On GitHub KG wrote that the only differences between the forgiving and the strict versions are the padding and the overflow bits. So does this mean that the strict version will also ignore whitespace? +WH: I just re-read the spec. The forgiving spec removes whitespace from the string before parsing. On GitHub KG wrote that the only differences between the forgiving and the strict versions are the padding and the overflow bits. So does this mean that the strict version will also ignore whitespace? KG: My comment on GitHub was mistaken. I had missed the white space difference, @@ -750,16 +744,19 @@ AKI: if you think will need to be addressed it's also a great reminder for every KG: So with Peter's reservations noted, can we ask for stage 1? WH: I support stage 1. + ### Conclusion/Resolution -* consensus on Stage 1 with stated reservations from PHE + +- consensus on Stage 1 with stated reservations from PHE ## Module fragments + Presenter: Daniel Ehrenberg (DE) - [proposal](https://github.com/tc39-transfer/proposal-module-fragments) - [slides](https://docs.google.com/presentation/d/1t5i4bpQ1-Dh7-PaRDgkaZUjxeI5P7YyPsX_1Gy1RMEY/edit#slide=id.p) -DE: So I wanted to present on module fragments. 
We talked about this a few months ago and since the last we discussed it based on feedback especially from Gus and issue. Number five, I've made some changes to the proposal and I wanted to discuss those. So for a little review, module fragments are inline JavaScript modules, in another module, the idea is that they are named so that they can be targeted by either import statements, this makes them different from module fragments, which are anonymous and can only be used in Dynamic import and things that take module specifiers as a runtime value. Whereas module fragments exist as kind of keys in the module map - Not just as keys in module map but things that can be sort of statically named.
+DE: So I wanted to present on module fragments. We talked about this a few months ago, and since we last discussed it I've made some changes to the proposal based on feedback (especially from Gus, in issue number five), and I wanted to discuss those. For a little review: module fragments are inline JavaScript modules declared inside another module. The idea is that they are named, so that they can be targeted by import statements; this makes them different from module blocks, which are anonymous and can only be used in dynamic import and other things that take module specifiers as a runtime value. Module fragments, by contrast, exist as keys in the module map - not just as keys in the module map, but as things that can be statically named.

DE: So the motivation is that module fragments allow bundling multiple modules in a single file.
I still think we should have general-purpose resource bundles, but my understanding is that resource bundles that operate at the network level are just going to be too slow for the huge number of JavaScript modules that we have, so we probably also want a complementary JavaScript-only bundling format, and that's what module fragments can accomplish. So for this basic bundling example, if you declare these modules `countBlock` and `uppercaseBlock`, then you could declare another module that imports from them, and you can see that none of these have quotes around them. So this is kind of the difference. Another aspect of this proposal is that these module fragments are only exported if they have the `export` keyword, so you can import from a private local module fragment within the same file, and then if something else imports this module, it can also import that export here.

@@ -835,7 +832,7 @@ GCL: yeah, that's a fair point.

DRR: Hey, so I think from the TypeScript perspective there are really two things that I want to call out. The first is what you've already mentioned: the `module` and namespace collision there, right? We've really pushed the community to move off of the `module` keyword to proper namespaces, just because that's the general parlance for what they represent. But we have really never pulled the rug out from underneath someone on syntax like this. That's something I think we'll have to speak a little bit more broadly about as a team, so I'll bring that back. The other thing is something that I've raised in the inline modules proposal discussion, which is just whether or not the tooling can support the sort of scenarios that you have in mind. While bundling is a fine scenario, I don't know how well this can model something like a worker that is in another project context, for example. That has all to do with being able to nest multiple global environments within the same project.
That's something that we're not exactly wired up to do, and we don't really have a good sense of how to capture that today. So that is technically an implementation concern, but it's something that I need to be up front with you about now, because we're still not really clear on how we would achieve it. So we don't want you to have a feature that has a crappy developer experience, but it is something that we will continue to investigate.

-DE: Yeah. Thanks for bringing up that second point. I mean, we've been discussing that point pretty - kind of on and off, over the recent months, and understanding is that there's already lots of developer excitement about solving this pre-existing problem of getting a better developer experience for those cases. Because juggling multiple projects, even if there are multiple files, is not really fun for anybody. So so, you know, seems like the same opportunity for improving things.
+DE: Yeah, thanks for bringing up that second point. I mean, we've been discussing that point on and off over the recent months, and my understanding is that there's already lots of developer excitement about solving this pre-existing problem of getting a better developer experience for those cases. Because juggling multiple projects, even if there are multiple files, is not really fun for anybody. So, you know, this seems like the same kind of opportunity for improving things.

DRR: Yeah, just being forthright with you.

@@ -886,18 +883,19 @@ DE: I'm definitely not asking for consensus for everything. Can we have an overf

AKI: Yes. Thank you all.
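For readers following along, the bundling example DE walked through earlier might look like this. This is module-fragment syntax as sketched in the proposal explainer; it does not parse in any engine today, and the fragment contents are illustrative:

```js
// Two module fragments declared inline in one file. `export` makes a
// fragment importable from outside this file; without it, the fragment
// stays private to the file.
export module countBlock {
  export function count(items) { return items.length; }
}

module uppercaseBlock { // no `export`: file-local only
  export function uppercase(s) { return s.toUpperCase(); }
}

export module app {
  // Note: no quotes around the specifiers. Fragments are statically
  // named keys in the module map, unlike module blocks, which are
  // anonymous values usable only with dynamic import.
  import { count } from countBlock;
  import { uppercase } from uppercaseBlock;
}
```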
### Conclusion/Resolution -* More discussion later +- More discussion later ## Array filtering / grouping for stage 2 + Presenter: ​​Justin Ridgewell (JRL) - [proposal](https://github.com/tc39/proposal-array-filtering) - [slides](https://docs.google.com/presentation/d/1fY_jsD8bVZ8P95Mr7cEr3WdCbhMLdEQ7OS5hhLCbfJ4/edit) -JRL: So I'm talking about array filtering and also grouping and I'll get to why that is in a minute. To begin with, let's just talk about array filtering. I brought a proposal a year ago about trying to solve the issues I have with the way I think about filtering. To recap, filter selects the items which we return true and it puts those items into the output array. And what I've come to understand is that a lot of people think about this as the way filtering works. This is their point of view on filtering. But I and a few others think about filter the opposite way. And to give you an example that was actually brought up before: think about a coffee filter. It's completely valid for you to look at a coffee filter and think the filter acts on the liquid, it allows the liquid to go through. But for people like me that think like I do, we see a coffee filter and think of it as acting on the grounds, it prevents the grounds from going through and this causes a lot of confusion whenever we're trying to use the filter method in JavaScript because it's the opposite. My proposal is to add a filtering out method, a method that operates the same way as I intuit that filter works, it would reject the items which return true the same way my coffee filter rejects the grounds when it's acting on them. The goal isn't primarily to make a negation easier as everything is currently possible with the filter method, you can just add a not in your predicate or you can negate it with a higher order function or something. So everything is already technically expressible. Instead I see the primary goal of this as helping with people who have the same mental model that I do. 
We can place our intuition onto the filtering out method and that helps us better understand both filtering out and filtering. To give you concrete terms, I'm proposing a filterReject method. This has changed from the previous time I talked about this because there was criticism about calling it filterOut. I think filterReject correctly describes what the method does so that everyone who's reading it can understand without confusion about what is being operated on and what the output will be. filterReject rejects items that return true. Giving it a name like filterReject, helps me put my intuition of filtering on to it. I can now think about rejection as my coffee filter does. And this also allows me to better understand the regular filter, because it'll be the opposite pairing. Having both helps people like me understand both methods better. And as long filterReject is named appropriately for people who think it the other way, the selection way, I don't think it's going to cause any confusion for people who think that way. So it should just help the people who are like me. To give you the code example, filter operates as the selection. filterReject operates as the rejection, and you get the arrays that you want. +JRL: So I'm talking about array filtering and also grouping and I'll get to why that is in a minute. To begin with, let's just talk about array filtering. I brought a proposal a year ago about trying to solve the issues I have with the way I think about filtering. To recap, filter selects the items which we return true and it puts those items into the output array. And what I've come to understand is that a lot of people think about this as the way filtering works. This is their point of view on filtering. But I and a few others think about filter the opposite way. And to give you an example that was actually brought up before: think about a coffee filter. 
It's completely valid for you to look at a coffee filter and think the filter acts on the liquid, it allows the liquid to go through. But for people like me that think like I do, we see a coffee filter and think of it as acting on the grounds, it prevents the grounds from going through and this causes a lot of confusion whenever we're trying to use the filter method in JavaScript because it's the opposite. My proposal is to add a filtering out method, a method that operates the same way as I intuit that filter works, it would reject the items which return true the same way my coffee filter rejects the grounds when it's acting on them. The goal isn't primarily to make a negation easier as everything is currently possible with the filter method, you can just add a not in your predicate or you can negate it with a higher order function or something. So everything is already technically expressible. Instead I see the primary goal of this as helping with people who have the same mental model that I do. We can place our intuition onto the filtering out method and that helps us better understand both filtering out and filtering. To give you concrete terms, I'm proposing a filterReject method. This has changed from the previous time I talked about this because there was criticism about calling it filterOut. I think filterReject correctly describes what the method does so that everyone who's reading it can understand without confusion about what is being operated on and what the output will be. filterReject rejects items that return true. Giving it a name like filterReject, helps me put my intuition of filtering on to it. I can now think about rejection as my coffee filter does. And this also allows me to better understand the regular filter, because it'll be the opposite pairing. Having both helps people like me understand both methods better. 
And as long as filterReject is named appropriately for people who think of it the other way, the selection way, I don't think it's going to cause any confusion for people who think that way. So it should just help the people who are like me. To give you the code example, filter operates as the selection. filterReject operates as the rejection, and you get the arrays that you want.

-JRL: The second part of this proposal is about array grouping. In the first time I presented array filtering, it was requested that I don't focus specifically on just the filterReject method but instead expand it into different forms of filtering and grouping/partitioning. One possibility here is a partition method, so if you call partition, it returns an array filled with two sub arrays, the first being the things that the predicate returns true for and the second being the things the predicate returns false for. This gives you a way of getting both the selections and the rejections. It's filtering except you get both things back. But there are a couple of issues with a partition that I can see. For instance, is the return value trues then falses or falses then trues? My initial guess is just assuming false is loosely equal to 0, so I kind of assumed falses should be the first subarray but that's not the way functional languages like Haskell work. All the functional languages that have partition always produce trues subarray first. I think it's a little confusing but it's not a huge issue. However, there's a better option that exists in lodash and underscore, there's a method called groupBy.
And it can be expanded out into really complex examples. This is actually something I had the code in Babel a couple of months ago because node 8 doesn't doesn't support stable sorting. So to go through the code quickly, just as an overview, I'm grouping each of the keys on an integer priority. I'm sorting the integer keys and then, concatting based on the output of that. And essentially, I have this giant chunk of before code.If we had a groupBy method I could have just written the latter code. Just bucket with groupBy on the priority and concatenate the priority buckets together to get the output. So this is to give you an example of where I have actually written this exact thing out, and I think it could be generically useful for everyone else. +JRL: The second part of this proposal is about array grouping. In the first time I presented array filtering, it was requested that I don't focus specifically on just the filterReject method but instead expand it into different forms of filtering and grouping/partitioning. One possibility here is a partition method, so if you call partition, it returns an array filled with two sub arrays, the first being the things that the predicate returns true for and the second being the things the predicate returns false for. This gives you a way of getting both the selections and the rejections. It's filtering except you get both things back. But there are a couple of issues with a partition that I can see. For instance, is the return value trues then falses or falses then trues? My initial guess is just assuming false is loosely equal to 0, so I kind of assumed falses should be the first subarray but that's not the way functional languages like Haskell work. All the functional languages that have partition always produce trues subarray first. I think it's a little confusing but it's not a huge issue. However, there's a better option that exists in lodash and underscore, there's a method called groupBy. 
groupBy is just a more generic form of partition: instead of having your keys be 0 & 1 for trues and falses, you return the key that you want to group into. So by calling groupBy, and then using the same true/false predicate, I can get back a key called false and a key called true, and each will have an array populated with the items that returned that key. And it can be expanded out into really complex examples. This is actually something I had to code in Babel a couple of months ago, because Node 8 doesn't support stable sorting. So to go through the code quickly, just as an overview: I'm grouping each of the keys on an integer priority, I'm sorting the integer keys, and then concatenating based on the output of that. And essentially, I have this giant chunk of before code. If we had a groupBy method I could have just written the latter code: just bucket with groupBy on the priority and concatenate the priority buckets together to get the output. So this is to give you an example of where I have actually written this exact thing out, and I think it could be generically useful for everyone else.

JRL: There are a few open questions that we have about grouping. The first is what the return value should be. groupBy in the ecosystem, meaning primarily lodash and underscore, which I'm familiar with, returns just regular objects. But if we're returning an object, that means it could have weird prototype inheritance bugs. So if my callback function returned a toString key name, you could have a conflict. Especially if you didn't return a toString key for this particular input array, the toString would be the inherited one. We could avoid all the inheritance issues by creating a prototype-less object. And the third option that I can think of is that instead of returning an object, we would return a Map. The keys would obviously be whatever you returned. This would actually allow you to return things like complex objects for your keys and group on those.
All of these are possible. I would prefer to follow the ecosystem here and just use a normal object. But I am willing to discuss all of them. @@ -917,7 +915,7 @@ JHD: Okay. WH: I’m weakly unconvinced about `filterReject`. I just don't see much of a use case for it, and if we do have it, we should call it `reject`. -WH: I'm much more interested in `groupBy`. It seems like a useful thing for grouping things. My concern is about making things which work 99 percent of the time and have weird edge cases like the inheritance problems you mentioned. People will want to use this for database-like things where you get a bunch of results and group them by some part of a key. When that happens, I don't want to have to look up what happens if somebody uses “__proto__” for that key. Or if somebody puts both the value `true` and the string `"true"` in there. So my preference would be to have Maps because that's the most well-behaved kind of output. +WH: I'm much more interested in `groupBy`. It seems like a useful thing for grouping things. My concern is about making things which work 99 percent of the time and have weird edge cases like the inheritance problems you mentioned. People will want to use this for database-like things where you get a bunch of results and group them by some part of a key. When that happens, I don't want to have to look up what happens if somebody uses `__proto__` for that key. Or if somebody puts both the value `true` and the string `"true"` in there. So my preference would be to have Maps because that's the most well-behaved kind of output. JRL: I could agree to that. I don't feel strongly enough about any of the three options to force anything here. I think all three options are valuable and the only reason prefer the regular object is just because of the ecosystem precedent. @@ -967,7 +965,7 @@ JRL: Thank you. I actually had that same opinion brought up when I was calling i ???: ?? -JRL: I don't think it's appropriate to ask for stage 2 on groupBy. 
So I'll ask for that separately. I am looking for stage two on filterReject. +JRL: I don't think it's appropriate to ask for stage 2 on groupBy. So I'll ask for that separately. I am looking for stage two on filterReject. MF: I would not support stage 2 on filterReject until we've done further research on whether groupBy solved the originally stated problem here because I feel that if we have grouped by and it is a solution to your originally stated problem that we do not need filterReject. @@ -989,7 +987,7 @@ MF: Great. I don't want to be difficult here, but like Aki was saying earlier, s MM: Okay, groupBy solves a much bigger range of problems. So I would certainly. So maybe there's some writing that needs to happen before we can do this. But I would be willing to say that the problems that groupBy solves are well enough understood that I'm willing to say let's go to stage 1 on that set of problems with groupBy being the example approach for addressing those problems. -MF: I'm fine with that. Please Justin in your description in your repository, address the problem and not just the solution. +MF: I'm fine with that. Please Justin in your description in your repository, address the problem and not just the solution. WH: I take a different procedural position and I would say that, in my opinion, the `groupBy` proposal is almost at stage 2. We already have spec text for it, with the only modulo being that I would want the prototype gone from the produced objects and possibly a Map version. @@ -1001,7 +999,7 @@ JRL: I agree. AKI: Do we have a conclusion to record here? -JRL: I'm hoping the conclusion is groupBy reaches stage one. +JRL: I'm hoping the conclusion is groupBy reaches stage one. JHD: Can we come up with a name for the problem that groupBy solves and perhaps grouping with its own repo. Okay. And then that addresses Michael's point and then filter rejected be discussed separately. @@ -1020,8 +1018,6 @@ MM: So what am I agreeing to? 
If I were to agree to stage two, I'm sorry I keep MF: I would not like `filterReject` to advance to stage 2 until `groupBy` has had further progress, so it should not advance to stage 2 today. I don't think we need to go into process discussion right now. ### Conclusion/Resolution -* array grouping gets stage 1 -* filterReject does **not** get Stage 2 - - +- array grouping gets stage 1 +- filterReject does **not** get Stage 2 diff --git a/meetings/2021-07/july-15.md b/meetings/2021-07/july-15.md index be02f2c3..b61eab20 100644 --- a/meetings/2021-07/july-15.md +++ b/meetings/2021-07/july-15.md @@ -1,7 +1,8 @@ # 15 July, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -13,9 +14,8 @@ | Josh Blaney | JPB | Apple | | Philip Chimento | PFC | Igalia | - - ## Intl.NumberFormat v3 for stage 3 + Presenter: Shane Carr (SFC) - [proposal](https://github.com/tc39/proposal-intl-numberformat-v3#readme) @@ -23,9 +23,9 @@ Presenter: Shane Carr (SFC) SFC: Hello everyone. I'm here to present my proposal: Intl.NumberFormat v3 for Stage 3. So first, let me go ahead and just for people who haven't seen this proposal yet just tell you what this is all about. This first came up for Stage 1 last year. So this has been baking for about almost a year and a half. Now for this proposal, the way that we came up with this was by going through all of the feature requests that get filed against the ecma-402 repository and evaluating each one of them based on its merits based and in particular, based on whether it has a lot of stakeholders, whether it has prior art, and whether it's expensive to implement in user land. 
Just as an example, these are two features that we considered: additional scientific notation styles, but we decided not to implement that particular feature because it doesn't have a lot of stakeholders, and it doesn't have a lot of prior art. Whereas Number Ranges had a lot of stakeholders and a lot of prior art, so that's how we came up with this proposal. -SFC: So, I'll go ahead now and show you what got into the proposal, what we're hoping to advance, and I will highlight updates. The last time I gave a presentation on this was at the April meeting, and I'll highlight any changes that have been made since April. +SFC: So, I'll go ahead now and show you what got into the proposal, what we're hoping to advance, and I will highlight updates. The last time I gave a presentation on this was at the April meeting, and I'll highlight any changes that have been made since April. -SFC: Okay, so first we have the format range function in Intl number format. This supports all of the features that are otherwise supported in Intl.NumberFormat. It allows ranges of currencies, numbers, and measurement units. As you can see here, here are some details about how it works. The formatToParts will follow a model much like datetimeformat range. We use a localized approximately symbol. We also support this in Intl plural rules and we support ranges to Infinity but not NaN. +SFC: Okay, so first we have the format range function in Intl number format. This supports all of the features that are otherwise supported in Intl.NumberFormat. It allows ranges of currencies, numbers, and measurement units. As you can see here, here are some details about how it works. The formatToParts will follow a model much like datetimeformat range. We use a localized approximately symbol. We also support this in Intl plural rules and we support ranges to Infinity but not NaN. SFC: The next is the grouping enum. 
So, for the grouping enum, this has been one of the biggest pain points, one of the top two issues that gets filed against 402 is that people want more control over how to display grouping separators. And we're now going to be following the CLDR and ICU convention for how we do this by adding additional options: instead of just `true` and `false`, we have three different types of true, which are represented as strings, and then `false`. So we're reusing the existing useGrouping feature and retrofitting it to split the value true into three different types of true as previously discussed. @@ -45,13 +45,13 @@ SFC: So that's my last question. So then let's see I'll check if there's anyone JHD: I think my comment is the limit of my interests. -BT: You've got a couple of folks on the queue as requested. Ask and you shall receive, Ujjwal’s first with explicit support. Would you like to make that more explicit? [silence] Don't hear you if you're trying to talk. So let's go to Kevin Gibbons. +BT: You've got a couple of folks on the queue as requested. Ask and you shall receive, Ujjwal’s first with explicit support. Would you like to make that more explicit? [silence] Don't hear you if you're trying to talk. So let's go to Kevin Gibbons. -KG: Yeah, for the hex question, I suspect that I am neutral on it, but can you give an example of an API that will now accept a hex string? +KG: Yeah, for the hex question, I suspect that I am neutral on it, but can you give an example of an API that will now accept a hex string? SFC: It looks like actually we already accept a hex string. So this is a moot point, isn't it? Because I actually just tested this and we actually already support hex strings. So we continue to support hex strings. -KG: All right, then my opinion is that we should continue to support hex strings. Can you reassure me that you don't accept octal? +KG: All right, then my opinion is that we should continue to support hex strings. 
Can you reassure me that you don't accept octal? SFC: It supports whatever is in the StringNumericLiteral grammar. @@ -61,7 +61,7 @@ SFC: I intend to use the one that's in section 7.1.4.1, not the one in Annex B KG: Cool. Okay. Good to know. -SFC: I believe, if I check the spec, that should be correctly cross-referenced. We had this interesting part of the spec that you know, if you're interested in this, I would hope that I would like to have one more set of eyes on like, how I did the Intl mathematical value. Because basically like, you know, it's a little interesting what I did here because I basically said like, parse the number using the StringNumericLiteral grammar. But extract out the mathematical value before returning it as a number. So if I could get one more set of eyes on that, that be useful. +SFC: I believe, if I check the spec, that should be correctly cross-referenced. We had this interesting part of the spec that, you know, if you're interested in this, I would like to have one more set of eyes on how I did the Intl mathematical value. Because it's a little interesting what I did here: I basically said, parse the number using the StringNumericLiteral grammar, but extract out the mathematical value before returning it as a number. So if I could get one more set of eyes on that, that'd be useful. KG: Yeah, I'd be happy to give that a review. I've been - well, the 262 editors have all been messing with this stuff lately so it's all paged in. @@ -71,7 +71,7 @@ USA: Yeah. I just wanted to explicitly support this. Thank you for working so ha SFC: Thank you. I don't see anyone else joining the queue. There's someone, Waldemar? -WH: Regarding KG’s question: There are actually three numeric grammars in the spec.
There is (1) the grammar for numeric literals that appear in source code, there is (2) the Annex B grammar that modifies grammar (1), and there is (3) the *StrNumericLiteral* grammar that’s used for numeric literals parsed from strings. I don’t believe Annex B modifies grammar (3). Note that *StrNumericLiteral* does allow octals using the `0o` prefix. +WH: Regarding KG’s question: There are actually three numeric grammars in the spec. There is (1) the grammar for numeric literals that appear in source code, there is (2) the Annex B grammar that modifies grammar (1), and there is (3) the *StrNumericLiteral* grammar that’s used for numeric literals parsed from strings. I don’t believe Annex B modifies grammar (3). Note that *StrNumericLiteral* does allow octals using the `0o` prefix. KG: That's right. Yes, so presumably that is accepted here, which is okay. I don't have a problem with accepting octals with the `0o` prefix. @@ -82,8 +82,8 @@ BT: All right. Any objections out there for stage 3, any worries? [silence] Soun SFC: Thank you. It's been a great experience, working on this proposal. ### Conclusion/Resolution -* Stage 3, KG to review the specification of the string grammar (and WH if interested). +- Stage 3, KG to review the specification of the string grammar (and WH if interested). ## Module fragments (continuation) @@ -95,9 +95,9 @@ DE: Thanks for the question. So I MAH raised this question yesterday, that is, h CZW: Yeah, we can maybe discuss this offline. -MM: So I just want to note that since Dan's presentation, we've tried to figure out how these concepts relate. And we do not understand how module fragments can be module blocks. We very much like the idea of ending up with fewer concepts and of unifying module blocks with static module records. 
So I just to note that we are looking forward to working on this with Dan and trying to work through these issues because we are very interested in seeing how simple a system can be that satisfies all of the nees. +MM: So I just want to note that since Dan's presentation, we've tried to figure out how these concepts relate. And we do not understand how module fragments can be module blocks. We very much like the idea of ending up with fewer concepts and of unifying module blocks with static module records. So I just want to note that we are looking forward to working on this with Dan and trying to work through these issues, because we are very interested in seeing how simple a system can be that satisfies all of the needs. -DE: I'm looking forward to working with you here too. So when we were discussing yesterday, I was trying to give this explanation about how they relate in terms of - so could you elaborate on the unsure-ed-ness? +DE: I'm looking forward to working with you here too. So when we were discussing yesterday, I was trying to give this explanation about how they relate in terms of - so could you elaborate on the unsure-ed-ness? MM: Yeah. So your explanation included the phrase "closing over", which is not something that a purely static reflection - something that's semantically equivalent to reusable source text - could do. So, as soon as you have the thing closing over access to other modules, then it's something quite beyond a module block or a static module record. I see Chris is on the queue. I'll defer to Chris for a more in-depth explanation of our puzzlement. But like I said, we really want to work through this and solve problems. We would like to see something go forward. @@ -113,7 +113,7 @@ KKL: Also linked on the calendar invitation is the list if you'd like to partici MM: Who has the power to edit the TC39 calendar item? -AKI: Funny you should ask. I was just looking that up for a completely different reason.
I don't have an answer to her yet +AKI: Funny you should ask. I was just looking that up for a completely different reason. I don't have an answer to her yet. JHD: A number of us do. I'm pretty sure I can make those changes, Mark. Message me. @@ -121,7 +121,7 @@ BT: So for those who want to continue this discussion, check the TC39 calendar f GB: Unfortunately, I didn't make the discussion yesterday, but from what Dan was just saying, it sounds like there was discussion around representation of these in import maps. The topic specifically is one that I've brought up previously. I just wanted to check if that's been discussed: the principle of how sharing code between different builds can work under the constraints of this type of an optimization, or using this as a form of optimization. -DE: I had a slide about a way that it might work, but I think it would be useful for you to describe the motivation from your end of having this support. +DE: I had a slide about a way that it might work, but I think it would be useful for you to describe the motivation from your end of having this support. GB: Okay, I guess the appeal here is that it opens the door for a kind of an optimization of combining modules together into a single file where they would have been separate module files, and that's a very appealing aspect of the specification. One of the natural things that you end up doing with a large application is you have your sort of core application build that has a bunch of core modules, and then you have your sort of apps or components that build on top of that core application that you might want to build separately, that again need to link against modules in the core build. And what happens if there's a module that's been inlined into the core build that you now want to reference from outside the core build and in one of the subsequent builds?
And so you end up needing some kind of registry manifest that maps them and tells you how you can reference that module. The way that this specification is currently written, you've needed - sorry, I'll just post the issue that explains the motivation for them. If you need it as sort of an import, you need to know the identifier, and it would also need to be a public export of the core build. So, just thinking of that, if you've got React in this build and you only want to then link your components also against that same version of React. So it's kind of a portability code-sharing use case question. @@ -135,7 +135,7 @@ GB: Yeah, I see that. I guess my only concern with that kind of approach is crea DE: I think this is pretty common for TC39 proposals that they only really work if there's something that's done in hosts to make them work. For example, we're discussing Realms at this meeting also. And to make module fragments work at all it definitely needs host integration. So, the way that we work this out procedurally in TC39 is typically to have a proposed host integration laid out, maybe not all the details, but at least the broad strokes, before stage 3 and get some kind of buy-in from the host or at least some of the hosts - maybe rough buy-in - before stage 3. So I think that would be a reasonable thing to ask for here, given the really tight connections. I agree that there's this timing issue that we wouldn't want to get ourselves into a sticky situation where we kind of commit ourselves to do this, but then don't manage to solve some of the important problems. -GB: Yeah, I agree that starting those conversations early and sketching out those Integrations and getting feedback on that is a great direction and I'd like to see progress on that as well. +GB: Yeah, I agree that starting those conversations early and sketching out those integrations and getting feedback on that is a great direction and I'd like to see progress on that as well.
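GB's registry-manifest concern could, as DE suggests, be addressed through an import-maps-style host integration. Purely as a hypothetical sketch (this mapping is illustrative; neither the import maps extension nor the `#fragment` addressing shown here is specified anywhere), such a map might let a module fragment inlined into a core bundle keep a stable public specifier:

```json
{
  "imports": {
    "react": "/core-bundle.js#react",
    "app/header": "/components-bundle.js#header"
  }
}
```

A separately built component could then import `react` by its public name without knowing which bundle the fragment was inlined into, which is the double-import indirection GB describes.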
DE: Yeah, I guess I want to suggest that we order this by first working out these kinds of foundational issues that other people have raised, and then if we end up feeling confident about non-string module fragment specifiers, then we would try to build this stronger consensus about the import maps extension. @@ -143,7 +143,7 @@ GB: I mean, I'm not sure what would work best for the process. I just think it's DE: Yeah, I agree. Thanks for maintaining that eye and let's keep working on this particular thing. -JRL: on the same topic we were discussing import maps. These module blocks and the named module blocks that you're proposing. We don't require import maps, correct? It's just a nice to have that would make it easier use. +JRL: On the same topic, we were discussing import maps. These module blocks and the named module blocks that you're proposing: we don't require import maps, correct? It's just a nice-to-have that would make it easier to use. DE: So TC39 doesn't require that hosts support import maps. I think they're not supported in all browsers yet. They're not technically on a standards track yet, but hopefully they will be soon, like, maybe they'll be part of the WHATWG HTML spec, I'm not sure, but that's going to take multiple browser support. So, definitely not something that TC39 requires. @@ -151,7 +151,7 @@ JRL: But the proposal that you're talking about now does not require it and we c DE: So you know I'm pretty convinced that Guy's usage pattern is important and we should work out a solution to it. -GB: To try and clarify, for the non-import-maps scenario.
You would still need tohave the same double-import pattern to access one of the imports from within the module fragment package. Like that same example. It's a level of indirection that requires a side manifest or bundlers to maintain, if that makes sense. +GB: To try and clarify, for the non-import-maps scenario. You would still need to have the same double-import pattern to access one of the imports from within the module fragment package. Like that same example. It's a level of indirection that requires a side manifest or bundlers to maintain, if that makes sense. JRL: Okay. @@ -161,15 +161,15 @@ GB: So for example, if you had a module that imported React and you wanted to li DE: So, can you elaborate about why this would be difficult for tools? -GB: Well, firstly, it requires all build tools to share a common sort of manifest format for declaring these modules in bundles, or, you know, you would stick with a single build system. At the moment, effectively, I guess we are treating all public modules as separate module files and so I guess it is a consequence of effectively inlining everything into a single bundle that you get that indirection between private and public specifiers or modules. I guess just thinking of the consequences for tooling and build processes etc, it's something that is worth engaging further with current build tools on and and prototyping some of these ideas further, to make sure they're working out smoothly. +GB: Well, firstly, it requires all build tools to share a common sort of manifest format for declaring these modules in bundles, or, you know, you would stick with a single build system. At the moment, effectively, I guess we are treating all public modules as separate module files, and so I guess it is a consequence of effectively inlining everything into a single bundle that you get that indirection between private and public specifiers or modules. I guess just thinking of the consequences for tooling and build processes etc, it's something that is worth engaging further with current build tools on and prototyping some of these ideas further, to make sure they're working out smoothly. DE: Yeah, definitely, I'd like to encourage prototyping of this stuff before stage 3. So yeah, that's a good set of things that could kind of go wrong.
You mentioned public and private module fragments, and I did want to mention again that, you know, being able to declare a private module fragment was a pretty widespread feature request from my initial version, which used string-based specifiers and had them only be public. It's really unclear to me how private module fragments could work if we have entirely string-based specifiers, because you kind of want those to be meaningful in a general way, right? Right now when you have a string-based specifier, not necessarily in all environments, but on the web and in web-like environments, you can normalize that or you can make sure that it's an absolute path or a relative path. And then it has the same meaning, it's a common key, and so I don't see how that would fit together with private module fragments, though that's been a pretty common feature request. And I think lexical scoping or choosing not to export something is a natural way to express privacy. So if we do decide to go back to strings, then I would appreciate more help with how to think through these concepts in a way that could support private module fragments, or maybe we would come to an understanding we don’t need them. -GB: Yeah, there's definitely pros and cons. I guess it's important to flesh out the use cases and the prototyping. And see the end to end workflows and make sure that the portability as a feature can work out smoothly. +GB: Yeah, there's definitely pros and cons. I guess it's important to flesh out the use cases and the prototyping, and see the end-to-end workflows and make sure that portability as a feature can work out smoothly. DE: Yeah, totally agree. Great. -BT: The queue is empty. Any closing words you want to give Dan? +BT: The queue is empty. Any closing words you want to give, Dan? DE: Thank you so much for all the deep thought that all of you are engaging in and looking forward to working with you further here.
So please check the SES calendar for when we'll have the next meeting, or I can make a little reflector reminder about it for when we will discuss this in more detail. @@ -177,16 +177,13 @@ KG: For the notes, there's no particular conclusion from this item, right? DE: So, the proposal is at stage one. I suggested this change to variables, which it seemed like people liked the idea of, or at least how the syntax looked, but there's real conceptual questions about whether it's viable. And so we're going to be looking more into whether it's viable, whether we can work out the details, and there are lots of different details to work out and people have raised really good questions. - - ### Conclusion/Resolution -Did not seek stage advancement, interest in variable style but more thinking to be done - +Did not seek stage advancement, interest in variable style but more thinking to be done ## Incubator call chartering -Presenter: Shu-yu Guo (SYG) +Presenter: Shu-yu Guo (SYG) SYG: So the current charter is now empty, so there's no overflow from last time. I want to call out that, partially due to a scheduling snafu and partially due to general lack of interest, there ended up being no incubator call for pattern matching. There was one for pipeline but there was none for pattern matching, and my recollection from the previous plenary was that we had added pattern matching as an item on the charter because there was interest from non-champion delegates, but nobody actually signed up. So I'm considering it not overflow for now, but I would like to call out again: Is there interest from non-champions to discuss pattern matching? @@ -196,8 +193,7 @@ DE: I would be interested in a call. I'm sorry I haven't been taking the reflect SYG: Okay, I'm certainly fine with having it on the charter again this time provided we get enough general interest.
It's my understanding that the champion group for pattern matching already has their own call, so the only reason we have an incubator call for this is to get the broader feedback. So with at least Dan and possibly some other Igalians, we can have that. And I was going to propose that we add the base64 proposal to the charter for this time, given the surprising amount of contention around padding and just how complex this entire space is. Would folks be interested? - -KG: Yeah, as champion, I would certainly be interested, especially if Peter thinks he might have time to attend I think that would be especially helpful. Or anyone else who was interested. +KG: Yeah, as champion, I would certainly be interested, especially if Peter thinks he might have time to attend; I think that would be especially helpful. Or anyone else who was interested. PH: Schedule permitting, I would be happy to join a call on that. @@ -223,21 +219,19 @@ KG: Would it maybe make sense to ask for people who are interested in being ping SYG: Yeah. I can just write it down, I guess, or it can be captured as part of the notes; that would be easier. If you are interested in either pattern matching or base64, please indicate in the 8x8 chat and then I will record it and ping you when we schedule. - ### Conclusion/Resolution + - chartered pattern matching - interest: RGN, BSH, WH - chartered base64 array buffers - interest: KG, PH, RGN - - ## Capitalization of nested namespace objects (continuation) + Presenter: Philip Chimento (PFC) - [slides](https://ptomato.github.io/talks/tc39-2021-07/index.html) - PFC: This is a continuation of the discussion from yesterday. It seems like we need some overflow time to figure out the capitalization of nested namespace objects and set a precedent for future proposals. So I made a couple of additions to this slide relative to yesterday.
People agreed about the value I proposed for @@toStringTag for namespaces, and we were still talking about capitalization. There was some talk about the definition of a namespace object. So I wondered, maybe to get the discussion going again, is it strictly necessary to have an unambiguous definition of a namespace object? Could we say 'I know it when I see it,' but when I do see it it has to be capitalized or has to be not capitalized? And then I thought I'd put on there what I found the most convincing point from yesterday's discussion: the convention for how a namespace is capitalized, or spelled, usually is derived from the kind of thing it is, not from where it's put, from SYG yesterday. Let's discuss. WH: The discussion is too abstract for me. I'd like to see a list of which names we’re talking about and which are the controversial and interesting cases. @@ -250,23 +244,23 @@ PFC: At the moment, yes. MM: So we have this thing in the language that's already called namespace object. That doesn't mean that we necessarily consider it the same category but since we're searching for what the criteria is, those who think that they have a category in mind that is crisp even if you can't state the definition: do you consider module namespace objects to be namespace objects? -SYG: I do not, MM, and I'll quickly explain why. It's because they are module namespace objects make sense (?) in JavaScript. And they are always kind of reified actual objects. The crisp intuition I have, but lacking a rigorous definition is that, if there was a concept of the namespace that does not require reifying an actual object and I could do that, then that's a namespace object. Like, I could do that with Math, I could do that with Atomics. I cannot do that with module namespace objects, +SYG: I do not, MM, and I'll quickly explain why. It's because they are module namespace objects make sense (?) in JavaScript. And they are always kind of reified actual objects. 
The crisp intuition I have, but lacking a rigorous definition is that, if there was a concept of the namespace that does not require reifying an actual object and I could do that, then that's a namespace object. Like, I could do that with Math, I could do that with Atomics. I cannot do that with module namespace objects, -MM: Oh, Atomics. That's our other precedent, right? +MM: Oh, Atomics. That's our other precedent, right? KG: And JSON, right? And Reflect, and Intl. MM: Yeah, I guess Reflect. Wow, that's a lot. I'm not opposed to having the naming convention, the initial cap, to distinguish these, if we can find a criteria and that's all I wanted. So, I want to encourage you to find a criteria. To go forward with this as a precedent without having a criteria we can state I think is going to lead to the same kind of confusion that having two bottom values `null` and `undefined` does, which is in that case there's a stated criteria that's essentially useless in practice and people just don't have guidance about when to use one versus the other. So they do it based on an unstated intuition and they end up writing code that disagrees with other code for no good reason. I would like to avoid introducing a naming convention based on a similarly vague in practice criteria. -SYG: Fair enough. I kind of like what matches the queue item from MAH, but before that, KG, did you have anything to add about yours? +SYG: Fair enough. I kind of like what matches the queue item from MAH, but before that, KG, did you have anything to add about yours? KG: Oh, this was just answering MM's question. I would agree with SYG that module namespace objects are not namespace objects in this sense, I think it's kind of a category error to ask that question. I would regard namespace objects as being a property of code itself, not a property of particular values in the language. 
So module namespace objects are particular values in the language but a namespace object in the sense that we are using it here is about how your code is structured, and what role this code serves in the rest of your program. -MM: And I'm perfectly fine with the module namespace to be understood as just a completely separate category. I wanted to probe the advocates, so thank you. +MM: And I'm perfectly fine with the module namespace to be understood as just a completely separate category. I wanted to probe the advocates, so thank you. -PFC: Is it fair to say, this is not a language concern but a standard library concern? KG, is that what you're saying? +PFC: Is it fair to say, this is not a language concern but a standard library concern? KG, is that what you're saying? -KG: No, saying something is a namespace object is about the structure of your code, not about the value itself. So `Temporal.now` is a namespace object because it is a singleton collection of values. It exists only to provide access to other values. I think those are perhaps the defining criteria actually. That is about how my program is structured. Not about what kind of value it is, on the whole. +KG: No, saying something is a namespace object is about the structure of your code, not about the value itself. So `Temporal.now` is a namespace object because it is a singleton collection of values. It exists only to provide access to other values. I think those are perhaps the defining criteria actually. That is about how my program is structured. Not about what kind of value it is, on the whole. BT: Just a reminder that we just had 15 minutes for this item. @@ -306,9 +300,9 @@ WH: Given the concrete discussion is about `Temporal.now`, I'm in the camp that PFC: I think we said yesterday that they weren't, because they are constructors. -MM: In terms of the way I phrased my question about capitalized paths, the names of those constructors would count as qualifying. 
So you could have a path of alternating constructors and namespace objects, all of which are named with capitals, would qualify by the suggested criteria that I was asking. +MM: In terms of the way I phrased my question about capitalized paths, the names of those constructors would count as qualifying. So you could have a path of alternating constructors and namespace objects, all of which are named with capitals, would qualify by the suggested criteria that I was asking. -WH: I would prefer that. To give a ridiculous example, if we made an `Array.Now` with the same contents as in `Temporal.Now`, the `Now` should be capitalized. +WH: I would prefer that. To give a ridiculous example, if we made an `Array.Now` with the same contents as in `Temporal.Now`, the `Now` should be capitalized. JHD: I think, regardless of what we come up with, something being a constructor defines its capitalization, or a constant defines its capitalization, in a way that precludes it being a namespace object, having anything to do with its capitalization. It doesn't matter if it's used as a namespace object or not. If it's a constructor it's capitalized, it's a title case, and if it's a constant, it's all caps or something, right? @@ -330,7 +324,7 @@ SYG: Yes. I was mainly trying to distinguish from getters and setters. It should KG: This definition sounds good to me. -MM: The thing that I was reaching for that is omitted by what SYG said and what MAH wrote in his upcoming question is, that the thing that I want to disqualify is, e.g. `Array.prototype.Now`. I think that would be a bad place to put a namespace object. I don't want this precedent to lead to thinking that that would be a good place for a namespace object or likewise hung off an `Array.prototype` method like `push()`. The intuition here, I think I can justify. It's not just a typographic thing, it's that right now, the lowercase names are about instances, and the uppercase names are about the static world. 
I understand that's not crisp either, but the disqualifying criteria that I'm suggesting is at least a crisp criteria, that's motivated by a non-crisp intuition. +MM: The thing that I was reaching for that is omitted by what SYG said and what MAH wrote in his upcoming question is, that the thing that I want to disqualify is, e.g. `Array.prototype.Now`. I think that would be a bad place to put a namespace object. I don't want this precedent to lead to thinking that that would be a good place for a namespace object or likewise hung off an `Array.prototype` method like `push()`. The intuition here, I think I can justify. It's not just a typographic thing, it's that right now, the lowercase names are about instances, and the uppercase names are about the static world. I understand that's not crisp either, but the disqualifying criteria that I'm suggesting is at least a crisp criteria, that's motivated by a non-crisp intuition. SYG: I completely agree that prototypes should most definitely not be considered to be able to contain namespaces. @@ -360,7 +354,7 @@ JHD: I personally still prefer the lower case one, I think it looks better and I JHX: Objects are normally not capitalized in userland. -SYG: What are userland name space objects? in your example, you have a module. +SYG: What are userland name space objects? in your example, you have a module. JHX: When you import a module you get a module namespace, and normally it is not capitalized. And I think the system case, like if you use lodash or, lodash is actually very like a namespace. @@ -374,13 +368,13 @@ MAH: I know one module pattern that followed this namespace somewhat globally ac MM: We agree that the thing that we're calling the module namespace object is simply a different concept than the thing that in this conversation we're calling a namespace object. I think we should be concerned about creating confusion if we use the term namespace for both concepts. Can we rename one of them? 
-KG: I don't think this comes up enough to be that necessary. How often do we introduce a new "namespace object" (in this sense)? Once every two years? +KG: I don't think this comes up enough to be that necessary. How often do we introduce a new "namespace object" (in this sense)? Once every two years? -MM: I don't have a suggestion. In the absence of a suggestion, there's nothing to rename it to. +MM: I don't have a suggestion. In the absence of a suggestion, there's nothing to rename it to. -SYG: Yeah, I think I agree with KG here that sorting this out is for our benefit as delegates in be precise in our meaning for future proposals. I think the broader community will probably say stuff like `Math` and `Atomics` and `Temporal.Now`, and continue to call module namespace objects, module namespace objects. +SYG: Yeah, I think I agree with KG here that sorting this out is for our benefit as delegates, to be precise in our meaning for future proposals. I think the broader community will probably say stuff like `Math` and `Atomics` and `Temporal.Now`, and continue to call module namespace objects, module namespace objects. -MM: I don't have a better suggestion. +MM: I don't have a better suggestion. SYG: My concrete suggestion is, so long as the phrase 'module namespace object' remains like an idiom for us, I don't see the need yet to rename it. @@ -400,16 +394,16 @@ SYG: I'll volunteer to write that up and I would appreciate your review, MM. MM: Yeah. And MAH. -WH: Yes, I agree on all counts. +WH: Yes, I agree on all counts. PFC: Great. It sounds like we're done then. ### Conclusion/Resolution + - Namespace objects will be capitalized, including when nested.
- This means `Temporal.now` is renamed to `Temporal.Now` - SYG to write up a definition of "namespace object" - ## Realms for Stage 3 (continued) CP: We added a couple of slides to try to shape the conversation based on the discussion that we had two days ago and kind of with, with this topic about the scope of the current proposal and specifically the direct access to objects, and the two slides are very simple. The first one is an example trying to demonstrate how much hazard you have to deal with when you try to use an object from another realm directly, this is a simple example. Assuming that you’re a library author and this one of the use cases that I know Jordan has talked about multiple times, being able to create a library that is resilient to changes that are happening in the current realm, specifically, polyfills and other things that developers do today to modify the intrinsics. This is a good example of the kind of things that Jordan has in mind for realms with direct access to objects. In this particular example, we're accessing the global object from a newly created realm. Let's call it “the other realm” and accessing a couple of intrinsics from the global object of the new realm. In this case, the slice and indexOf intrinsics, which are very common operations that you would do when you're doing any kind of array manipulation. I’m assuming that we have, somehow, access to a function that returns a fibonacci sequence, a simple function that when you call it with a number, it returns an array, or an array-like. Something that looks like an array that has the Fibonacci sequence for the number that you passed into it. It’s a very simple example, it's just a hypothetical Library. Don't go too much into the details there when the sequence comes back, which is an array-like, then you're going to use the indexOf intrinsic operation to try to find the index of the min number passed as the first argument. Let's assume that the minimum is always present. 
Let's not get into too much of the potential things that can go wrong here in this algorithm. Assuming that the index is there, then you're slicing that array in order to get the segment that you care about. This library is supposed to be called the Fibonacci Segment, so it's giving you a segment of the Fibonacci sequence, and you are just trying to create code that is resilient to any environment modification, so that this function can be called without being affected by the current environment. Is this safe? The answer to that question is, “no”, this is not safe. And when I say safe, I'm not talking about security, I’m simply talking about the fact that this is very hazardous for the library author and whoever uses this library and we can explain why. I will give you a minute to think about it and let’s try to highlight the problem exhibited here. @@ -422,9 +416,9 @@ JHD: I mean, the specific potential issue here is that this will return an array CP: right, but there is a list of other problems here. I did a little bit of research on these, I think, about a year ago, when I was doing a research around the type of intrinsics that we have in the language. At that point, we were talking about what we call the undeniable intrinsics and some other kind of intrinsic. So I did some research on slicing and dicing the intrinsics, and at the time I identified two types of intrinsics when it comes to leaking realm information. I have one slide for that. At the time, I used these two terms: “computational intrinsics” and “realm bound intinsics”. Computational intrinsics are any intrinsics that when you call it with whatever arguments, they return a primitive value, so they don't leak any realm's specific data or specific identity. While realm-bound intrinsics do leak because the values returned are objects that are somehow bound to the realm associated to the function that you're calling, which is an intrinsic. 
Additional they both leak, and this is the biggest challenge, they both leak if an error occurs when you call that intrinsic, the error itself is going to be leaking information about the realm, it will be an error object with `__proto__` set to an intrinsic from the realm. And this problem is, in my opinion, fatal in many cases, fatal for library authors and consumers, because people are going to bang their heads against the wall to try to figure out what's going on. Why is this thing important? Let's assume that instead of you actually using an array you're dealing with a Date objects. If you’re doing any kind of date operation, you’re doing any creation of new dates objects of any kind, you're going to run into these kinds of problems. Any intrinsic, and we don't have a survey of how many of them fall into these two categories, but I can tell you that many of them will be leaking information on the return value, and the majority of them could potentially leak information through an error, and this is the kind of things that we believe having the callable boundary eliminate entirely. I would say that for you particular use cases of using realms to be able to get brand-new intrinsics, I would say it's probably not economically viable because you have to defend against all these problems and then the people that are using your library, the returning values that are not from the same realm that you are on, it will be a deal breaker for many of them as well. That's my opinion on the matter but it is a reality that we have faced multiple times when dealing with multiple realms with the iframes and the research that we have done around it. So I hope that this information also helps others to understand that there is a real problem when it comes to using anything coming from another realm with direct access to it. -JHD: I had a quick clarifying question, CP. If this is using callable realms, how would it work? +JHD: I had a quick clarifying question, CP. 
If this is using callable realms, how would it work? -CP: If you are using a callable realm, You have to do some gymnastics, because you're trying to do something with a data structure. In this case, if the array contains multiple primitive values, in order for you to pass that information to the outer realm, you have to do certain gymnastics, I would call it like that, if the array that you're going to leak is an array of primitive values. And this is a trick that we use in the membrane implementation that we create just to kind of provide a proof of concept. If what you're leaking is an array of primitive values. In that case, what you could do if the array is never going to exceed the maximum amount of argument that a function allows, you can use a wrapped function from the outer realm to call it with an array as arguments, so you do `Reflect.apply` on it and you basically provide all these arguments on the other end, you destruct the arguments, and you get an array from the outer realm. That's a trick that we use sometimes. If the array has more items, you have to do other kinds of tricks. Again, it's part of what we call the boilerplate that you have to do when you're sharing data and that data has some identity and so on. It's obviously complicated, but it doesn't have these footgun. +CP: If you are using a callable realm, You have to do some gymnastics, because you're trying to do something with a data structure. In this case, if the array contains multiple primitive values, in order for you to pass that information to the outer realm, you have to do certain gymnastics, I would call it like that, if the array that you're going to leak is an array of primitive values. And this is a trick that we use in the membrane implementation that we create just to kind of provide a proof of concept. If what you're leaking is an array of primitive values. 
In that case, what you could do if the array is never going to exceed the maximum amount of argument that a function allows, you can use a wrapped function from the outer realm to call it with an array as arguments, so you do `Reflect.apply` on it and you basically provide all these arguments on the other end, you destruct the arguments, and you get an array from the outer realm. That's a trick that we use sometimes. If the array has more items, you have to do other kinds of tricks. Again, it's part of what we call the boilerplate that you have to do when you're sharing data and that data has some identity and so on. It's obviously complicated, but it doesn't have these footgun. JHD: Arrays are kind of a unique example here, obviously, because we've all been using `Array.isArray` for a long time and Array.isArray tunnels through proxies. So it's like one example but we're... @@ -435,10 +429,11 @@ CP: So yes, it's not only about recognizing that it is an array. It's also about JHD: Date has a brand check. ### Conclusion/Resolution -No resolution +No resolution ## Realms for Stage 3 (Continued) + Presenter: Caridy Patiño (CP) - [proposal](https://github.com/tc39/proposal-realms) @@ -448,7 +443,7 @@ CP: All right. So, that was the example that we wanted to show. You don't have a MM: yeah, I just want to add a qualification to make sure to clarify the example. Everything they said about the example, the points that they're making with the example, I completely endorse all of points that are made in the example. The thing that might be misleading about this particular example, is that all of the array methods are generic. They work on anything. I like and that includes a proxy to array. Not because it punches through proxy. is nothing to do with that. 
It just has to do with the fact that element access and asking for the length work fine through proxies, and all of the array methods only assume that. Date is an example that does not work through membranes, but there's nothing on Date that produces as colorful an example. So just understand this example as if the array methods were builtins that assumed array instances.

-CP: Well Mark, I think disagree on this one because really all these intrinsics, they all- obviously Array is special, but all the intrinsic that mostly using internal slots and accessing internet has lots to do operations the objects that are given to them and so on and they work just fine across Realms. So even if the thing is a date, you will still be able to use intrinsics for the other realms to access and do operations on them. The problem is when the intrinsic is actually creating a new object and returning that object to the caller or throwing an error in the process
+CP: Well Mark, I think I disagree on this one, because really all these intrinsics - obviously Array is special, but all the intrinsics that mostly use internal slots, and access internal slots to do operations on the objects that are given to them, work just fine across realms. So even if the thing is a Date, you will still be able to use intrinsics from the other realm to access it and do operations on it. The problem is when the intrinsic is actually creating a new object and returning that object to the caller, or throwing an error in the process.

MM: So it is the case that those would work with direct realm-to-realm object access, but they would not work through a membrane. That is what is misleading about this example: if the separation was that all the realms were only reachable through callable boundaries, and were further insulated by membranes, this example would still work through a membrane.
@@ -458,11 +453,11 @@ MM: If it was, for example, Promises is a good example where the abstractions bo GCL: I’m curious if There's a way to allocate an array buffer on one side of the realm boundary and then access the data from that array buffer, On the other data, I'm sorry on the other side without going through the whole membrane proxy thing for each individual byte access. -JWK: I think the answer is no in the current spec but since hosts can provide their own additional global functions on the realm's global objects, I think the host can provide some mechanism to directly share array buffers between two realms. +JWK: I think the answer is no in the current spec but since hosts can provide their own additional global functions on the realm's global objects, I think the host can provide some mechanism to directly share array buffers between two realms. -LEO: Okay, sorry. go ahead. Yeah, I'm not assuming what will be done in host but yes it's it's true that there is no functionality today in this Epi to cooperate with any any sort of shared buffers crossrealms the day Outside +LEO: Okay, sorry. go ahead. Yeah, I'm not assuming what will be done in host but yes it's it's true that there is no functionality today in this Epi to cooperate with any any sort of shared buffers crossrealms the day Outside -GCL: the data doesn't need to be accessible on both sides at once, but it should be possible. But that the question I'm asking is if it can be moved from one side to the other without byte by moving it through the membrane because that would be very swell. +GCL: the data doesn't need to be accessible on both sides at once, but it should be possible. But that the question I'm asking is if it can be moved from one side to the other without byte by moving it through the membrane because that would be very swell. CP: so, we talk about something similar with records on tuples when we were saying, well one, we get those. 
Obviously those are shared between the two realms because they don't carry any identity. So that's one thing to consider. The other thing is that, nothing prevents us from in the future adding more wrapping mechanisms. We have only the function wrapper right now, and if you try to pass something else, that is an update with throwing errors because the current spec, but it opens the door for in the future allows certain objects to be passed, if we can Define the semantics of what it means to ensure between the two Realms and the corresponding identities that cannot be really easily used in those structures, so we have to figure something about that. If we can, if we want to share the same byte basically because it is in the same process, anyways. @@ -472,11 +467,11 @@ CP: We haven't explored that. We've focused on the basics knowing that with the GCL: Okay, that's interesting. I guess - yeah, I guess it'll just have to be thought about separately. -CP: Again, you will go from throwing an error to allowing that object to be passed around, but you have to explore that at some point. +CP: Again, you will go from throwing an error to allowing that object to be passed around, but you have to explore that at some point. USA: That's it. Surprisingly, there is nothing else on the queue. -LEO: We've had the topics from Daniel from the last meeting that I still want to give space for them to present. But I at this that we were discussing these constraints I wonder if it's time again that we can ask for For the stage advancement, +LEO: We've had the topics from Daniel from the last meeting that I still want to give space for them to present. 
But given that we were discussing these constraints, I wonder if it's time that we can ask again for the stage advancement.

USA: I guess you could ask again, and after that we could see if anyone would like to talk about it.

@@ -488,35 +483,35 @@

CP: Can you be more specific about what those use cases are that you feel this is not addressing?

JHD: Your example certainly addresses a big class of the use cases I have, which is wanting to reduce the footprint of what I have to cache as first-run code, such that I can then write code that will run robustly later despite future modification of globalThis and the associated things. I hear the feedback that it's difficult to write that example correctly, but it's entirely possible to write it correctly. The wrapper that I would write would construct a realm for myself and lock it down, maybe even with SES, but it would lock it down in some ways; I could even bind functions and wrap them and switch the prototypes and so on. There are lots of ways it can be constructed so that, for my purposes, it will be correct. I'm not claiming that's easy.

-CP: but I'm a journey probably not economically viable but let your it's yeah I mean you're gonna.
+CP: But I'm arguing it's probably not economically viable - but it's your call.

MM: Yeah. Thanks, I want to address exactly the issue you're raising. There is an existing pattern that does not solve the problem, which I'll explain in a moment. But my point to you is: under the circumstances in which that pattern does not solve the problem, I believe your scenario equally does not solve the problem. Okay.
The pattern that many projects use, including the SES shim itself, is that it assumes it's running first in its realm - that the realm has not yet been corrupted at the time that it is running its module initialization. So during module initialization, it grabs the things that need to remain, which it's going to then apply later. The SES shim actually doesn't need to be that aggressive about this for security reasons, but it's aggressive about it anyway, for shim fidelity with the spec: the spec specifies that such-and-such uses an internal method, not the current binding, and the only way the shim can emulate that is to grab the original binding before anything gets corrupted, so it can continue to use the original binding. So that's a common practice; many systems have used it for years. The counterargument is: well, what if it doesn't run first? And I don't see another counterargument to it...

-JHD: that's that's not the problem I'm trying to solve, okay. So I think it is impossible to avoid having that problem. Once once you don't run first, all bets off.
+JHD: That's not the problem I'm trying to solve, okay. I think it is impossible to avoid having that problem. Once you don't run first, all bets are off.

-MM: Good.
+MM: Good.

CP: Even the Realms can...

-JHD: My concern is about reducing the footprint and scope of what I have to cache or hold onto in that first-run code so that it's all available later, and the original Realms gives me one thing I have to grab. Currently, I have a library that has 15 million downloads a week that I use for all my shims, that grabs everything and I have to explicitly, add each thing I need, and then all of the shims have to load them all. It's a convenience and kind of scalability issue whereas, with the realm approach, I can do it "one and done" to some degree.
+JHD: My concern is about reducing the footprint and scope of what I have to cache or hold onto in that first-run code so that it's all available later, and the original Realms approach gives me just one thing I have to grab. Currently, I have a library that has 15 million downloads a week that I use for all my shims, that grabs everything, and I have to explicitly add each thing I need, and then all of the shims have to load them all. It's a convenience and kind of scalability issue, whereas with the realm approach I can do it "one and done" to some degree.

CP: Well, one thing that we want to mention about this particular thing, Jordan, is that we have done some research around this as well, in terms of performance. If you grab that global object and you use dot notation to access the things that you want when you need them, you're going to pay a penalty as well. So that's at least another reason why we cache everything, because...

JHD: I am 100% unconcerned with performance until I have 100% ensured correctness. I will be happy to cross that bridge when I come to it.

-MM: So so inside the SES shim we have module called Commons. Commons, that grabs a whole of stuff and then re-exports it. So that under re-exports it, under the expected name. So, then you get the original by importing it from Commons rather than just using it as a global. Our Commons does not have full coverage of everything.
You might need it just covers the stuff we do need, but wouldn't one module that covers everything you might need that all of your other shims could then import from satisfy your desire just as well.

JHD: I'd have to think - I'm not a hundred percent clear. The only thing that works is some sort of object that other people can't mess with, through which I can reach the built-ins, which are usually all built-in functions that I need to be able to invoke. The specific thing that makes this especially difficult is `Function.prototype.call`, `apply`, and `bind` - I have to jump through a bunch of hoops to call-bind everything in advance, because I can't rely on those being available, whereas I could rely on them being in the other realm as long as I protected access to function prototypes.

-MM: So I know exactly the call bind problem you're talking about. Back in the es5 days, I showed a really complex pattern for doing it safely in ES5. Thankfully with starting with es6 with reflect.apply and with a triple dot, we've got a the un-curry, this abstraction is very, very straightforward. So, what what our Commons thing does is for everything, where the the original functionality is in a `this` sensitive method, The thing that we grab and re-export is un-curry `this` of Method.
+MM: So I know exactly the call-bind problem you're talking about. Back in the ES5 days, I showed a really complex pattern for doing it safely in ES5. Thankfully, starting with ES6, with `Reflect.apply` and with `...` (rest/spread), the uncurry-this abstraction is very, very straightforward. So what our Commons thing does is: for everything where the original functionality is in a `this`-sensitive method, the thing that we grab and re-export is the uncurry-`this` of the method.
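The uncurry-this pattern MM describes can be sketched in a few lines (a minimal sketch; the names `uncurryThis`, `arraySlice`, and `stringIndexOf` are illustrative, not the actual SES commons module):

```javascript
// Capture Reflect.apply once, before anything can patch it.
const { apply } = Reflect;

// Turn a this-sensitive method into a plain function that takes the
// receiver as its first argument.
const uncurryThis = (method) => (receiver, ...args) =>
  apply(method, receiver, args);

// Grabbed at module initialization, before any code can patch prototypes:
const arraySlice = uncurryThis(Array.prototype.slice);
const stringIndexOf = uncurryThis(String.prototype.indexOf);

// Later callers never touch the (possibly corrupted) prototypes:
arraySlice([1, 2, 3, 4], 1, 3); // → [2, 3]
stringIndexOf('hello', 'll');   // → 2
```

The design point is that only `Reflect.apply` and the method objects themselves need to be captured early; call sites then work even if `Function.prototype.call` or the prototypes are later replaced.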
CP: We have talked about this multiple times; obviously many people are doing it, and we do it as well. Sometimes we talk about having the ability to simply, at any given time in a realm, access all the intrinsics. You don't need to create a realm to get the intrinsics of the realm that you're running on - that's better than creating a realm, where you get all the other realm's intrinsics, which are new, rather than your own intrinsics. In a previous iteration of the Realms proposal we had a getter called `intrinsics` that returned an object containing all the intrinsics specified in 262, and those intrinsics could be accessed by the `%`-delimited spec name, which wasn't nice. But we could explore some of that, not tied to Realms, because you want to get them from the realm that you are in; you don't care about creating a new realm. I believe that's a very elegant solution that we can introduce, and it means that anyone can use it.

-JHD: You basically just described my `get-intrinsics` implementation. So yeah, I mean, you're right, I don't need all of Realms for that use case. I need something for that use case, and Realms would have solved it - all be it in a way that requires careful work to do correctly.
+JHD: You basically just described my `get-intrinsics` implementation. So yeah, I mean, you're right, I don't need all of Realms for that use case. I need something for that use case, and Realms would have solved it - albeit in a way that requires careful work to do correctly.

-MM: So would you would you agree that something that gave you all of the original intrinsics from your own realm? When I say original, I don't mean unfortunately Which not I mean of course.
+MM: So would you agree to something that gave you all of the original intrinsics from your own realm? When I say original - well, let me be careful about what I mean by original.
-JHD: I mean only what is available at the moment that I've first run my code.
+JHD: I mean only what is available at the moment that I've first run my code.

MM: Good. Thank you. Absolutely. So would you agree that a get-originals in that sense - because it gives you the originals from your own realm - gives you everything that you're asking for, without the hazards that Caridy pointed out with his examples? So it's actually a much more robust form of those intrinsics for you to use.

@@ -524,9 +519,9 @@

JHD: Yes.

MM: Great!

-CP: And that could be also if you - if you don't know where to put it, I believe we should put it in the realm constructor.
+CP: And if you don't know where to put it, I believe we should put it on the Realm constructor.

-USA: We have some things on the queue just to give some status. There's been some chat the DC 39, delegates channel, one of the things that I caffeine has mentioned And that I think it's important if Kevin wants to bring it here but, also met you has some that, I think it's interesting for these days.
+USA: We have some things on the queue, just to give some status. There's been some chat in the TC39 delegates channel; one of the things that KG has mentioned I think is important, if Kevin wants to bring it here, but MAH also has something that I think is interesting for this.

MAH: Yeah, I think it's a similar idea. It seems that what Jordan is trying to do is get to the original intrinsics, and there are multiple options. One is an imperative API that you can call, which is what exists right now. Another one could be to re-explore the standard-modules idea, where you would be able to access those original intrinsics by importing them.

@@ -536,35 +531,35 @@

MAH: So, you want them not only in

JHD: Just like globals, I'd want to be able to synchronously get them everywhere in JavaScript.
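The first-run caching pattern being discussed can be sketched as follows (the `originals` object's name and shape are illustrative; a built-in `getIntrinsics()` as contemplated here does not exist, so library code builds the equivalent by hand during module initialization, before anything can be patched):

```javascript
// Grab the originals at first run, before hostile or careless code can
// replace them. A hypothetical getIntrinsics() would return something
// like this object.
const originals = {
  Array, Object, TypeError,
  arraySlice: Array.prototype.slice,
  reflectApply: Reflect.apply,
  isArray: Array.isArray,
};
Object.freeze(originals);

// Even if later code overwrites the global, the cached binding survives:
const realIsArray = originals.isArray;
globalThis.Array = { isArray: () => 'lies' }; // hostile patching
realIsArray([1, 2, 3]); // → true, uses the original
```

This is exactly the "grab everything explicitly" chore JHD describes; a synchronous, globally available query for the originals would replace the hand-maintained list.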
-MAH: Then I suppose our Global intrinsic a global imperative call is the only only way
+MAH: Then I suppose a global imperative get-intrinsics call is the only way.

CP: For me, it's just a matter of finding where to put that function, and then we can probably dig out some of the - well, you already have a proposal, I think, somewhere, so we can explore that. But again, it might have some intersection semantics with the Realms proposal if it happens to be placed on the Realm constructor. Other than that, it's just a feature that can be explored as a separate proposal.

-MAH: as I just posted in the queue, there has actually been other use cases for this, which is also accessing hidden intrinsics to be able to go and lock them down and in lockdown API.
+MAH: As I just posted in the queue, there have actually been other use cases for this, such as accessing hidden intrinsics to be able to go and lock them down in a lockdown API.

CP: Right. Those are very, very difficult to get your hands on, and very hazardous. And I think Mark has some documents around how to get some of those. But yeah, this is definitely a feature that we have been talking about for quite some time.

MM: Yeah. The shim grabs all of them, and has to grab all of them, because its security depends on freezing all of them. And that means that there's always the hazard that if TC39 introduces some new syntax from which a new intrinsic is somehow reachable, there's no way that old SES apps can stay secure in the face of those additions. But if there was a get-intrinsics query that would give you all of the intrinsics - all of the primordials, whether they're reachable by name, navigation, or syntax, or whatever - then we could reliably freeze them. Also, there's a lot of good reason to provide this query anyway.

-JHD: So, yeah, I think the other challenge, related to what we just talked about.
That's been talked about in Matrix Is that like, as you said, some of the intrinsics are not reachable off the global. So it's actually not trivial to just Loop over the global and grab everything. So, yeah, I think I agree that that what has been lightly sketched out some sort of function that I can cache and then call that will get you know the previous code could have replaced. But that will get All the originals at that time is, would solve my use case without direct realm object access?
+JHD: So, yeah, I think the other challenge, related to what we just talked about - and that's been talked about in Matrix - is that, as you said, some of the intrinsics are not reachable off the global. So it's actually not trivial to just loop over the global and grab everything. So, yeah, I think I agree that what has been lightly sketched out - some sort of function that I can cache and then call, that will get all the originals at that time, even ones that previous code could have replaced - would solve my use case without direct realm object access.

-MM: For sure, I would solve it better.
+MM: For sure, it would solve it better.

JHD: Yeah, I would agree with that, yes. If so, then what is the suggestion based on that?

-MM: We've identified a need with some concrete ideas of possible API approaches for satisfying the need, I think we've got something that qualifies very well for a new stage one proposal.
+MM: We've identified a need, with some concrete ideas of possible API approaches for satisfying it. I think we've got something that qualifies very well for a new stage one proposal.
I think we should address that need there and we should realize that since the need can be addressed and that there's nothing about the realm proposal in its current state that impedes that in any way that we should allow the Realms proposal in its current state to proceed to stage 3. JHD: so, the queue is empty. I think that seems a reasonable path forward under one - "condition" is the wrong word because that implies leverage but under one circumstance, which is that the "direct object access for Realms" path does not appear to me that there will ever that will ever be allowed unless the current membership of TC39 completely turns over. It would be kind of weird to ask for consensus on for stage 1 on a proposal or a problem that nothing was written up and put on the agenda before right now. So it'd be nice if that could be granted, but that alone isn't sufficient because the real piece is, I would need to know - in the same way as when realms went to stage 2, it was clearly telegraphed that there were these potential blocking concerns for the direct object access approach, right? Is there anyone in the room who has any reason to believe that such a proposal modulo, you know, dealing with various sorts of concerns that proposals run across, but that such a proposal would, would not have a path toward stage 4. Does that make sense? What I'm asking, I'm trying to figure out if it's worth me investing my time. MM: Write the proposal, you're referring to is get in transits proposal. We Sure, yes. Okay, Good question. -USA: Is it something that you'd like a temperature check on. +USA: Is it something that you'd like a temperature check on. 
JHD: I mean, a temperature check seems nice, I think specifically if it would, even though everyone is always allowed at any time to say, "I have an objection" or "I have a constraint, even though I never mentioned it before", it would feel really bad if I put in time on a proposal like this and then came back in six months, a year, two months, whatever, and was surprised with a sudden constraint. It's happened to me once before, and it is not a fun thing to do and it's very demotivating. So I really just want to make sure that if there's any possibility that any one, could guess that, that might happen that I could get some hints about it now so that I know whether it's worth putting in my time, this is not really a technical or procedural question, I'm sort of asking you a personal question to try to make sure that I'm not about to waste my time. -CP: I don't honestly, I don't see any problem with that moving forward, we have plenty of experience on that, and the only thing that I can offer, there is collaboration. So we can put some effort as well to try to push for it, because this is the reality that we are on when you write libraries right? You have to cache all these things. Otherwise you're just going to be trouble and SES of as well, does that? So I think it's sufficient manpower to work toward. These are such as it's a proposal. +CP: I don't honestly, I don't see any problem with that moving forward, we have plenty of experience on that, and the only thing that I can offer, there is collaboration. So we can put some effort as well to try to push for it, because this is the reality that we are on when you write libraries right? You have to cache all these things. Otherwise you're just going to be trouble and SES of as well, does that? So I think it's sufficient manpower to work toward. These are such as it's a proposal. -LEO: From my reading as well. 
I think you re very welcomed at the size of the group to continue a lot of discussion for this and making sure like, we provide feedback and all the work that we need to, to move this forward because I think Mark would be interested. I am assuming Mark who can confirm that right now.

+LEO: From my reading as well, I think you're very welcome, on the side of the group, to continue a lot of discussion on this, and to make sure we provide feedback and do all the work that we need to move this forward, because I think Mark would be interested - I am assuming; Mark can confirm that right now.

MM: Yeah. So, absolutely, definitely interested in collaborating on this proposal and seeing it move forward. We cannot write an SES that stays secure into the future without something like this.

@@ -572,79 +567,75 @@ USA: one question from Greg, from the chat is Jordan are you looking for impleme

JHD: Yeah - in particular, given the friction that Realms has had where implementers are concerned, I particularly want that. Thank you, Greg.

+LEO: Yeah, Jordan. I just think, as one understanding for some people here: I think there is interest, but silence from many doesn't mean everyone is on board with this all the way to stage 4, of course. For you to go through the stage process, you have our support, but we cannot offer a guarantee.
+CP: Yeah, I want to mention a couple of things, because there is some prior art on this from Dave Herman's days when he was working on realms with me. The name doesn't seem like a problem: we went from a string value with the percent signs to a simple string name in camel case at the time, I believe. And I don't see the same problem that implementers are seeing where global names need to be added onto the global object, because in this particular case we're referring to 262 APIs that we are exposing via a different namespace object - from the previous discussion, I believe it will probably be a kind of namespace object. So, given that you have access to all these other things there, we need to discuss whether we need to do the `.call` on them, whether we can use them somehow with `Reflect` helpers or the like, or whether we are going to eliminate the hassle with `.call`. Those are things to discuss, but I don't see any potential issues - implementers will obviously weigh in - and I don't see anything from the other discussions that we have had about Realms that would be problematic, because you already have access to all of that; today you have access to all of that. The only potential thing that I believe we could discuss with implementers is the concern that Shu and Domenic and some other folks mentioned about putting in the developer's mind that there is a separation between 262 and HTML and the web platform in general, because in that API you probably will only have things that are in 262. That might seem like a problem on the global object itself, but in this case, I don't think it's a problem.
-CP: Yeah, I want to mention a couple of things because there are some prior art on these from Dave Herman's days when he was working on realms with me, the name doesn't seems like a problem we went from A string value with the percentage to string simple name, in camel case, at the time, I believe I don't see the same problem that implementers are scene where they Global names that needs to be stopped the end, the global object. Because in this particular case, where reference to 262 API that we are exposing via a different name space object. It will become calling. A mysterious object I believe from the previous discussion will be probably kind of a name is space object kind of thing. So, in the sense that you have access to all these other things there, we need to discuss if we need to do the `.call` on, then we can use them somehow with reflect our callers or light or are we going to eliminate or a hassle with a `.call`? Those are things that but I don't see any potential issues are not implemented and implemented obviously but I don't see for another discussion that we have about realm that would be problematic because you already have access to all of that. Anyways, you today you have access to all of that, the only potential things that I believe we could discuss with implementers. is is the concern that Shu and Domenic and some other folks mentioned about putting in the developer's mind that there is a separation between 262 and HTML and the web platform in general. Because in that API, you probably won't be have things that are in 262 but doesn't seems like a problem to me on the global object itself. But in this case, I don't think it's a problem. - -SYG: we've been talking about this in Matrix and make sure it's and I keep getting confused. I want to say it. Try to be confuse myself and get clear on what the getOriginals thing that Jordan is talking about. Here is the concrete example, I Do not modify the object.prototype. 
I don't give it a different value. I don't think I can. Anyway, I add a new property - or rather, I override one of the methods with my own. In this getOriginals API, when I say, getOriginal object dot prototype.%% does that. me the not only the original object, but the original object pre-modification.

+SYG: We've been talking about this in Matrix and I keep getting confused, so I want to say it out loud to try to un-confuse myself and get clear on what the getOriginals thing that Jordan is talking about is. Here is the concrete example: I do not modify `Object.prototype` - I don't give it a different value; I don't think I can, anyway. I add a new property - or rather, I override one of the methods with my own. In this getOriginals API, when I say getOriginal `%Object.prototype%`, does that give me not only the original object, but the original object pre-modification?

JHD: No - if somebody has simply mutated the object and I'm asking for the object, I want the object by identity, so it wouldn't make any sense to me to have, like, a snapshot of it. It's more akin to: if I cache getIntrinsic, and then later somebody says `Math.PI = 3`, and I get the `Math.PI` intrinsic, it will give me the value of pi.

SYG: Yes, I think that is unproblematic. What is problematic, for memory constraints especially, is if we would have to give you back not the original object with the same identity, but the old, actual original object pre-modification - in which case, you know, that directly contradicts the need for identity.
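The identity-not-snapshot semantics being converged on here can be modeled with a toy userland `getIntrinsic` (purely illustrative - the proposal had no spec text at this point, and the function name and `%…%` lookup format are assumptions borrowed from spec notation):

```javascript
// Toy model of the semantics under discussion: capture *references* to
// intrinsics at startup. No copy is made, so later property mutations stay
// visible, but replacing a property doesn't affect the cached original.
const intrinsics = new Map([
  ['%Object.prototype%', Object.prototype],
  ['%Object.prototype.toString%', Object.prototype.toString],
]);
const getIntrinsic = (name) => intrinsics.get(name);

// Identity: the very same live object, not a pre-modification snapshot.
console.log(getIntrinsic('%Object.prototype%') === Object.prototype); // true

// Mutations on the object remain visible (no snapshot is taken):
Object.prototype.foo = 42;
console.log(getIntrinsic('%Object.prototype%').foo); // 42
delete Object.prototype.foo;

// But overriding a method later doesn't change the cached original:
const originalToString = getIntrinsic('%Object.prototype.toString%');
Object.prototype.toString = function () { return 'patched'; };
console.log(originalToString.call({})); // "[object Object]"
Object.prototype.toString = originalToString; // restore
```

Note that what a data property like `Math.PI` should report was left somewhat open in the discussion; the sketch above only models object identity for objects and functions.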
-JHD: If change the object, let's say the `Object.prototype` object to something else that's empty, and then I asked for `Object.prototype.toString`, I need to get the "original" `toString` function.

+JHD: If I change the object - let's say I change the `Object.prototype` object to something else that's empty - and then I ask for `Object.prototype.toString`, I need to get the "original" `toString` function.

SYG: Yeah, I think that's unproblematic.

MM: I think we need to be very clear here. Jordan's way of stating it using dot access invites confusion.

-
-SYG: let me restate how I understand it to see if it makes sense. Supposed that getOriginals for the sake of just for the sake of discussion takes a string. string is in the format of how we format intrinsics in the spec, which is % and then what looks like dot access. But really, that's just a convention, and then close by a percent sign. Okay, what I understand Jordan getting saying is that if you type getOriginals("%object.prototype.keys"), that will give you the function and I think Is unproblematic. I was confirming. That what is problematic is that if you want the type `getOriginals("%object.prototype.keys").slice` where the `.slice `is an actual property access but that would be the original slice. If that was the guarantee that you want that is problem, but it sounds like that is not the guarantee you want.

+SYG: Let me restate how I understand it, to see if it makes sense. Suppose that getOriginals, just for the sake of discussion, takes a string. The string is in the format of how we format intrinsics in the spec, which is % and then what looks like dot access - but really, that's just a convention - and then closed by a percent sign. Okay, what I understand Jordan to be saying is that if you type getOriginals("%object.prototype.keys"), that will give you the function, and I think that is unproblematic. I was confirming.
What is problematic is if you want to type `getOriginals("%object.prototype.keys").slice`, where the `.slice` is an actual property access, but that would give you the original slice. If that was the guarantee you wanted, that is a problem - but it sounds like that is not the guarantee you want.

JHD: Awesome.

CP: Yeah, I can confirm. And there is only one detail, Shu, that maybe is important, which is the fact that we have what we call undeniable intrinsics - those that can be created from syntax. If you modify the value of `Array.prototype` and you try to access the Array prototype via syntax, you still get the one, the intrinsic one. So it's very similar to what you just explained; what I'm trying to say is that even for undeniables the same applies: you will get the reference to the object that is defined in 262 - that is, an undeniable intrinsic object.

-USA: So now that that's out of the way you can Greg’s question, which is from stage 3. I guess I really do want to ask for that. Yes, sure. That's why we are here. Can we get a stage three on the realms proposal?

+USA: So now that that's out of the way, we can get back to Greg’s question, which is about stage 3. I guess we really do want to ask for that - yes, sure, that's why we are here. Can we get stage 3 on the Realms proposal?

JHD: That's fine with me.

USA: Does anyone want to explicitly support? Someone actually wanted to express support - thank you.

-
AKI: Looks like JWK is on the queue to say so.

-USA: Jack, do you want to see that load?
+USA: Jack, do you want to say that out loud?

JWK: Yeah, I'm supporting stage 3. We have been waiting for this for too long.

-
CP: Tell me about it.

### Conclusion/Resolution
+

- Stage 3
- Bikeshed the name "Realm" for one more meeting.
- Engines (at least V8, Node, FF) agree **not** to ship unflagged until after bikeshedding
-

## `getOriginals` for stage 1
+
-Presenter: Jordan Harband (JHD)
+Presenter: Jordan Harband (JHD)

-USA: Yeah.
And next up we have Jordan. Jordan, do you want to ask for stage 1? Yeah.

JHD: So even though I don't yet have a repository - we sort of sketched it out here - I would be happy to make one tomorrow after I get some sleep. Is there any reason this could not be stage 1 already?

SYG: So, I don't want to slow down your work, so let me hear from you: do you think that if we wait for an official explainer before granting stage 1, that would slow down velocity for you?

JHD: No, it's just a nice signal. I think stage 1 is a problem statement, right? So the specifics of the solution don't need to be explained or understood just yet. I'm happy to write up that explainer; it's more that, if the problem is unclear to someone such that they are not willing to, or not comfortable, making it stage 1 without that explainer, then obviously it would need to wait. But there's a possibility we don't meet tomorrow; otherwise I would just say let's revisit it tomorrow.

GC: I'd say, given the confusion surrounding at least the discussion on this API in Matrix over the last ten minutes.
I think it would be good to have some things written down concretely that we can understand before discussing this more.

CP: yeah, you're on a weekend weekend, trailblaze these and get it next meeting for stage 1 and 2 maybe. I can't even get all the pieces in.

WH: I'm also a bit unclear as to what is going into stage 1, and I would note that there is a deadline to post documents a number of days in advance of a meeting so we can review them.

-JHD: Sure. And I mean that and the lack of materials are both valid reasons to procedurally reject stage 1 - but that doesn't mean you have to reject it. It sounds like there's at least a few people that aren't comfortable with stage 1 yet.

+JHD: Sure. And I mean, that and the lack of materials are both valid reasons to procedurally reject stage 1 - but that doesn't mean you have to reject it. It sounds like there are at least a few people that aren't comfortable with stage 1 yet.

-WH: Yeah, but I also want to mention that I do like where you're going with this.

+WH: Yeah, but I also want to mention that I do like where you're going with this.

-JWK: So I didn't catch up with the conversation before. What is the use case? Why do we need an API for it?

+JWK: So I didn't catch up with the conversation before. What is the use case? Why do we need an API for it?

JHD: So that people can write code that can't be trivially broken by someone later messing with the environment. Node core uses this pattern, because if I do `delete Function.prototype.call`, Node core just craps itself and dies immediately. All over Node core, they're trying to slowly patch this problem so that it's robust against user modification of the platform. And all of my npm packages are written in ways where, as long as they run first, before user code messes with anything, they are robust against user code messing with things later.
getIntrinsic would be a much more convenient and potentially more performant (although that's less of a concern for me) way to write this robust code.

-JWK:I guess that must be able to mock otherwise it might break visualization.

+JWK: I guess that must be mockable, otherwise it might break virtualization.

KG: Right. So the assumption is that you are running first, so that you can save off a copy of getOriginals that no one else will thereafter be in a position to modify.

@@ -658,7 +649,7 @@ JHD: Yes, it's a fair point that there is a small number of people that may util

JWK: One of the use cases I can imagine is that I can get the async function prototype without actually using an async function literal, which might not be able to ship to some old browsers.

-JHD: Could you rephrase that question? I'm not sure. I understood

+JHD: Could you rephrase that question? I'm not sure I understood.

JWK: Some intrinsics require some newer syntax to reach - for example, AsyncGeneratorPrototype requires an async generator literal to get. And if we ship that, it breaks on old browsers.

@@ -668,40 +659,39 @@ JWK: That's the only reason why I think getting intrinsics might be useful.

JHD: Thank you.

-MM: I want to give a piece of history as a very concrete and Vivid answer to Jack-works and to the current discussion there are hidden intrinsics that you can only reach inderect syntax. And by indirect means starting syntax more than the, the iterator prototypes you have to actually create an iterator and walk the Prototype chain. It's really quite messy. but the the history that I want to give is that the old SES, the one that we did at Google and we're using for to secure properties at Google. We got a responsible disclosure vulnerability report that in which the which was due to some browsers starting to ship async generators. and the there is no way for the old code to discover that there are new intrinsics that it needed to freeze.
So without something like this, I think genuinely impossible for a security system like the session to protect itself from the introduction of new hidden intrinsics. So I think that makes a very compelling case for something like this

+MM: I want to give a piece of history as a very concrete and vivid answer to Jack-works and to the current discussion. There are hidden intrinsics that you can only reach indirectly, via syntax - and by "indirectly" I mean more than the iterator prototypes: you have to actually create an iterator and walk the prototype chain. It's really quite messy. But the history that I want to give is that the old SES - the one that we did at Google and were using to secure properties at Google - got a responsible disclosure vulnerability report, which was due to some browsers starting to ship async generators, and there was no way for the old code to discover that there were new intrinsics that it needed to freeze. So without something like this, I think it is genuinely impossible for a security system like SES to protect itself from the introduction of new hidden intrinsics. So I think that makes a very compelling case for something like this.

-CZW: Yep. So I may just miss some of the contexts. I'm just wondering no proposal text to check at just wondering that does that implies a direct object access across realms or just grabbed the intrinsics from the current realm.

+CZW: Yep. So I may just be missing some of the context - there's no proposal text to check - I'm just wondering: does that imply direct object access across realms, or just grabbing the intrinsics from the current realm?

-JHD: Yeah, my assumption is it would be available within each realm and it would refer to the realm.

+JHD: Yeah, my assumption is it would be available within each realm, and it would refer to that realm.

JWK: If avoiding pollution is the main use case, why don't we introduce the SES lockdown directly?
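MM's "hidden intrinsics" point can be made concrete: some intrinsics have no global name and are reachable today only by evaluating syntax and walking prototype chains - which is exactly why a freezing/lockdown routine written before a new syntax feature shipped cannot discover the new intrinsics. A sketch of the dance involved:

```javascript
// Reaching "hidden" intrinsics requires creating values via syntax and
// walking prototype chains - the messy dance MM describes.

// %ArrayIteratorPrototype%: no global name; make an iterator to find it.
const ArrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());

// %AsyncGeneratorPrototype%: only reachable via an async generator literal -
// which is itself a syntax error in engines predating the feature, so older
// hardening code could not even mention it.
const AsyncGeneratorPrototype =
  Object.getPrototypeOf((async function* () {}).prototype);

console.log(typeof ArrayIteratorPrototype.next);  // "function"
console.log(typeof AsyncGeneratorPrototype.next); // "function"
```

A getIntrinsics-style API that enumerates intrinsics by name would let such code freeze even intrinsics it does not statically know about.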
-CP: It's not only locking down is being used extensively…

+CP: It's not only lockdown; it's being used extensively…

-JHD: to explicitly to not lock down intrinsic, to leave them like breakable. So how do I could be robust?

+JHD: …to explicitly not lock down intrinsics, to leave them breakable. So how could I be robust then?

MM: Well, let me answer for SES, since that was the direct question. SES does plan to propose a lockdown primitive, and the lockdown primitive would do this. So the SES example here is more about the fact that anything that needs to enumerate all the intrinsics, whether it knows about them statically or not, would benefit from something like this - or would suffer from not having it. SES is an example of that, but SES-as-a-proposal would not be the only example.

-CZW: does that mean calling the intrinsic get a copy of the get the same intrinsic, chat function.

+CZW: Does that mean calling it gets a copy, or gets the same intrinsic function?

-JWK: Each room has its different solutions X. So X So, X must be the current realms

+JWK: Each realm has its own intrinsics, so they must be the current realm's.

-CZW: I mean I mean the article get coloring, the Gathering tricks and made some modifications on it like that adding some property on or either. My not change either in intrinsic. how it works. But it but it can be adding some next say, observable properties only.

+CZW: I mean, if the code that runs first gets the intrinsics and makes some modifications on them - like adding some property - that may not change how the intrinsic works, but it can add some, say, observable properties.

-JHD: Yeah. I mean if someone sticks like a foo property, an object that prototype and I the object, a prototype intrinsic, the object, I get will have a foo property. that is totally fine. I'm not trying to get an original snapshot.
I'm just trying to be able to get the original functions primarily

+JHD: Yeah. I mean, if someone sticks a foo property on Object.prototype and I get the Object.prototype intrinsic, the object I get will have a foo property. That is totally fine. I'm not trying to get an original snapshot; I'm just trying to be able to get the original functions, primarily.

CZW: So people can change the intrinsics when they run first, and later code cannot get the original intrinsics of a function - they will be seeing the modified one, if they are not...

JHD: It is impossible, and should remain so, to protect against code that runs before you - but it is possible, and should be much easier to do, to protect against code that runs after you.

-USA: Do we have any consensus in this? No, Jordan. Do you think you'd like to come back to this next meeting?

+USA: Do we have any consensus on this? No? Jordan, do you think you'd like to come back to this next meeting?

JHD: Yeah, I think there's no consensus for stage 1 for it, but I haven't heard any obvious blockers for stage 1, except that I need to have materials prepared before the deadline. So with that understanding, I will prepare it and consider it "not a waste of my time", and I will plan to come back at the next meeting with that request. So thank you, everyone. I'm sorry to hijack the Realms discussion as much as I did.

-
## Realms (Continuation)

-LEO: I'm sorry too to change the directions. just think what you say it is. maybe 30 seconds, we have a little chat here. It's more informative and homework that I that to get interest from other delegates here. Shu you want to speak?
SYG: Something that has come up - kind of as a lower-priority item, given all the conversations over the years - is the name of the Realm itself, as a user-exposed thing. In Chrome at least there is some hesitancy to expose this using the name "Realm", for a somewhat, I admit, weak reason, which is that the concept of a realm has existed for a while: it's an important technical concept in 262, and it is used by HTML to mean a bunch of things, like state that is associated with the page - and notably user-created Realms are not going to be that kind of realm. So, over the years, one of the topics has been: maybe the user-exposed thing should get a different name. But as far as I know, nobody has seriously proposed any alternatives, so we've kind of just worked on the higher-priority things and stuck with "Realm". This is not a blocking concern for stage 3. I would like for the vendors who would like to start implementing to take this at stage 3, if we get stage 3 today. What I would like is buy-in from the other vendors that before any of us ships this thing, we actually have an honest bikeshedding discussion of what to name it. And, yeah, I guess that's it.

@@ -709,25 +699,26 @@ CP: Yeah, we yeah, we talked about that briefly in the past there. Uh, we're hop

SYG: So concretely, I'm asking, if representatives from Firefox, Safari, and Node are present - and I say Node because, suppose V8 implements it behind a flag - I'd request that Node not unflag it under the name Realm until we've had this buy-in discussion.

-GCL:Yeah, that's I can guarantee that we will not unflag it.

+GCL: Yeah, I can guarantee that we will not unflag it.

DE: I want to propose that we time-box this bikeshedding to one more meeting. There's only so much bikeshedding we can do. Let's try to focus our energy and come up with a final name by the next meeting.
-SYG: yeah, and it's difficult to get async participation. Ideally, I would like this to all just happen ond thread or something. But next meeting also cause because it sounds fine to me.

+SYG: Yeah, and it's difficult to get async participation. Ideally, I would like this to all just happen on a thread or something. But next meeting also sounds fine to me.

KG: Shu, presumably you would need to come back to committee and say: here is the name that we have chosen, we are affirming that name.

SYG: That's correct. What I want to happen async is the bikeshedding discussion and back-and-forth itself. Once that has happened with the people who participated in the async discussion, I would ideally like to then come back to committee - since this is a new name - and we discuss it further. I don't want to have a long-form bikeshedding discussion in committee. Are representatives of Safari and Firefox here?

-DE: I really don't think there's risk that this will just ship in like irreversibly in a month and a half. I don't agree. No food. I don't know if we need to do this. Listen you know hold up because we're going to this really soon.

+DE: I really don't think there's a risk that this will just ship, like, irreversibly in a month and a half. I don't know if we need to hold this up, because we're going to revisit this really soon.

-AKI: Either way, Ian says, SpiderMonkey can commit this.

+AKI: Either way, Ian says SpiderMonkey can commit to this.

USA: Okay then. So, this is already 4 minutes over the time box. In the interest of time, would anybody mind if we put off the HTML discussion until later and started with the Ecma proposal?

-DE: there's one thing that I wanted to say explicitly, which was that, you know, we try to be upfront when possible about sources of sponsorship.
And so the work that I did towards the Realms proposal and HTML integration was - I mean the work that's happened over the last couple of months has been sponsored by Salesforce with their contract with Igalia and we're working on both the realm specification and proxy performance and detached iframe debugging and hopefully in the future realm implementation. So just wanting to be explicit about that disclosure, so thanks.

+DE: There's one thing that I wanted to say explicitly, which is that, you know, we try to be upfront when possible about sources of sponsorship. And so the work that I did towards the Realms proposal and HTML integration - I mean, the work that's happened over the last couple of months - has been sponsored by Salesforce through their contract with Igalia, and we're working on the Realm specification, proxy performance, detached-iframe debugging, and hopefully in the future Realm implementation. So, just wanting to be explicit about that disclosure - thanks.

CP: Awesome. So yeah, we're trying to fund as much of all this work as possible.

### Conclusion/Resolution

-See earlier Realms item
+
+See earlier Realms item

diff --git a/meetings/2021-08/aug-31.md b/meetings/2021-08/aug-31.md
index 1dc806d3..8027a131 100644
--- a/meetings/2021-08/aug-31.md
+++ b/meetings/2021-08/aug-31.md
@@ -1,9 +1,10 @@
# 31 August, 2021 Meeting Notes
+
-----

**In-person attendees:** None

-**Remote attendees:**
+**Remote attendees:**

| Name | Abbreviation | Organization |
| -------------------- | -------------- | ------------------ |
| Robin Ricard | RRD | Bloomberg |

@@ -28,17 +29,17 @@

## Opening

-
AKI: Good morning, everyone. Welcome to New York, New York. It’s a hell of a town.
I think that’s where we are. Right? That’s where we are. In case you haven’t met me before, I’m Aki, I am co-chair along with Brian Terlson and Rob Palmer. We will be facilitating your day today.

-AKI: I’m just going to assume that everyone has signed the form because you’re here. If you have not, please do so. We use that information because it’s required by the Bylaws. It’s not optional. I ask that you all please take a moment to read the code of conduct. It’s available on the website, TC39.es.

+AKI: I’m just going to assume that everyone has signed the form because you’re here. If you have not, please do so. We use that information because it’s required by the Bylaws. It’s not optional. I ask that you all please take a moment to read the code of conduct. It’s available on the website, TC39.es.

-AKI: We expect you to behave in a manner that aligns with the code of conduct. So it’s probably a good idea to familiarize yourself with it.

+AKI: We expect you to behave in a manner that aligns with the code of conduct. So it’s probably a good idea to familiarize yourself with it.

AKI: Our communication tools are: TCQ, which you can find a link to in the reflector or on the schedule. If you are unfamiliar with how TCQ works, please send myself or one of the other co-chairs a message and we’ll talk you through it. We also have a chat on Matrix, which is the thing that is going to replace IRC. Apparently, I mean, finally, there’s a chance that something is going to replace IRC for the first time in 20 years.

AKI: You can find our [Matrix] Space which—think like Discord server, Slack team. It’s TC39. We have a bunch of channels. Obviously, there is a TC39 general channel. There is also a delegates channel. It is logged. It is public, but only registered delegates can speak. We also have Temporal Dead Zone, our backchannel. Feel free to join that and sass your sass; do not have any technical conversation there because it is not formally part of the technical committee.
-AKI: We have a hallway track because we haven’t seen each other’s faces in forever and it gives us a chance to chat. You can find it in Mozilla Hubs; if your computer is struggling with rendering, try setting it to 800 by 600. That really does make a difference. +AKI: We have a hallway track because we haven’t seen each other’s faces in forever and it gives us a chance to chat. You can find it in Mozilla Hubs; if your computer is struggling with rendering, try setting it to 800 by 600. That really does make a difference. AKI: Alright, next on to our IP policy, intellectual-property rights. The very-short version is: In order to participate in TC39, you have to represent an Ecma member as their delegate, or you have to be an invited expert, invited by the secretary-general, or you have to—Nope, that’s it. Just those. And if you’re an invited expert, you need to make sure you sign the RFTG agreement, the royalty-free task-group agreement. The basic concept just means you’re licensing the rights to your IP over to Ecma, so that ECMA can publish the standard at the end of the first quarter. @@ -46,30 +47,30 @@ AKI: Our next meeting will be in London. It’ll be four days. It’s in October AKI: Let’s get moving on to the boring housekeeping stuff. Motion to approve last meeting’s minutes—everyone’s seen the notes, right? Great. I’m going to take that as yes. Already had a chance to see the current agenda goodness? I hope so. All right, motion to adopt the current agenda. Great. - ## 262 Editor's Report + Presenter: Kevin Gibbons (KG) - [slides](https://docs.google.com/presentation/d/1Hu_fPWqQtKuXGvvifGM_v7nEfqDiwR9h1nRTCS9chE4/edit) -KG: (presents slides) +KG: (presents slides) AKI: Any questions? Queue is empty. - ## 402 Editor's Report + Presenter: USA USA: No update. - ## 404 Editor's Report + Presenter: Chip Morningstar (CM) CM: No update. 
-
## Mark `with` as legacy
+
Presenter: Jordan Harband (JHD)

- [PR](https://github.com/tc39/ecma262/pull/2441)

@@ -97,9 +98,11 @@ JHD: That should already exist. The word legacy is a link in the column next to

AKI: Sounds like we have consensus.

### Conclusion/Resolution
-* Consensus
+
+- Consensus

## Relative indexing .at() method for Stage 4
+
Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-relative-indexing-method)

@@ -114,15 +117,16 @@ SYG: Both Google and Apple folks, and maybe some Mozilla folks as well.

AKI: All right. I think we have consensus. Queue is empty. Great, great. Thank you very much, congratulations. Thank you both.

### Conclusion/Resolution
-* Consensus
+
+- Consensus

## Accessible Object hasOwnProperty for Stage 4
+
Presenter: Jamie Kyle (JK)

- [proposal](https://github.com/tc39/proposal-accessible-object-hasownproperty)
- [slides](https://docs.google.com/presentation/d/177vM52Cd6Dij-ta6vmw4Wi1sCKrzbCKjavSBpbdz9fM/edit?usp=sharing)

-
JK: This is accessible `hasOwnProperty`. Super-fast explainer: `Object.create(null)` makes `hasOwnProperty` kind of unreliable. To use it reliably you have to manually call `Object.prototype.hasOwnProperty.call()` on some object with a key, which is densely packed with concepts for beginners…just to check if a property is there. So `Object.hasOwn()`, it makes it simpler. The background there: a couple of libraries that have, like, millions of downloads are just dedicated to this check. We’re making `hasOwnProperty` more accessible. The spec is very simple. It is basically the same as `hasOwnProperty` except with steps one and two flipped. Previously `hasOwnProperty` had them flipped for legacy reasons. But it only changes when there are errors thrown.

@@ -136,28 +140,28 @@ AKI: Yeah. All right. Well, the queue is empty. So if you’re asking for consen

JK: And thanks to everyone who has helped out along the way and implemented it and such. Thank you.
-

### Conclusion/Resolution
-* Consensus
+
+- Consensus

## Class static initialization blocks for Stage 4
+
Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-class-static-block)

-
RBN: So, class static initialization blocks. We’ve been talking about this for a while. We currently have test262 tests written and merged. We have two implementations: one is in SpiderMonkey currently shipping behind a flag in Firefox 92 and intending to ship unflagged in Firefox 93. It’s also shipping in V8 checked as V8 94146 in the current public release for Chrome, and there is a signed off pull request for this feature in the ecma262 repo. Additionally, Babel has had this feature. It is planning to mark it as enabled by default as soon as we have a Stage 4 acceptance. So to go along with everybody who is quickly moving through, I would like to ask for consensus moving class static initialization blocks to Stage 4.

-AKI: Cool, the queue is empty. So if that’s a request for consensus, I do believe you have it. 
+AKI: Cool, the queue is empty. So if that’s a request for consensus, I do believe you have it.

RBN: In that case, thank you very much.

-
### Conclusion/Resolution
-* Consensus
+
+- Consensus

## Change Array by Copy
+
Presenter: Ashley Claymore (ACE)

- [proposal](https://github.com/tc39/proposal-change-array-by-copy)

@@ -189,7 +193,7 @@ ACE: There’s a little bit of a kind of awkwardness here that we realized while

ACE: We also have a `withAt` which again kind of has come over from the tuples proposal. It’s almost certainly not going to be that name—you know, we’ve now got `.at()` at Stage 4, which isn’t a mutating method. So when `withAt` came over from the Tuple proposal, it was more of a replacement for index assignment. So that one, again, is one that maybe we’ll drop or just completely change its name. It’s less clear where that one fits in.
-ACE: [slide 7] Why are we trying to do this, more than just having a kind of nice symmetry? So we think it is actually useful having these methods from an ergonomic point of view. Take `sort` right now. You can do things like spread the array or call `slice` to get a copy and then sort though. The downside of that is that, as soon as you spread, you’re saying, “I’m spreading into an *array*”. So it doesn’t work if you want that to be generic with tuples or `TypedArray`s. +ACE: [slide 7] Why are we trying to do this, more than just having a kind of nice symmetry? So we think it is actually useful having these methods from an ergonomic point of view. Take `sort` right now. You can do things like spread the array or call `slice` to get a copy and then sort though. The downside of that is that, as soon as you spread, you’re saying, “I’m spreading into an *array*”. So it doesn’t work if you want that to be generic with tuples or `TypedArray`s. ACE: You also—for things like assignment, you kind of can’t [use it] in a kind of a chained way, like you can with the methods. So having these as methods, even though you can do them already, we think is nice, because you have kind of a method chaining syntax. So you can immediately say what you’re trying to do, [instead of?] breaking things up across multiple statements. And, again, you have this advantage of it being a generic method lookup. @@ -227,9 +231,9 @@ ACE: Yeah. No, I agree with that one. So that was one that’s come across from PFC: In `Temporal`, `with` copies and mutates, and `to` transforms the type, like `toString`. So there’s a possibility for correspondence here. -ACE: I think that’s why we’ve currently got the `with` and the `to` as possible prefixes because there’s already some convention for those already. Whereas prefixes like `copy`, that would be a kind of a brand new kind of idea of naming to explore. As you say, I think the naming of these will be hard to get right. 
See, mutate as convention for something like `to`, as you say to is most commonly used to change the type of something and it’s only, I think it’s only “toUppercase” and “toLowerCase” where it’s not being used to change the type though. Potentially “with” is better for them, maybe “with” is left more confusing for people because we’re not actually combining it with another type; we’re combining it with an operation. It’s good to call out that the `Temporal` does have this `with` idea already.
+ACE: I think that’s why we’ve currently got the `with` and the `to` as possible prefixes, because there’s already some convention for those. Whereas prefixes like `copy`, that would be a kind of a brand new idea of naming to explore. As you say, I think the naming of these will be hard to get right. So, on the convention for something like `to`: as you say, `to` is most commonly used to change the type of something, and I think it’s only “toUpperCase” and “toLowerCase” where it’s not being used to change the type, though. Potentially “with” is better for them; maybe “with” is more confusing for people, because we’re not actually combining it with another type; we’re combining it with an operation. It’s good to call out that `Temporal` does have this `with` idea already.

-AKI: Okay. We are at the end of the queue. 
+AKI: Okay. We are at the end of the queue.

CCU: Yeah, the SpiderMonkey team has discussed the justification for this proposal. We believe it’s easily polyfillable and we have concerns about adding more things to the `Array` object. So we’ve just had discussions about the justification for this proposal.

@@ -305,7 +309,7 @@ SYG: I kind of support Kevin. I don’t think we need to have an exact list that

AKI: We won’t be able to come back to it. We have four hours too much content. So we’ll have to come back to this in October. Thank you very much, Ashley and Robin. And so, there we go. No, we did not [?] up.
-KG: I said I do not block. +KG: I said I do not block. AKI: You do not block, but we don’t have consensus. We don’t have time to call for it. Again. We have to say that this day to our time boxes. So we’ll come back and discuss this again in October. @@ -315,9 +319,10 @@ AKI: We don’t have consensus because we didn’t even get through the queue. A ### Conclusion/Resolution -* Consensus for Stage 2 +- Consensus for Stage 2 ## DurationFormat Update + Presenter: Ujjwal Sharma (USA) - [proposal](https://github.com/tc39/proposal-intl-duration-format) @@ -372,9 +377,11 @@ SFC: I just wanted to express support for this proposal—and thanks, USA, for w USA: Thank you, and thank you, everyone. All right. Thanks everyone. ### Conclusion/Resolution -* Was not seeking changes + +- Was not seeking changes ## Realms Renaming Bikeshedding Thread + Presenter: Leo Balter (LEO) - [GitHub issue](https://github.com/tc39/proposal-realms/issues/321#issuecomment-900523250) @@ -410,10 +417,14 @@ SYG: I would like to interject here. Sorry, I didn’t get on the queue, and the LEO: And just one more clarification. We are excluding even names that are private personal preferences for specific champions of this. Cool. This is not my main personal preference. I think this is the most pragmatic suggestion. Like my personal preference gives more challenges in the other aspects that I mentioned in the list. -AKI: All right, isn’t that the definition of compromise? Nobody’s truly happy. Do we have consensus? Sounds like a yes to me. ShadowRealms it is. +AKI: All right, isn’t that the definition of compromise? Nobody’s truly happy. Do we have consensus? Sounds like a yes to me. ShadowRealms it is. + ### Conclusion/Resolution -* Consensus for the ShadowRealm name + +- Consensus for the ShadowRealm name + ## Pipeline operator for Stage 2 + Presenter: Tab Atkins (TAB), J. S. 
Choi (JSC) - [proposal](https://github.com/js-choi/proposal-hack-pipes/) @@ -428,19 +439,19 @@ TAB: [slide 3] Which one JavaScript language gets…There’s a whole bunch of s TAB: The dev community is still pretty split around what they prefer, but there seems to be a pretty overwhelming consensus that people want a pipe of *some* kind. And for most people, it seems that the precise version that we go for isn’t as important as getting one of them out there at *all*. As I argued last time, if you remember from a couple of months ago all the pipeline variations are pretty close to each other. Any way you can do it in one, you can do in the other, with maybe a small bit of [difference in] syntax. It’s not very significant. So our hope is that this should work out for everybody. -TAB: [slide 4] As another reminder, the pipe operator in the State of JS 2020 survey was the number-four most-requested feature there, right behind static typing, better standard library, and the pattern-matching proposal, which I think we’re going to be talking about here as well. So this is clearly a pretty important thing for all of JavaScript. +TAB: [slide 4] As another reminder, the pipe operator in the State of JS 2020 survey was the number-four most-requested feature there, right behind static typing, better standard library, and the pattern-matching proposal, which I think we’re going to be talking about here as well. So this is clearly a pretty important thing for all of JavaScript. -TAB: [slide 5] So the explainer was written by JSC: link over here. There’s a ton of examples in the explainer. It’s very, very well written with stuff taken from real world code, not constructed examples, showing how they can be simplified and made more easy to read by using pipeline. He’s also put together a full draft spec text. It’s still under flux and will be going through precise details, but it looks pretty good for now. I’ll get into some of the problems. As we have with that in a little bit. 
+TAB: [slide 5] So the explainer was written by JSC: link over here. There’s a ton of examples in the explainer. It’s very, very well written with stuff taken from real world code, not constructed examples, showing how they can be simplified and made more easy to read by using pipeline. He’s also put together a full draft spec text. It’s still in flux and we’re still going through precise details, but it looks pretty good for now. I’ll get into some of the problems we have with that in a little bit.

TAB: [slide 6] So I’m not going to go into a big thing for all this, but I want to add a couple of points real quick just to head off any potential basic questions, and I’m happy to hit anything more advanced in the queue afterwards. So the most basic issue is: is nesting really that big of a problem? Do we really need to linearize code; is this important enough to justify a new operator? Obviously I think the answer is yes. I’ve got a couple of examples here showing off why.

TAB: This is the first one. Obviously it is a constructed example of function chaining and function nesting, but it’s not an unrealistic constructed example. I just needed it to be short enough to show off enough structure. I find this very difficult to read: I cannot tell at a quick glance what the `.method` call is being called on; I’d have to count parentheses to be able to tell that the `foo` function has not yet closed, and that the method call must be on the result of the `bar` function. That’s hard to figure out. This sort of thing happens.

-TAB: [slide 7] All the code where you could structure things carefully to make sure it is readable—while if you can pull it apart into a pipeline, everything becomes as far as I can tell. Immediately clear, you know: you’re going to start with an `x` value, pass it to `baz`, extract the first item from it, pass that over to `bar` and call `method` on it, and finally pass it to `foo`. Code flows, nice and linear.
This was [common?] under jQuery methods, you could replace that pipe with a period `.` and you’d have exactly this code basically and people really liked that sort of code. It’s very readable for a reason.
+TAB: [slide 7] All the code where you could structure things carefully to make sure it is readable—whereas if you can pull it apart into a pipeline, everything becomes, as far as I can tell, immediately clear, you know: you’re going to start with an `x` value, pass it to `baz`, extract the first item from it, pass that over to `bar` and call `method` on it, and finally pass it to `foo`. Code flows, nice and linear. This was [common?] under jQuery methods: you could replace that pipe with a period `.` and you’d have exactly this code basically, and people really liked that sort of code. It’s very readable for a reason.

TAB: Second. This is a realistic example of async. The Fetch API returns promises. And whenever you get any of the values from the response body, they also return promises because they might be a potentially large amount of text to decode. So, to actually get at something, the value of something at a particular URL, you’ve got to double stack your `await`s. Unfortunately, this involves some extra parentheses and stacking up the beginning of expressions, because of the particular operators we chose in JavaScript. Rust has a slightly easier time of it, for example, because their `await` looks like an attribute access.

-TAB: [slide 8] So it just chains more easily…and we can get similar benefits using pipeline, it lets us remove the extra stacking here. You can just deal with each of the operations one by one. With each single `await` where it needs to be. And of course, you can slice these operations up however you want, if you really wanted the `fetch` to show up first, because that’s an important thing. You want to dedicate some brain share [?] to right at the beginning.
beginning, you can just pull that `await` off into its own pipeline chunk and deal with the `fetch` on its own.
+TAB: [slide 8] So it just chains more easily…and we can get similar benefits using pipeline, it lets us remove the extra stacking here. You can just deal with each of the operations one by one, with each single `await` where it needs to be. And of course, you can slice these operations up however you want: if you really wanted the `fetch` to show up first, because that’s an important thing you want to dedicate some brain share [?] to right at the beginning, you can just pull that `await` off into its own pipeline chunk and deal with the `fetch` on its own.

TAB: [slide 9] Finally, a very common thing that happens all across JavaScript is dealing with static methods, dealing with constructors, anything that converts from one object to another involves heavy nesting. This code is taken from a realistic example. Somebody came to the WHATWG chat room with an even longer string of code that was doing this with some more steps in the middle and was asking about adding `Object.fromEntries` to the object prototype, because they were annoyed with how the nesting made it hard to read this expression and preferred a method chaining that lets them produce an object at the end. We explained why that couldn’t happen: there’d be too many problems with adding new things to `Object.prototype`. But of course, these problems are solved if we can just use pipeline to linearize them again: get the entries, `map` over them, turn them back into an object. Nice, linear code flow. You don’t have to go back to the beginning of the expression and wrap the whole thing.

@@ -487,14 +498,16 @@ WH: Okay, so it’s not like `this` where you can’t use it inside nested funct

TAB: Yeah, and I think that also then would address your second topic (`x = 3%4; a |> %== y; b |> x+%== y;`). Both of those—your other one also—appear to be about the attempt to parse `%==` as an operator.
RW: So value piped into a—just to be clear, this is not a modulo operator. It’s the remainder operator; let’s use the right words—value piped into a remainder operation that’s valid. I’m going over there because that’s where it is.
+
```js
a |> % % %
```
+
Sorry, is that valid? Sorry, I just wrote percent.

TAB: Yes. Yeah.

-RW: Okay. Nobody here in this committee has any issues with this? I think that’s wild. 
+RW: Okay. Nobody here in this committee has any issues with this? I think that’s wild.

JSC: I wouldn’t say that’s *good* code but it’s allowed.

@@ -552,25 +565,24 @@ AKI: Okay, we are officially at time and we do not have anything blocking consen

TAB: Thank everyone. Yulia, more than happy to discuss this with you offline. Thanks, excellent.

-
-
### Conclusion/Resolution
-* Consensus for Stage 2.
-* The champions will follow up offline with people who have concerns
+
+- Consensus for Stage 2.
+- The champions will follow up offline with people who have concerns

## Iterator Helpers
+
Presenter: Yulia Startsev (YSV)

- [proposal](https://github.com/tc39/proposal-iterator-helpers)

-
YSV: [showing proposal explainer] Hello, everyone. Welcome to iterator helpers, which we haven’t heard from in about a year. What we’ve done is we’ve updated the README to cover all of the new methods. So if you’re unfamiliar with the methods, you should now be able to find everything with examples in the readme.

YSV: In addition, I will be addressing the last comments that have been brought up in the issues. One of them is to drop “index” characters and replace it with entries, which sounds like a totally fine renaming, and that looks okay. If there are no concerns against this, then I will go ahead and do that after this meeting. If there are concerns, please let me know and I will discuss it with you.

YSV: There have been a couple of other discussions, for example, `slice` instead of `take` or `drop`. I am thinking rather not to go in this direction.
So not replacing `take` and `drop` with `slice`, largely because we have the problem of accepting negative values into `slice`, and there’s no good way of dealing with those right now. We can always introduce that later, right?

-YSV: Okay, and [now for the main issue](https://github.com/tc39/proposal-iterator-helpers/issues/122), which I want to raise to the committee to get feedback on. Right now we pass the protocol to all of these new methods, and I would like to point out that these methods are things like `map`, `filter`, `reduce`, `take`, etc. And for a number of these, the protocol, which allows you to `.return`, allows you to `.throw`, etc. doesn’t actually make sense, because we are not expecting to have communicating generators in these contexts. And there’s a long discussion here about the rationale or purpose of passing the protocol that we had with conartist6 [Conrad Buck]…where gradually, Jason and myself, we became convinced that it may be better to drop passing the protocol and make this proposal more restricted…so that you cannot have the full power of iterators—sorry, of generators—when you are applying iterator helpers on them. So this issue is one that Jason is preparing a PR for, but I would like to call for feedback from the committee about this because this will be a substantial change and will require us changing our implementation. However, we do think that this is the right way to go. So I wanted to raise that to everybody, and that’s it. That’s it. That’s the update. Any questions; any comments?

+YSV: Okay, and [now for the main issue](https://github.com/tc39/proposal-iterator-helpers/issues/122), which I want to raise to the committee to get feedback on. Right now we pass the protocol to all of these new methods, and I would like to point out that these methods are things like `map`, `filter`, `reduce`, `take`, etc. And for a number of these, the protocol, which allows you to `.return`, allows you to `.throw`, etc.
doesn’t actually make sense, because we are not expecting to have communicating generators in these contexts. And there’s a long discussion here about the rationale or purpose of passing the protocol that we had with conartist6 [Conrad Buck]…where gradually, Jason and myself, we became convinced that it may be better to drop passing the protocol and make this proposal more restricted…so that you cannot have the full power of iterators—sorry, of generators—when you are applying iterator helpers on them. So this issue is one that Jason is preparing a PR for, but I would like to call for feedback from the committee about this because this will be a substantial change and will require us changing our implementation. However, we do think that this is the right way to go. So I wanted to raise that to everybody, and that’s it. That’s it. That’s the update. Any questions; any comments? SYG: [from queue] I hate generator `.return`—so sounds good to me. @@ -578,11 +590,11 @@ YSV: Perfect. Okay, that’s really good feedback to have, because we think it SYG: I don’t want to give consensus for dropping because I haven’t seen actual details if that’s what you’re asking. -YSV: You can feel free to review the proposal. This is something that we can take our time on. I just want to make sure people are aware that this is something we’re considering and working on a pull request for. +YSV: You can feel free to review the proposal. This is something that we can take our time on. I just want to make sure people are aware that this is something we’re considering and working on a pull request for. -SYG: Okay, great, but I want to confirm this part: By removing the extra expressivity of the protocol, you’re done with your exploration here. Is that? Making sure not excluding use cases that you had in mind. +SYG: Okay, great, but I want to confirm this part: By removing the extra expressivity of the protocol, you’re done with your exploration here. Is that? 
Making sure not excluding use cases that you had in mind. -YSV: So, at the moment, we don’t have any use cases in mind that would need this protocol. That’s why we’re suggesting we remove it—and then later on either reintroduce it for specific cases where they’re clearly needed—or to introduce a new set of methods that allow communicating generators to talk to each other in this way. But we would need to defend this with a use case rather than applying it by default. +YSV: So, at the moment, we don’t have any use cases in mind that would need this protocol. That’s why we’re suggesting we remove it—and then later on either reintroduce it for specific cases where they’re clearly needed—or to introduce a new set of methods that allow communicating generators to talk to each other in this way. But we would need to defend this with a use case rather than applying it by default. SYG: A definite +1 for me. Thanks. @@ -594,10 +606,12 @@ JHX: This affects the double ended generator proposal. YSV: We can discuss it; this definitely something that we can add on later, but removing this functionality would be more difficult than creating it from the get-go. - ### Conclusion/Resolution + Update given + ## Temporal + Presenter: Ujjwal Sharma (USA), Philip Chimento (PFC) - [proposal](https://github.com/tc39/proposal-temporal) @@ -613,15 +627,15 @@ USA: [slide 3] To give the progress report on that, I have been working closely USA: [slide 4] It stands for Serializing Extended Data About Time and Events. That’s SEDATE. We ended up having a productive discussion within this working group in IETF 111, which was a couple of months ago. And my draft, which was sort of a personal draft, my individual draft, so far has now been adopted. So now instead of it being “draft-ryzokuken-datetime-extended”, it’s now “draft-ietf-sedate-datetime-extended”. 
It’s sort of a major step forward, because this means that it’s now not a personal document; it’s formally adopted by the working group, and the working group has made a commitment to finishing and publishing it shortly within the working group. We have a schedule where we plan to make progress on this end and submit it for publication with the ISO [?] within this year. So that’s the timeline that we’re thinking of; we’ve been building consensus and setting up liaison agreements with the ISO team. These things can take time because standards bodies, but the idea is that, hopefully, we’ll set up some sort of agreement where, you know, people involved in the whole process including people in IETF and TC39 can get access to documents like ISO 8601—which I mean, if you don’t have theirs, it’s a bit difficult. I feel we [?] should be able to review this particular facet of the Temporal proposal. So I think that would be helpful for review if you want to.

-USA: There are two changes that are requested to the syntax of the serialization as presented today. So we had previously this, the syntax where you could include, you know, the Z with the offset. And in the bracket you could have the bracketed form of [?] a time zone, and PFC is going to mention shortly the changes to this. There’s also the removal of sub-minute timezone offsets. They have not yet been integrated into this change to the format. Not yet been incorporated into the proposal. So it will be presented in October. Just to give a little context on this one. The idea is that the standard in IETF RFC [?], TC39 which [?] does not include support for sub-minute time offsets. So any explicit time offset can only have up to minutes. They cannot have fractional minutes, which is seconds in force. Temporal did include support for that, and we decided that the use cases are just not there to warrant us having long discussions about this within another working group, which we are not as familiar with.
So, we’re dropping this one discussion. +USA: There are two changes that are requested to the syntax of the serialization as presented today. So we had previously this, the syntax where you could include, you know, the Z with the offset. And in the bracket you could have the bracketed form of [?] a time zone, and PFC is going to mention shortly the changes to this. There’s also the removal of sub-minute timezone offsets. They have not yet been integrated into this change to the format. Not yet been incorporated into the proposal. So it will be presented in October. Just to give a little context on this one. The idea is that the standard in IETF RFC [?], TC39 which [?] does not include support for sub-minute time offsets. So any explicit time offset can only have up to minutes. They cannot have fractional minutes, which is seconds in force. Temporal did include support for that, and we decided that the use cases are just not there to warrant us having long discussions about this within another working group, which we are not as familiar with. So, we’re dropping this one discussion. -USA: There might be a minor change in syntax about the calendar key. So we’ve already kicked off this discussion. SFC from the committee is also involved in this entire discussion—just to give a quick rundown. The question is if it is all about if the key for calendars would be renamed from `u-ca`, which is what we have right now incorporated in Temporal, to just `ca`. So, apart from that there’s no changes that I see on the horizon. There is agreement within the working group that, you know: these are sort of the concerns that people have, and this is why I’m very optimistic. I think this is something that we can quickly overcome to move forward, hopefully very soon, and everything is on schedule. 
So given my estimations after talking to different implementers, I think implementers again can be satisfied and be assured that this is going forward, it’s moving forward at the expected pace, and that it will be in an acceptable position moving forward.

+USA: There might be a minor change in syntax about the calendar key. So we’ve already kicked off this discussion. SFC from the committee is also involved in this entire discussion—just to give a quick rundown. The question is all about whether the key for calendars would be renamed from `u-ca`, which is what we have right now incorporated in Temporal, to just `ca`. So, apart from that there’s no changes that I see on the horizon. There is agreement within the working group that, you know: these are sort of the concerns that people have, and this is why I’m very optimistic. I think this is something that we can quickly overcome to move forward, hopefully very soon, and everything is on schedule. So given my estimations after talking to different implementers, I think implementers again can be satisfied and be assured that this is going forward, it’s moving forward at the expected pace, and that it will be in an acceptable position moving forward.

-PFC: [slide 5] This part of the presentation is going to be about the thirty or so normative pull requests that we have for the Temporal proposal. The reason we have so many it’s because it happily has been started to be implemented by engines. In such a large proposal, there are undoubtedly a number of bugs lurking, and implementers have found a bunch of them. So thanks especially to FYT who’s been finding these for V8, to RKG and Yusuke [Suzuki], who have been finding these while implementing it in JavaScriptCore, and Andre [Bargull] who’s been finding these while implementing it for SpiderMonkey.
I mentioned in the beginning I'd divide these into 'adjustments', which are actual semantics changes (or otherwise functionality changes that we’re making because implementers recommended them)—and 'bugs', which are just mistakes in the spec text that or things that we overlooked that could not be resolved without a normative change. Since we are short on time, I will go quickly through the adjustments, but I will try to go even more quickly through the bugs. I’m hoping that everyone who had a potential discussion about one of these was able to take a look at the pull request beforehand. I put the slides on ten days ago and had all the pull requests linked from here. Hopefully we’re not going to have to spend a lot of time explaining what each pull request is for.

+PFC: [slide 5] This part of the presentation is going to be about the thirty or so normative pull requests that we have for the Temporal proposal. The reason we have so many is because, happily, it has started to be implemented by engines. In such a large proposal, there are undoubtedly a number of bugs lurking, and implementers have found a bunch of them. So thanks especially to FYT who’s been finding these for V8, to RKG and Yusuke [Suzuki], who have been finding these while implementing it in JavaScriptCore, and Andre [Bargull] who’s been finding these while implementing it for SpiderMonkey. I mentioned in the beginning I'd divide these into 'adjustments', which are actual semantics changes (or otherwise functionality changes that we’re making because implementers recommended them)—and 'bugs', which are just mistakes in the spec text, or things that we overlooked, that could not be resolved without a normative change. Since we are short on time, I will go quickly through the adjustments, but I will try to go even more quickly through the bugs. I’m hoping that everyone who had a potential discussion about one of these was able to take a look at the pull request beforehand.
I put the slides up ten days ago and had all the pull requests linked from here. Hopefully we’re not going to have to spend a lot of time explaining what each pull request is for.

-PFC: [slide 6] First one is a change to guard against garbage sent to the `Temporal.Calendar.prototype.fields()` method. It expects an iterable as an argument. And, previously, it was possible to make it go into an infinite loop by sending an infinite iterable. Now we are making this change to accept only certain values and limit them to these ten [slide 7] so that infinite `while` doesn’t cause an infinite loop. Here’s an example of what that looks like.
+PFC: [slide 6] First one is a change to guard against garbage sent to the `Temporal.Calendar.prototype.fields()` method. It expects an iterable as an argument. And, previously, it was possible to make it go into an infinite loop by sending an infinite iterable. Now we are making this change to accept only certain values, limited to these ten [slide 7], so that an infinite iterable doesn’t cause an infinite loop. Here’s an example of what that looks like.

-PFC: [slide 8] The next adjustment is to make adding a Duration to a PlainDate work the same as using the appropriate method on the Temporal.Calendar. There was a discrepancy with this, and we want to make them consistent. [slide 9] On this slide, there’s also an example of how that works. And what would change? Previously 24 hours was balanced to one day when adding it to a PlainDate and not when using the `dateAdd()` method on the Calendar. These are changed to be consistent now.
+PFC: [slide 8] The next adjustment is to make adding a Duration to a PlainDate work the same as using the appropriate method on the Temporal.Calendar. There was a discrepancy with this, and we want to make them consistent. [slide 9] On this slide, there’s also an example of how that works. And what would change?
Previously, 24 hours was balanced to one day when adding it to a PlainDate, but not when using the `dateAdd()` method on the Calendar. These are changed to be consistent now.

PFC: [slide 10] Next adjustment is changing the order of observable operations in the `Temporal.PlainMonthDay.prototype.toPlainDate()` method. The order of operations changed in order to be consistent with PlainYearMonth. Here again is a code sample of what changed.

@@ -633,7 +647,7 @@ PFC: [slide 34] I ran through all of those, let’s have the discussion now. The

JHD: Yeah, all the changes seem good to me. But given that there’s been so many of them continually, it seems like it might be nice if the implementers in the room continued their sort-of-tacit agreement to ship behind a flag until we’ve had some period of time in which none of these changes are discovered. I don’t know how long that should be. Maybe just between meetings. But given that there’s been so many, I’m growing concerned about somebody shipping it, and then we discover another thirty of these things, and then because of web compatibility we wouldn’t be able to fix them.

-USA: JHD you when you say another thirty of these, you mean bugs and not adjustments.
+USA: JHD, when you say another thirty of these, you mean bugs and not adjustments?

JHD: I’m primarily concerned about bugs: things that get implemented but don’t match the intention, where web compatibility may then prevent us from fixing them to match the intention. There’s always, of course, the smaller thing about, like, actual design-type changes, which still may occur. I’m just saying that this proposal has had a lot of spec turbulence since reaching Stage 3, so it seems like a good bet that there is going to be more, and it seems like it would be useful to sort of buy us all some time to wait for a quiet period, before locking in whatever semantics everyone’s implemented, if that makes sense.
@@ -647,14 +661,18 @@ SFC: To reiterate what PFC and USA just said, I’ve been where I’ve been sort

CJT: More of the same, to echo those points: by the time an implementation is ready to consider shipping, because it is complete, it would hopefully have uncovered the last of these bugs, which would have had to go through another plenary. And so, I don’t think there’s any risk that they would uncover more of these bugs after that. We’ll have to prepare PRs to get consensus, and it wouldn’t be shipped unflagged before that. So, I think, JHD, there’s some natural waiting. Anyway, I don’t think unflagging would happen before a plenary in which the final ones were approved, and we could bring it up then. I don’t think there’s really any need to discuss it at this stage.

-SYG: I just wouldn’t worry about it.
+SYG: I just wouldn’t worry about it.

PFC: Okay. Thanks. So should I call for consensus on these normative changes to the proposal?

RPR: Are there any objections to the normative changes that came in? I haven’t heard anything. No objection. You have consensus.

+
### Conclusion/Resolution
+
No objection to changes
+
## RegExp set notation + properties of strings
+
Presenter: Mathias Bynens (MB)

- [proposal](https://github.com/tc39/proposal-regexp-set-notation)

MB: This is just an update on current open issues and some recently resolved issues, and we’re just going to walk through them. So I’m not gonna spend too much time reiterating things that haven’t changed. [slide 2] What was the main proposal about? It’s about RegExp set notation, operations, and [?] impacts on semantics [?] for those. [slide 3] We decided to do this behind a new flag, which we’re gonna call `v`: it would be the new `u` in a way. And it enables this new syntax for difference (subtraction), intersection, and nested character classes, and we can also enable the use of properties of strings.
So using the familiar `\p{}` syntax, you would now also be able to use properties of strings as opposed to just the character classes, which goes very well together with the set operations. So, none of this means we choose the [unable to transcribe].

-MB: [slide 4] So yeah, let’s talk a little bit about the expanded scope and some recent changes. Markus: Do you want to go over this summary before we dive into each of these? 
+MB: [slide 4] So yeah, let’s talk a little bit about the expanded scope and some recent changes. Markus, do you want to go over this summary before we dive into each of these?

MWS: Sure. Yeah, so when we merged the two proposals into one, the question that came up was: should we do more? And where would it end? So we actually had a comparison done between the regular expression features in ECMAScript (as well as what they would be after this proposal so far) and the Unicode regular expression standard and the things that it recommends or requires. Thanks to Mark Davis and Richard Gibson, we have a nice spreadsheet with a point-by-point comparison.

@@ -670,7 +688,7 @@ MWS: We identified a few things and we think it makes sense, if the committee ag

MWS: What we are suggesting, and what we are asking for a thumbs up / thumbs down for here, are things where ECMAScript is still behind Unicode regular expressions, and where fixing that gap requires a new flag because it’s incompatible. And, since we are talking about a new flag here already, for the set notation and the properties of strings, now would be a really good time to deal with those gaps that require a new flag. And so, we are suggesting to expand the scope so that the total would be the set notation plus the strings, as well as aligning `\s`, `\d`, `\w`, and `\b` with Unicode and fixing a couple of line boundary things.

-MWS: There is one thing that also falls in this category, but we think that goes probably a little bit too far for now.
So we are not suggesting to actually add the full default Unicode case-folding matching into the proposal at this time. So that, if that wanted to be implemented later, that would require a new flag. +MWS: There is one thing that also falls in this category, but we think that goes probably a little bit too far for now. So we are not suggesting to actually add the full default Unicode case-folding matching into the proposal at this time. So that, if that wanted to be implemented later, that would require a new flag. MWS: [slide 5] okay, so `\s` looks like it wants to be the same as whitespace. Each property has 25 characters in it, but they each differ by one. And so 24 of the 25 characters are the same but `\s`, I think for historical reasons, contains what’s known as the Byte Order Mark (formerly the Zero Width No Break Space, which is not a space character at all). It’s the former (BOM) and its purpose really is mostly just byte-order mark, since its other original use was taken over by some other characters some twenty years ago. `\s` is missing a clear white-space control, which is the C1 control Next Line, that is of course a white space in Unicode and `\p{White_Space}` has it. So, there is this odd difference between these two properties, that should really be the same, and Unicode Regular Expressions recommend them to be the same. So we propose that, under the new flag, they are the same and so `\s` would be the same as `White_Space`. @@ -678,7 +696,7 @@ MB: [slide 6] Then, this is something that MB has asked relatively early in that MB: [slide 7] And then we have one more. Line boundaries. The Unicode Regular Expression standard suggests that there shouldn’t be a line boundary within CRLF. It’s one line boundary. But also there is this Next Line character and there should be a line boundary after that, just like after a Line Feed. If it’s accepted, this affects some operators that deal with long line boundaries. 
I think that’s the last one that we’re suggesting to add.

-MB: [slide 8] Yeah, that’s right. We do have some slides for the other currently open issues, and we’re very open to hearing everyone’s feedback on that. If there’s no time in this meeting, then the link to the issue is always at the bottom of slides. We also host a weekly meeting about this proposal every Thursday. It’s on the TC39 calendar. And if people have opinions or are interested, you know, please join that meeting and speak up because it really helps us get everyone’s input. 
+MB: [slide 8] Yeah, that’s right. We do have some slides for the other currently open issues, and we’re very open to hearing everyone’s feedback on that. If there’s no time in this meeting, then the link to the issue is always at the bottom of the slides. We also host a weekly meeting about this proposal every Thursday. It’s on the TC39 calendar. And if people have opinions or are interested, you know, please join that meeting and speak up, because it really helps us get everyone’s input.

MWS: Yeah, so if we have a little more time, I would like to see if we could get a thumbs up / thumbs down on that expansion of scope that we presented. I would also like to get a thumbs up / thumbs down or at least [?] people on the open issues, which are the next three or four slides.

@@ -692,7 +710,7 @@ MB: We currently believe that, actually, option three (`uv` invalid) is the simp

MWS: [slide 11] Sure, so we’ve looked at an open question in our proposed draft spec changes, on whether to do anything about IgnoreCase when we do complementing or building up a character class from nested classes and properties. And this is particularly interesting, because ECMAScript IgnoreCase matching has this strange feature of taking a character class that has the complement marker, the circumflex, and not actually computing the complement and then doing a case-insensitive match.
It’s doing the case-insensitive match first on the uncomplemented set, then negating the output based on the presence of the circumflex, which is somewhat strange behavior. Apparently, that’s the behavior that experienced regex people expect, but it’s strangely different. If you have the double negation of the complement of a property and a complement from the circumflex on the right side, compared with just the property on its own, which logically should behave the same way, they behave very differently in current regular expressions.

-MWS: So under the `u` flag or no flag at all, these two expressions that you would expect to be the same are very different. And that inspired us to come up with a solution that is in some ways also implemented in the ICU expression engine: to do a deep early-case closure, very early on, from when we build up the set—and computing the simple complement on the spot—for something like the example here, where we have the character class. For circumflex, we get the same result as before, but by doing it consistently for character classes and properties, we can make the `\p{Ll}` [lowercase] behave the same actually consistently, and then have a good consistent story throughout on what happens with nested classes. So we think that’s the right solution going forward. It does mean that behavior changes with the expression on the left side. But we think it’s a very good thing that it then finally behaves like the expression on the right side.
+MWS: So under the `u` flag or no flag at all, these two expressions that you would expect to be the same are very different. And that inspired us to come up with a solution that is in some ways also implemented in the ICU regular expression engine: to do a deep early case closure, very early on, when we build up the set—and computing the simple complement on the spot—for something like the example here, where we have the character class.
For circumflex, we get the same result as before, but by doing it consistently for character classes and properties, we can make `\p{Ll}` [lowercase] actually behave consistently, and then have a good consistent story throughout on what happens with nested classes. So we think that’s the right solution going forward. It does mean that behavior changes with the expression on the left side. But we think it’s a very good thing that it then finally behaves like the expression on the right side.

SFC: [slide 12] Yeah, sure. So this is another issue that was raised regarding the experience of practitioners, in RGN’s terminology, regarding the behavior of escape sequences, and how escaping rules are different in different areas of the regular expression, as seen in the top line here. In particular, `a*` means the same thing outside and inside parentheses, but `a*` outside square brackets means something different from `a*` inside square brackets. There are several ways to address this with different rules for escaping. Since the main premise of this whole issue is that the current rules could cause unexpected behavior by practitioners writing regular expressions, I have proposed to do further research, and I’ve put this on the agenda for the TC39 research call next week on September 9th. So if you’re interested in this subject, please join that meeting; I’m hoping that we can put together some sort of survey, so we can get some actual data on this.

@@ -700,7 +718,7 @@ MB: [slide 13] Right [unable to transcribe]. And then there is one more issue th

MB: And yeah, other than that, we have some settled issues that we’ve already covered before. So I’m not gonna go over those slides unless discussion happens after this.
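The escaping inconsistency SFC describes above can be observed in today's engines with no proposal features at all; a minimal illustration in plain JavaScript:

```javascript
// Outside a character class, `*` is a quantifier; inside one, it is a literal.
console.log(/^a*$/.test('aaa'));  // true: zero or more 'a' characters
console.log(/^[a*]$/.test('*'));  // true: '*' is a literal inside brackets
console.log(/^[a*]$/.test('aa')); // false: the class matches one character
```

This context-dependent meaning is exactly what the proposed practitioner survey aims to measure.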
[slideshow paused at slide 14] -WH: I’m deeply scared about the changes you’ve made to this proposal since the last meeting, especially the changes to how negation works and the changes to the semantics of fundamental things like `\d`. I like the regularization of character classes, but I want to be able to use it without breaking `\d` or breaking how negation works and it sounds like you’re not going to give me a choice. The problem with things like `\d` is that they’re often used for machine parsing and such. Silently changing `\d` to allow other unicode decimals will introduce a lot of bugs. My preferred solution for that is, if for whatever reason you want something which matches all unicode decimal digits, use something other than `\d`—introduce new syntax, a new letter, or something like that. And similarly I don’t want subtle changes to how negation works, which break some really simple existing regular expressions. The proposal for complement is trying to alter the behavior of complicated regular expressions to work the way you want, but that breaks simple regular expressions. I gave some examples at past meetings. +WH: I’m deeply scared about the changes you’ve made to this proposal since the last meeting, especially the changes to how negation works and the changes to the semantics of fundamental things like `\d`. I like the regularization of character classes, but I want to be able to use it without breaking `\d` or breaking how negation works and it sounds like you’re not going to give me a choice. The problem with things like `\d` is that they’re often used for machine parsing and such. Silently changing `\d` to allow other unicode decimals will introduce a lot of bugs. My preferred solution for that is, if for whatever reason you want something which matches all unicode decimal digits, use something other than `\d`—introduce new syntax, a new letter, or something like that. 
And similarly, I don’t want subtle changes to how negation works, which break some really simple existing regular expressions. The proposal for complement is trying to alter the behavior of complicated regular expressions to work the way you want, but that breaks simple regular expressions. I gave some examples at past meetings.

MB: I don’t know what things we are actually breaking.

@@ -708,25 +726,25 @@ WH: I don’t want to dwell on this because we don’t have a lot of time. But t

MB: I don’t think we’re actually changing behavior of character classes with an initial circumflex.

-WH: Anyway, I am certainly not in favor of merging the regularization of a square bracket syntax proposal, which I think is a very good one, with things which alter existing functionality like negation or `\d` in obscure ways. 
+WH: Anyway, I am certainly not in favor of merging the regularization of square bracket syntax, which I think is a very good proposal, with things which alter existing functionality like negation or `\d` in obscure ways.

KG: Yeah, I don’t want to use as strong a term as “break”, but I agree with WH that changing the semantics of other stuff is kind of scary. Changing `\d` amounts to making a whole new mode, instead of just changing semantics for some edge cases that you weren’t going to use, like `&&` or whatever. I see where you’re coming from with wanting this, but I share WH’s concern about changing the semantics of a bunch of stuff.

JRL: Also voicing support, I would not change these shorthands.

-BFS: So, I’m in the opposite boat. I think changing shorthands is actually okay because we have an opt-in flag. But if these are considered problematic, particularly if people are copy-pasting regular expressions across different places…We were talking about how there’s a Unicode recommendation on how regular expressions work if our regular expressions don’t work the same as other places that are using the shorthand.
One route we could do to resolve this—and I slightly prefer it—is if we just don’t support the problematic short hands. `\d` would be really ugly though, I think, if we don’t have that because I don’t know how realistic it would be for people to actually Implement that themselves. But I mean, if `\d` has different meanings, and that’s the problem with the Unicode recommendation and what JavaScript does—[we] could just not allow `/d` in this mode? Because it seems like there’s a conflict there.

+BFS: So, I’m in the opposite boat. I think changing shorthands is actually okay, because we have an opt-in flag. But if these are considered problematic, particularly if people are copy-pasting regular expressions across different places… We were talking about how there’s a Unicode recommendation on how regular expressions work, and our regular expressions don’t work the same as other places that are using the shorthand. One route we could take to resolve this—and I slightly prefer it—is to just not support the problematic shorthands. Dropping `\d` would be really ugly though, I think, because I don’t know how realistic it would be for people to actually implement that themselves. But I mean, if `\d` has different meanings, and that’s the conflict between the Unicode recommendation and what JavaScript does, [we] could just not allow `\d` in this mode? Because it seems like there’s a conflict there.

-WH: It seems like we would be breaking a lot more people by dropping `/d` instead of adding some other new syntax for those who do want to match Unicode decimal digits. 
+WH: It seems like we would be breaking a lot more people by dropping `\d` instead of adding some other new syntax for those who do want to match Unicode decimal digits.

BFS: I don’t have strong opinions. I don’t actually think this is breakage, due to it requiring an opt-in flag, and then allowing linting seems fine to me personally… That doesn’t seem to be the consensus amongst everybody, though.
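For reference in the `\d` debate above, current JavaScript semantics (with or without the `u` flag) are ASCII-only, while the full Unicode digit set already has a property-escape spelling; a quick sketch of today's behavior:

```javascript
// Today, \d matches only the ASCII digits 0-9, even in `u` mode:
console.log(/^\d$/u.test('7'));          // true
console.log(/^\d$/u.test('\u0663'));     // false: U+0663 ARABIC-INDIC DIGIT THREE
// All Unicode decimal digits are already reachable via a property escape:
console.log(/^\p{Nd}$/u.test('\u0663')); // true
```

The open question is whether the proposed `v` mode should change the first behavior, or leave `\d` alone and rely on spellings like `\p{Nd}`.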
-WH: Those are not the only choices. We can leave `\d` alone and introduce a new thing which matches Unicode digits. +WH: Those are not the only choices. We can leave `\d` alone and introduce a new thing which matches Unicode digits. BFS: This kind of bleeds into the previous one as well. So, I do think the proposed shorthands, whatever their functionality is, do simplify some common workflows. So I do like those workflows being supported in an easier way. It seems like there’s disagreement on it, but I think if we make a stance about what’s considered “breaking”… When you generally copy/paste across different modes of regex, they’re not expected to work the same. If we could just get some clarity on if shorthands can never change across the modes, that would be helpful for this proposal. That’s it. RBN: I don’t know if we’re going to actually have time during this meeting to get to the proposal that I put together around regular-expression feature parity, given the time constraints—but one of the things that I planned on presenting and proposing was inline flag modifiers that would allow you to exit a certain mode. So if we did want to go forward with `/d` in this mode meaning all digits, and you wanted to switch out of that mode and into regular `u` Unicode mode, then you could use inline modifiers to switch your mode settings if necessary. -MED: There are a couple of possibilities that the problem currently is that people expect `\w` to work with words, but it only works with ASCII words, and so people, you know…It’s fine if all you ever use is ASCII and that works just fine, but when you start to use Cyrillic [?] or whatever, because that’s the target that you’re working with, then you get bad breakages with when you leave these things as they were. So the suggestion is to modify them when you have this flag on so that they work properly. 
Now an alternative would be to have modifiers on each one of those so that I could have a `\d{u}` or something, `\w` and, you know, `\b{u}`, and so on. And that would at least provide the functionality; people would still have to learn that they have to use that syntax for it to work, right, but you really do need this syntax if you’re going to work with regular Expressions, if you’re not working with English.

+MED: There are a couple of possibilities. The problem currently is that people expect `\w` to work with words, but it only works with ASCII words. It’s fine if all you ever use is ASCII, and that works just fine, but when you start to use Cyrillic [?] or whatever, because that’s the target that you’re working with, then you get bad breakages when you leave these things as they were. So the suggestion is to modify them when you have this flag on so that they work properly. Now an alternative would be to have modifiers on each one of those, so that I could have a `\d{u}` or something, `\w` and, you know, `\b{u}`, and so on. And that would at least provide the functionality; people would still have to learn that they have to use that syntax for it to work, but you really do need this functionality if you’re going to work with regular expressions and you’re not working with English.

MB: Can I quickly respond to that as well? So people have been using the example of `\d`, but that’s really the simplest one out of these three here, because there is already a fairly straightforward workaround with `\p{gc=Decimal_Number}`. `\w` is actually a better example, because if you have to roll that by yourself, even with the current support for property escapes, it’s still a bunch of different Unicode properties to combine: `\p{Alpha}\p{gc=Mark}\p{digit}\p{gc=Connector_Punctuation}\p{Join_Control}`—it’s much less obvious, much less ergonomic.
Perhaps we could add a Unicode-level property called something like `Word` that combines all these into a single property, so that people can do `\p{Word}` if they want the Unicode-aware `\w`. But then still we’d need a solution for `\b` that aligns with that. @@ -734,7 +752,7 @@ MED: Well, I think the problem with that is that we agree about `\d` is that the RPR: 14 minutes left on this topic and on this agenda item. -SYG: It sounds like maybe WH’s next topic will answer what I asked. I want to add some more detail on what WH thought as breakages…Since it is a separate mode as BFS has said, is it the same copy/paste concern that BFS raised? +SYG: It sounds like maybe WH’s next topic will answer what I asked. I want to add some more detail on what WH thought as breakages…Since it is a separate mode as BFS has said, is it the same copy/paste concern that BFS raised? WH: I think the presenters are assuming that there will no longer be any need or use for ASCII `\d`, always replacing it with the Unicode one. Having fixed dozens if not hundreds of bugs in other languages caused by this change, I think this is false. The issue I have is I want to be able to use the new mode for the nice new character-class syntax in general. But for doing things like parsing and validating inputs it is essential that `\d` still work using ASCII digits. The argument that this is a new mode doesn’t solve the problem of still needing the functionality of `\d` matching ASCII digits while being able to use the new syntax for set unions and intersections. @@ -754,25 +772,25 @@ MED: What about if we had alternatives for the `\d` curly? `{u}` and so on, with WH: That seems fine. -MED: Maybe we can talk about the other issue that WH raised, which was IgnoreCase. Or something would be better off taken offline. +MED: Maybe we can talk about the other issue that WH raised, which was IgnoreCase. Or something would be better off taken offline. 
MWS: I think we probably need a meeting with WH and whoever else is interested, sort of separately.

-WH: The case that breaks is `/[^x]/i`. 
+WH: The case that breaks is `/[^x]/i`.

MWS: I don’t think what we are suggesting changes that behavior.

WH: Yes, it does.

-MED: Yeah, I think there’s a misunderstanding here, because that’s not our—that’s not what’s part of this proposal. And perhaps the wording we’re using is not making that clear. 
+MED: Yeah, I think there’s a misunderstanding here, because that’s not part of this proposal. And perhaps the wording we’re using is not making that clear.

WH: I went through the proposed semantics and it breaks that one.

MB: I would like to remind people that we do have a weekly meeting for this every Thursday. It’s on the TC39 calendar. It’s open to everyone. So feel free to join; we’re happy to have these kinds of discussions with anyone who’s interested, also on the GitHub issue tracker. We’d appreciate your comments and input there.

-KG: Can you repeat what the change to `\s` is? 
+KG: Can you repeat what the change to `\s` is?

-MWS: Yes, `\s` and `\p{White_Space}` differ, but they are the same in 24 characters. They differ in one each. And we are proposing to make `/s` be the same as `/p{White_Space}`. And that means `\p` would lose the Byte Order Mark, and it would gain the Next Line control. 
+MWS: Yes, `\s` and `\p{White_Space}` differ, but they are the same in 24 characters. They differ in one each. And we are proposing to make `\s` be the same as `\p{White_Space}`. And that means `\s` would lose the Byte Order Mark, and it would gain the Next Line control.

MB: …And all of this, only in the new `v` mode.

@@ -782,13 +800,14 @@ BFS: I’m getting really uncomfortable with calling modes affecting all [?] sil

WH: If you had a mode whose main effect was changing only the behavior of `\d`, `\s`, etc., that would not be a silent change.
But here you get those things riding along as a side effect of a much larger syntax change to a completely different part of regular expressions. The rationale for calling this a silent change is that the `v` mode reforms the syntax of how you do character classes. Having it also introduce other little and obscure side effects, such as how you do line breaks or what’s white space, is an unexpected change. -BFS: I appreciate the attempt to comfort me, but I think it has had the opposite effect. +BFS: I appreciate the attempt to comfort me, but I think it has had the opposite effect. ### Conclusion/Resolution -* Status Update – comments received +- Status Update – comments received ## String is USV String + Presenter: Guy Bedford (GB) - [proposal](https://github.com/guybedford/proposal-is-usv-string) @@ -808,7 +827,7 @@ DE: I’d like to hear if people need to check in their application code to dist GB: Just posted one instance in the chat of a [unable to transcribe] issue, dealing with exactly this topic. So why isn’t [unable to transcribe]? When you’re dealing with interfacing between WebAssembly and JS, they obviously need to do this check. -BFS: So I come across this every so often. I’m interacting with various things where I want to serialize Strings. Actually, in the nightmare world of JSON, you can actually have lone surrogates as well. Their [unable to transcribe] is basically in the same situation as JavaScript. You can slice strings’ lone surrogates [?] like that. I opened an issue against a heap dump in V8 because they were slicing things with lone surrogates in them. And that meant that I couldn’t parse them in various ways because I would get replacement characters. Various APIs and host environments, like TextEncoder, will automatically replace lone surrogates with replacement characters. So if you round trip through them, you can’t actually compare that something is the same, because it’s not the same. It’s been encoded in the round trip. 
So you actually need to kind of see if something is going to be lossy when you round trip it for these cases. So anytime you send something through UTF-8 and back, you really want to check if you’re going to have a lossy transform happen, and you need something like this to do it. So, UTF-8. There are plenty of things to do like writing to disk, commonly sending it over the network, commonly all of this—if you don’t have it and you get split on one of these lone surrogates, you get weird things happening. This happens all the time with streams. Streams are the worst for this because sometimes you get a network chunk in that split on a lone surrogate because of backpressure, and JavaScript has decided to use string.slice. For whatever reason, this shows up in Node.js. I don’t really have too many more off the top of my head. I could probably think of more. Yeah, that’s it. +BFS: So I come across this every so often. I’m interacting with various things where I want to serialize Strings. Actually, in the nightmare world of JSON, you can actually have lone surrogates as well. Their [unable to transcribe] is basically in the same situation as JavaScript. You can slice strings’ lone surrogates [?] like that. I opened an issue against a heap dump in V8 because they were slicing things with lone surrogates in them. And that meant that I couldn’t parse them in various ways because I would get replacement characters. Various APIs and host environments, like TextEncoder, will automatically replace lone surrogates with replacement characters. So if you round trip through them, you can’t actually compare that something is the same, because it’s not the same. It’s been encoded in the round trip. So you actually need to kind of see if something is going to be lossy when you round trip it for these cases. So anytime you send something through UTF-8 and back, you really want to check if you’re going to have a lossy transform happen, and you need something like this to do it. 
So, UTF-8. There are plenty of things to do like writing to disk, commonly sending it over the network, commonly all of this—if you don’t have it and you get split on one of these lone surrogates, you get weird things happening. This happens all the time with streams. Streams are the worst for this because sometimes you get a network chunk in that split on a lone surrogate because of backpressure, and JavaScript has decided to use string.slice. For whatever reason, this shows up in Node.js. I don’t really have too many more off the top of my head. I could probably think of more. Yeah, that’s it. DE: Yes, those use cases sound really relevant, and I’m glad to hear about [unable to transcribe] Stage 1 for this proposal. @@ -816,31 +835,34 @@ MB: I like this proposal and am supportive of Stage 1. I already posted on the i BFS: We should be really careful here. You don’t want to accidentally compare a string that already has a replacement character with something that doesn’t yet have the replacement character and treat them as equal. Yeah, that it is lossy if they are combined as well. -MF: I guess my original topic here was what do consumers do with this, but I guess what I’m understanding is that, whenever you test for well-formedness, it’s so that you can then ensure that you have a well-formed string. Is there a reason to do the test and not do a replacement? You’re going to use some replacement character or what else are people doing with it after the test? And I guess I have to test the API at all. +MF: I guess my original topic here was what do consumers do with this, but I guess what I’m understanding is that, whenever you test for well-formedness, it’s so that you can then ensure that you have a well-formed string. Is there a reason to do the test and not do a replacement? You’re going to use some replacement character or what else are people doing with it after the test? And I guess I have to test the API at all. -BFS: I can answer that someone directly. 
We could bikeshed the API a bit. So you don’t always want to introduce a replacement character because replacement characters actually can be lossy too. So say I had two different emojis. I’ve got example sites with this in the wild: emojis often end up with lone surrogates. If you split them down the middle, one could be the fire emoji and another could be a heart emoji. And if I split them just right, and then I use replacement characters, they are now equivalent. So we don’t want to always use a replacement character. Sometimes we want to trim off the end. That in my reality is the more common case for safety reasons, but if you are forced to round trip through Unicode, you are going to be wanting to do something with replacement characters. +BFS: I can answer that someone directly. We could bikeshed the API a bit. So you don’t always want to introduce a replacement character because replacement characters actually can be lossy too. So say I had two different emojis. I’ve got example sites with this in the wild: emojis often end up with lone surrogates. If you split them down the middle, one could be the fire emoji and another could be a heart emoji. And if I split them just right, and then I use replacement characters, they are now equivalent. So we don’t want to always use a replacement character. Sometimes we want to trim off the end. That in my reality is the more common case for safety reasons, but if you are forced to round trip through Unicode, you are going to be wanting to do something with replacement characters. MF: Is that all? That’s a transform that’s just like a function of the lone surrogate. That’s there though. Is there anything about that? BFS: So, the lone surrogate, the high surrogate, is often shared amongst these emojis. So there’s nothing to differentiate it. So it is generally just a transform of the high surrogate, and they can be equivalent. 
Even though the total emoji, if you were to get all the strings combined, would have a different low surrogate. -MF: I guess I’m not understanding. You slice the first half of those, get the [?] pair, and they have the same high surrogate. And you don’t want those to compare this thing. You don’t want those streams to compare the same. I’m not sure what that has to do with the replacement character. +MF: I guess I’m not understanding. You slice the first half of those, get the [?] pair, and they have the same high surrogate. And you don’t want those to compare this thing. You don’t want those streams to compare the same. I’m not sure what that has to do with the replacement character. BFS: Then those strings are the same high surrogate that point for that case. Yes, but if you have something that already has a replacement character generally, you don’t want to compare. It is with something with a high surrogate, at least in my experience, if you want to do some kind of more lengthy test. You could be waiting for more to see if there is a proper low surrogate. You could be trimming it. You could want to do the replacement character. The problem is there are multiple options depending on the workflow you’re currently doing. MF: Okay, I see. Yeah, this is sufficient justification for me from you for us to have both of those methods in the Stage 1. -GB: To try and dig a little bit deeper into the validation use case, for the most part, if you’re working with an API that expects valid [?] Unicode while [?] providing invalid Unicode, that’s something you want to normally cache during the development phase of the application. So being able to hide errors for that, that you can turn into a user-level [unable to transcribe]. Are you taking an unstructured, string input? Actually would be desirable in many cases, because if you don’t have a valid Unicode to all [?] with your program. Some of this is really well-defined [unable to transcribe]. 
+GB: To try and dig a little bit deeper into the validation use case, for the most part, if you’re working with an API that expects valid [?] Unicode while [?] providing invalid Unicode, that’s something you want to normally cache during the development phase of the application. So being able to hide errors for that, that you can turn into a user-level [unable to transcribe]. Are you taking an unstructured, string input? Actually would be desirable in many cases, because if you don’t have a valid Unicode to all [?] with your program. Some of this is really well-defined [unable to transcribe]. RPR: Thank you. So GB. You want to ask for Stage 1? -GB: I would like to ask for Stage 1. Yes. +GB: I would like to ask for Stage 1. Yes. -RPR: Any objections to Stage 1? Congratulations, you have Stage 1. He does have a point here about that [unable to transcribe] should be captured. So, please do clarify that with him offline. On the cues [?]. Excellent. Thank you very much. +RPR: Any objections to Stage 1? Congratulations, you have Stage 1. He does have a point here about that [unable to transcribe] should be captured. So, please do clarify that with him offline. On the cues [?]. Excellent. Thank you very much. ### Conclusion/Resolution -* Stage 1 achieved + +- Stage 1 achieved + ## Array.fromAsync + Presenter: J. S. Choi (JSC) - [proposal](https://github.com/js-choi/proposal-array-from-async) @@ -848,7 +870,7 @@ Presenter: J. S. Choi (JSC) JSC: [slide 1] Hi everyone. My name is J. S. Choi. Joshua S. Choi. I’m a physician. I’m with Indiana University. I am a physician of medicine, but I also work in biomedical informatics. So I do a lot of data analytics, application design, and that’s why I’m here. We might go shorter depending on how many questions everyone has. -JSC: [slide 2] I’m assuming everyone is probably familiar with `Array.from`, the static method on `Array`. It’s used a lot; people use it to turn iterable things into arrays. 
This proposal is for a companion method for async iterables to arrays: `Array.fromAsync`. +JSC: [slide 2] I’m assuming everyone is probably familiar with `Array.from`, the static method on `Array`. It’s used a lot; people use it to turn iterable things into arrays. This proposal is for a companion method for async iterables to arrays: `Array.fromAsync`. JSC: [slide 3] I run into this a lot. It’s not like it's very difficult to do: flattening an async iterable into an array. You can just use a `for await` loop, but I actually do it quite a bit and quite a few libraries do it, too—for debugging: If you want to see what an async iterable looks like, you of course can’t print it out to the console. You have to flatten it into an array. So there’s that. @@ -866,13 +888,13 @@ JSC: [slide 6] I’m going for Stage 1. That of course means that not every sing JSC: There is a question of naming. DD brought up that `async` after `from` matches existing patterns more than `from` then `async`. There’s a couple methods in `Atomic` and the Web GPU API that already match `.fooAsync`. -JSC: There’s also the question whether this is redundant with the iterator-helpers proposal, which I’m very excited about too. It already has specified a `toArray` method. However, I think that this arguably—If we’re going to choose one, I think that it should be in the `Array` class in order to parallel what already exists, `Array.from`. And apparently duplication of functionality between the `Array` class methods and iterator helpers is fine, since `toArray` also is redundant with `Array.from` synchronously too. I don’t think that should block anything. I think that they can coexist, and, if we have to choose one, we should choose what parallels what already exists, which is an `Array` static method. +JSC: There’s also the question whether this is redundant with the iterator-helpers proposal, which I’m very excited about too. It already has specified a `toArray` method. 
However, I think that this arguably—If we’re going to choose one, I think that it should be in the `Array` class in order to parallel what already exists, `Array.from`. And apparently duplication of functionality between the `Array` class methods and iterator helpers is fine, since `toArray` also is redundant with `Array.from` synchronously too. I don’t think that should block anything. I think that they can coexist, and, if we have to choose one, we should choose what parallels what already exists, which is an `Array` static method. -JSC: Seeing strong support for this from JHD: “Should have been a requirement.” +JSC: Seeing strong support for this from JHD: “Should have been a requirement.” JHD: Yeah, I mean, it’s largely in the topic. But yeah, I feel like it’s been a huge pain to not have. Like, I use `Array.from` all the time, though `for of` exists, and it’s been a huge pain to not have an equivalent version of it for async iterators. As soon as I saw this proposal, my reaction was, “Why did we not insist that this be part of `for await` in the first place?” So I think it was great—like the spec text is already written. I’d like it even to be Stage 2, though that’s not being asked for. -JSC: Yeah, so I could ask for Stage 2 if nobody objects to one. +JSC: Yeah, so I could ask for Stage 2 if nobody objects to one. YSV: I would want more time to look at before Stage 2, because it’s not just an exploration at that point. Whether it should be in the language. And I have a few questions around that because we do have the redundancy with the iterator `toArray` method. And something we want to ask is how do we want to approach that in addition to objects and sets. I think there’s still some open questions there. So I would want to take this a little slower and make sure that we understand exactly what we want to get out of this. So Stage 1 will be fine, but I’m not sure yet about Stage 2. @@ -880,7 +902,7 @@ JSC: No problem. I’ll ask for Stage 1 only. 
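JSC’s remark that “you can just use a `for await` loop” can be sketched concretely. Below, `arrayFromAsync` is a hypothetical stand-in for the proposed `Array.fromAsync`; it illustrates the rough shape only, not the spec algorithm (which also covers a mapping callback and sync iterables of promises):

```javascript
// Hypothetical stand-in for the proposed Array.fromAsync; illustrative only,
// not the proposal's actual algorithm.
async function arrayFromAsync(asyncIterable) {
  const result = [];
  // `for await` drives the async iterator to completion, awaiting each value.
  for await (const item of asyncIterable) {
    result.push(item);
  }
  return result;
}

// An async generator standing in for any async data source.
async function* numbers() {
  yield 1;
  yield 2;
  yield 3;
}

arrayFromAsync(numbers()).then((arr) => console.log(arr)); // logs [ 1, 2, 3 ]
```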
RBN: One of the concerns that I have…I generally support this feature, and I agree with JHD’s interest that we should have had some support like this, given some of the complexities that we have in there. But one thing that I worry about is that it would be very easy to start trying to create an array from a data source that might have to wait a long time for results. If you’re pulling from the web, or if it might possibly be infinite, it will just constantly allocate more memory in the background while other tasks are processing…and it does make me wonder if we need to bring back cancellation.

-RBN: We now have AbortController and AbortSignal: both in pretty much every major browser. And also in Node. But no way to manage it from within the ECMAScript language itself. And we did have a discussion a couple years back on the possibility of introducing a symbol-based protocol for cancellation that could be adopted as part of AbortController and AbortSignal. I don’t want to block—and I saw this in the chat: I don’t want to gate this, but I am wondering if it’s something we might want to consider adding in the future. Because you can promise-race from async. It’ll still keep running in the background, even if you stop early. 

+RBN: We now have AbortController and AbortSignal: both in pretty much every major browser. And also in Node. But no way to manage it from within the ECMAScript language itself. And we did have a discussion a couple years back on the possibility of introducing a symbol-based protocol for cancellation that could be adopted as part of AbortController and AbortSignal. I don’t want to block—and I saw this in the chat: I don’t want to gate this, but I am wondering if it’s something we might want to consider adding in the future. Because even if you `Promise.race` the result of `fromAsync` and stop early, it’ll still keep running in the background.

JSC: Yes, thank you. Your point is well taken. This does make it easier to write bad code that might block a long time. 
But it’s already easy in the first place. I would say this is a convenience function more than anything. @@ -892,56 +914,57 @@ BT: Interesting to note that the reason why we don’t actually have a method— SYG: No concerns with Stage 1, certainly. Could you please go back to the spec slide? I had missed what you had planned to do with @@species. If it’s updated, I cannot see. Maybe you can just directly speak to what you were planning with @@species. Does it async from it right now? -JSC: As I recall, `asyncFrom` doesn’t even mention @@species. It does not address species. I don’t remember `Array.from` even mentioning @@species either. +JSC: As I recall, `asyncFrom` doesn’t even mention @@species. It does not address species. I don’t remember `Array.from` even mentioning @@species either. -SYG: Oh, sorry. This is a static thing. Yeah. Okay. Yeah. It’s a transferable factory method, right? Okay. Yeah. I retract my question; never mind. +SYG: Oh, sorry. This is a static thing. Yeah. Okay. Yeah. It’s a transferable factory method, right? Okay. Yeah. I retract my question; never mind. YSV: Yeah, so I’m just thinking a lot about this and one thing that I think works well in iterator helpers is we can effectively scope the number of async elements that are being taken. So this can allow people to say take five elements out of a certain stream and just operate on those, giving you a snapshot of what you’re working with. So that might actually address the size complaint—and it makes me actually lean towards iterator helpers here as being the right solution…but I think that this is just something we want to spend a bit more time with, and really understand what we want to get out of this API. How it should communicate to developers how it should be used. And maybe we can find some good common ground there later on. Because I feel like…One thing that I’m a little worried about with iterator helpers is people who are unfamiliar with working with iterators and generators. 
They might not be expecting all of the different kinds of behavior you can end up with. So just thinking aloud. JSC: Yes. Thank you for raising those points. Please, feel free to open an issue on the repository, or I could do it and ping you, and we could talk more about its relationship to this proposal with iterator helpers and ramifications for teaching. -YSV: Yeah. Yeah, just something to think a little bit about. I mean, I’m hoping to take iterator helpers to Stage 3 in the next meeting. So probably we want to do that sooner rather than later. Maybe, what we’ll do is we take `toArray` and pause it. I don’t know, Michael [MF?], what your thoughts are there. But yeah, just down here down here for sure. +YSV: Yeah. Yeah, just something to think a little bit about. I mean, I’m hoping to take iterator helpers to Stage 3 in the next meeting. So probably we want to do that sooner rather than later. Maybe, what we’ll do is we take `toArray` and pause it. I don’t know, Michael [MF?], what your thoughts are there. But yeah, just down here down here for sure. -JSC: From my perspective, having both `toArray` and `array.fromAsync` isn’t a big deal, because `Array.from` already exists, but we can definitely hash this out more. +JSC: From my perspective, having both `toArray` and `array.fromAsync` isn’t a big deal, because `Array.from` already exists, but we can definitely hash this out more. -YSV: Yeah. I think I’m also leaning towards we should have both to be honest. Yeah. +YSV: Yeah. I think I’m also leaning towards we should have both to be honest. Yeah. -MF [?]: Yeah, I agree. I don’t think it’s a problem to have both. +MF [?]: Yeah, I agree. I don’t think it’s a problem to have both. JSC: Yes. Asking for Stage 1. RPR: Any objections to Stage 1? Any support for Stage 1? We like to ask for support. All right. Congratulations. You have Stage 1. Thank you very much, everyone. 
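YSV’s suggestion of scoping how many elements are taken can be sketched by hand. The async-iterator-helpers `take` was still a proposal at this point, so `takeAsync` below is a hypothetical helper that stops pulling after a limit, which keeps an infinite source (RBN’s concern) from allocating forever:

```javascript
// Hypothetical `takeAsync`, standing in for the proposed iterator-helpers
// `take`: stops pulling from the source after `limit` items.
async function* takeAsync(asyncIterable, limit) {
  if (limit <= 0) return;
  let count = 0;
  for await (const item of asyncIterable) {
    yield item;
    if (++count >= limit) return; // exiting also closes the source iterator
  }
}

// An infinite async source; collecting it directly would never finish.
async function* naturals() {
  for (let n = 0; ; n++) yield n;
}

(async () => {
  const first = [];
  for await (const item of takeAsync(naturals(), 5)) first.push(item);
  console.log(first); // [ 0, 1, 2, 3, 4 ]
})();
```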
### Conclusion/Resolution -* Stage 1 achieved +- Stage 1 achieved ## continue labels should not pass through blocks + Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/ecma262/pull/2482) - KG: All right, so someone pointed out recently that, the way the spec is written, if you have a labeled block whose sole statement is an iteration statement, and the body of the iteration statement contains a `continue` which has the same label as the label for the block, the spec says this is legal. And it has said that since ES2015. I don't think that was intentional; implementations don't actually do this, except ChakraCore does something weird. It is not a hundred percent clear to me what the actual semantics are. I think the semantics are that the `continue` completion propagates up to the top of the script where it causes the script to stop executing, but I'm genuinely not sure. So I would like to have consensus for making this syntax illegal. It's a normative change, because the current spec is in some sense coherent, but it is a change to match web reality. YSV: It sounds good. -WH: I agree. +WH: I agree. KG: That's my item. Okay, excellent. -RPR: Have consensus will change. +RPR: Have consensus will change. ### Conclusion/Resolution -* Change is approved + +- Change is approved ## The Realm for the error when tail-calling a revoked Proxy + Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/ecma262/pull/2495) - [slides](https://docs.google.com/presentation/d/1txbE6t69AAufBlKsCF20Gzyq2pUDBxK2Az7XQwljZC0/edit#slide=id.g106f4536d9_0_109) - KG: [slides 2–3] Sorry about this. I did not want to take the committee's time on it. I have a different goal that I am trying to accomplish which requires this change and it is a normative change. KG: This only affects implementations that have tail calls. If you don't care about tail calls, please just tune out the next two minutes. Go about your life. 
Please don't fight about tail calls or anything during this presentation. @@ -956,15 +979,15 @@ BFS: Just for some clarity, if you are faking a revoked Proxy, you couldn't give KG: You can't either way. It is impossible with a fake revoked Proxy, to create the TypeError, in either the realm of the caller of F or in the realm of F itself. That's just not data which is available to you. -BFS: So, the only way you can do that is if that's what JavaScriptCore does. +BFS: So, the only way you can do that is if that's what JavaScriptCore does. -KG: That's right, but I don't want to have that change on the table. +KG: That's right, but I don't want to have that change on the table. BFS: I need a moment to think about this, sorry. KG: Concretely, I want to exclude that because that would imply changes to other engines. Like that would be relevant even for engines which don't implement tail calls. I don't right now want to consider changes that would affect engines which don't implement tail calls. -BFS: Yeah, I think it's fine. I had to take a moment, sorry. +BFS: Yeah, I think it's fine. I had to take a moment, sorry. KG: All right, so I would like to ask for consensus for this change that is on the screen right now. @@ -972,14 +995,12 @@ YSV: Do we have Apple on the call and some Moddable folks? PHE: I'm here. XS only has one realm so that's why you couldn't figure it out because it's not there. Yeah, so no problem. -MLS: I think we’re fine with this. +MLS: I think we’re fine with this. KG: OK, so Moddable and Apple are ok with this. -RPR: I think we've had no objections and positive sentiment. So congratulations. +RPR: I think we've had no objections and positive sentiment. So congratulations. 
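For readers without a multi-realm setup, here is a single-realm sketch of the behavior under discussion. It shows only that calling a revoked Proxy throws a TypeError; KG’s change concerns which realm’s TypeError is used when the call happens to be a tail call, which a one-realm example cannot distinguish:

```javascript
// Single-realm sketch: calling a revoked Proxy throws a TypeError. The
// normative question in this item is only *which realm's* TypeError is
// thrown when the call is a tail call.
const { proxy, revoke } = Proxy.revocable(() => 42, {});

console.log(proxy()); // 42: the proxy forwards the call while it is live

revoke();

try {
  proxy(); // [[Call]] on a revoked proxy throws
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```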
### Conclusion/Resolution -* Change is approved - - +- Change is approved diff --git a/meetings/2021-08/sept-01.md b/meetings/2021-08/sept-01.md index 0003c008..98603f44 100644 --- a/meetings/2021-08/sept-01.md +++ b/meetings/2021-08/sept-01.md @@ -1,9 +1,10 @@ # 1 September, 2021 Meeting Notes + ----- **In-person attendees:** None -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -20,8 +21,8 @@ | Philip Chimento | PFC | Igalia S.L. | | J. S. Choi | JSC | Indiana University | - ## BigInt Math for Stage 1 + Presenter: J. S. Choi (JSC) - [proposal](https://github.com/tc39/proposal-bigint-math) @@ -43,7 +44,7 @@ JSC: There’s some gray zones. Like, I don’t know about `imul`, I don’t kno JSC: [slide 6] The bigger problem is with variadic functions. There are three variadic functions, `min`, `max`, and `hypot`, and `min` and `max` especially are extremely common, but they currently have a definition when you give them no arguments. Imagine if you’re giving `max` an array of BigInts and it possibly could be empty. And when it’s empty, it unexpectedly gives you a Number value. Not a BigInt value. To me, that is effectively an unexpected and implicit type conversion from an array of BigInts to a Number. So hopefully we can all agree that that’s a problem and something to be avoided since our invariant is no implicit type conversion. -JSC: [slide 7] So, right now there is a solution to spec has is having three separate methods for each of the three number variadic methods. This might not be popular with you. It is certainly ugly to me. This is less bad than having `min` implicitly return Numbers sometimes when you might give it an array of BigInts. But perhaps we could put them on the BigInt constructor instead. I don’t know: That raises other questions. Like, do we put everything else on BigInt’s constructor? I don’t know. 
This kind of bikeshedding shouldn’t block Stage 1. Stage 1 is for exploring stuff: whether Math stuff for BigInts is worth it and should be explored, whether we put everything on the `BigInt` constructor, whether we continue putting stuff in `Math` and have it determine whatever. It’s those questions that hopefully you can hash out with me in the issues on the repository. But the variadic thing is my biggest question but right now it shouldn’t block Stage 1
+JSC: [slide 7] So, right now the solution the spec has is having three separate methods for each of the three number variadic methods. This might not be popular with you. It is certainly ugly to me. This is less bad than having `min` implicitly return Numbers sometimes when you might give it an array of BigInts. But perhaps we could put them on the BigInt constructor instead. I don’t know: That raises other questions. Like, do we put everything else on BigInt’s constructor? I don’t know. This kind of bikeshedding shouldn’t block Stage 1. Stage 1 is for exploring stuff: whether Math stuff for BigInts is worth it and should be explored, whether we put everything on the `BigInt` constructor, whether we continue putting stuff in `Math` and have it determine the type from its arguments. It’s those questions that hopefully you can hash out with me in the issues on the repository. The variadic thing is my biggest question, but right now it shouldn’t block Stage 1.

JSC: [slide 8] The specification right now does overload the `Math` operations. It uses the same machinery that was written up by the hard work of everyone who worked on BigInt. There are abstract numeric-type operations already. They’re already used for things like the exponentiation operator. So we reuse that machinery; we extend it for a bunch of other stuff. And so then we just change the original `Math` function properties to use those abstract numeric operations.

@@ -73,7 +74,7 @@ WH: I would not want `exp`. 
That requires you to have e to arbitrary precision i JSC: All right, I will plan to drop that. -JHD: I think the general concept is safe. Strong support for. I think it’s just bizarre that intuitive stuff doesn’t work as far as `max`/`min`. You can greater than `>` or less than `<` a BigInt. And Number mixing is only a problem when there’s precision loss and that doesn’t apply to comparisons. So it just seems absurd to me that `Math.max` doesn’t just accept BigInts. And I haven’t gone through and audited `Math` methods, but I suspect that there’s a few where it just should work simply because there’s *no* good reason why it *shouldn’t*. And then there’s a bunch where it shouldn’t work because there *are* good reasons why it shouldn’t. And I think Stage 1 is absolutely the time to explore that. +JHD: I think the general concept is safe. Strong support for. I think it’s just bizarre that intuitive stuff doesn’t work as far as `max`/`min`. You can greater than `>` or less than `<` a BigInt. And Number mixing is only a problem when there’s precision loss and that doesn’t apply to comparisons. So it just seems absurd to me that `Math.max` doesn’t just accept BigInts. And I haven’t gone through and audited `Math` methods, but I suspect that there’s a few where it just should work simply because there’s _no_ good reason why it _shouldn’t_. And then there’s a bunch where it shouldn’t work because there _are_ good reasons why it shouldn’t. And I think Stage 1 is absolutely the time to explore that. JSC: I would like to second JHD’s thing: I would like all the help that I can get, when it comes to auditing each function, from engine implementers. And from anyone: anyone who knows any mathematicians, engineers, scientists, with regards to what their needs are and what the cost would be. I would err on the side of dropping early. All the transcendentals I will drop in the next week. 
As for `max` and `min` we yeah, so like the problem being when you have zero arguments—someone or me can open an issue on that and we can bikeshed it there. But yeah, that’s hopefully for Stage 1. @@ -99,11 +100,11 @@ JSC: So, with regards to—are you talking specifically about possible functions YSV: Yeah, for example. -JSC: Okay, so that’s what I was getting into when I was talking about, like, formal guarantees: like guaranteeing monotonicity, for instance, or guaranteeing that, for some values, if there’s an integer mathematical value for them, then return them. That’s only for the case for functions that could return irrationals like square root, if we did. So for instance, if we input `101n`, presumably that would be implementation approximated. But should we guarantee that it couldn’t be the same as `100n`? And should *that* be guaranteed to be `10n`? Things like that, those are issues that labelled cross-cutting concerns. +JSC: Okay, so that’s what I was getting into when I was talking about, like, formal guarantees: like guaranteeing monotonicity, for instance, or guaranteeing that, for some values, if there’s an integer mathematical value for them, then return them. That’s only for the case for functions that could return irrationals like square root, if we did. So for instance, if we input `101n`, presumably that would be implementation approximated. But should we guarantee that it couldn’t be the same as `100n`? And should _that_ be guaranteed to be `10n`? Things like that, those are issues that labelled cross-cutting concerns. -JSC: I mentioned that’s *if* square root ends up in the list. I think there probably are use cases. I don’t have them myself. If there’s a lot of input implementation complexity and like even square root, I’d be happy to drop them. But otherwise we could hash this out in the repository. I consider implementer complexity to be a very high priority in the absence of clear use cases. 
+JSC: I mentioned that’s _if_ square root ends up in the list. I think there probably are use cases. I don’t have them myself. If there’s a lot of input implementation complexity and like even square root, I’d be happy to drop them. But otherwise we could hash this out in the repository. I consider implementer complexity to be a very high priority in the absence of clear use cases. -YSV: In the absence of clear use cases I have some concerns. And I would like to see—like we can always add methods later. But introducing spec text that behaves one way for something that doesn’t have a clear use case…people may start to rely on it and we won’t be able to roll it back. +YSV: In the absence of clear use cases I have some concerns. And I would like to see—like we can always add methods later. But introducing spec text that behaves one way for something that doesn’t have a clear use case…people may start to rely on it and we won’t be able to roll it back. JSC: We can search for the cases we can find. I would be happy to drop whatever methods whose cases we cannot find and defer them to later. We could do this piecemeal. @@ -134,15 +135,15 @@ SFC: Seems like a worthwhile problem. BT: That’s consensus for Stage 1. So thank you for that. And great job managing your own queue. ### Conclusion/Resolution + Stage 1 for a more limited set of math functions than originally proposed ## Get Intrinsic for Stage 1 + Presenter: Jordan Harband (JHD) - [proposal](https://github.com/ljharb/proposal-get-intrinsic) - - JHD [showing proposal explainer]: I’d originally hoped to ask for Stage 2 but realized that I have some unanswered open questions, that really would be inappropriate to wait until Stage 2 to resolve. So I will only be asking for Stage 1 today. JHD: The problem here was brought up in the previous meeting, essentially that when you write some code you generally—I mean you have to assume that the environment in which it first runs is good. 
Meaning nobody has maliciously screwed with any of the built-ins or the environment. So everything you can access the first time your code runs is safe or as expected. So this could mean, you know, I’ve run polyfills or I’m in a certain browser or I’ve locked things down with ses or whatever. As long as it’s matching your expectations, your code is good. @@ -165,9 +166,9 @@ SYG: I’ll start with the implementation concerns. So there was a bit you said SYG: And last time there was a solution that was proposed that required architecture: which is, you know, [that] probably V8 should move to some kind of lazy-loading thing. Anyway, instead of trying to have everything on the global to begin with. And that’s still probably the best chance going forward to recoup the memory costs—to not punish, you know, every every context. The thing I would like to caution is that this approach means that this might not get implemented in a timely fashion, because the re-architecture is significant. But, that is to say, the implementation concerns still exist. Independently [the] lazy-loading thing is probably good anyway for the codebase to do, but I can’t promise any kind of timelines there. I know this is just Stage 1, but, right, I just wanted to set expectations. -JHD: And I’ll just say that if the, if implementations as a group are confident they can eventually ship it and they plan to and that they have no, you know, and as long as some implementations or implementations can ship it obviously because of the requirements for advancing stage for I’m comfortable with the personally, because once it’s part of the specification, I can build a polyfill and then that thing can just fall out of usage naturally over time. So I’m thinking about, you know, the next ten years. Not the next you know, ten months. So it’s totally fine to me if there’s a delay, but thank you for sharing that concern. 
+JHD: And I’ll just say that if implementations as a group are confident they can eventually ship it, and they plan to—and as long as some implementations can ship it, which the requirements for stage advancement demand anyway—then I’m personally comfortable, because once it’s part of the specification, I can build a polyfill and then that thing can just fall out of usage naturally over time. So I’m thinking about, you know, the next ten years, not the next ten months. So it’s totally fine to me if there’s a delay, but thank you for sharing that concern. -JWK: So, I have a question for an engine like XS. They need to be small. I think `getIntrinsic` is good. But it seems like we need to add too many strings into the engine because those intrinsics are created by the strings like `%ArrayPrototype%.slice`. I guess that might add too much size to the XS. Maybe we can only add intrinsics that can only be reached by syntax to get the list smaller. +JWK: So, I have a question for an engine like XS. They need to be small. I think `getIntrinsic` is good. But it seems like we need to add too many strings into the engine, because those intrinsics are looked up by strings like `%ArrayPrototype%.slice`. I guess that might add too much size to XS. Maybe we could add only the intrinsics that can only be reached by syntax, to keep the list smaller. JHD: I mean, so there’s a few things to respond to there. So as far as the XS concern, certainly, I’d love to hear from those implementers and confirm. But the individual parts of the dotted string all already exist in the engine. It’s just a question of, you know. I don’t know if that cancels out the concerns. They would have to speak to that. @@ -179,9 +180,9 @@ KKL: From SES and the lockdown perspective, this is great and it would be wonder JHD: So that’s an interesting thing worth exploring.
My understanding is that the current only-syntax-reachable intrinsics are considered by some delegates to be a mistake, and that they have explicitly said that they will work hard to prevent any new ones from being added. So it seems like it’s a finite set that will not grow in the future. So I’m not sure if an enumeration approach is necessary, but it’s certainly something worth looking into. -KKL: No, I agree. Either invariant needs to be preserved, or this feature needs to exist. +KKL: No, I agree. Either that invariant needs to be preserved, or this feature needs to exist. -JHD: Sure. +JHD: Sure. MM: Yeah, I just want to say that there’s something between the only-reachable-by-syntax intrinsics and all the intrinsics. Most of the intrinsics (most by total numbers) can be reached by dotted-path enumeration, using getOwnPropertyNames and starting from the global object. The ones that can’t be reached by dotted-path enumeration, but can still be reached by means other than syntax, are the ones that can only be reached procedurally. There is no generic way to discover them if you don’t know the procedural magic formula, like “create a map and then create an iterator of the map to get the iterator prototype” and things like that. So having an enumeration that covers all of the intrinsics that cannot be reached through a generic procedure, like dotted-path enumeration, is still very important. @@ -209,9 +210,9 @@ JHD: So where you say `uncurryThis`, I call it `callBind`. And I have a package KKL: Yeah, agree with this point [?]. -DE: Yeah, so it’d be great to see this kind of package of proposals laid out. So we can see a broader vision for how integrity can be exposed. +DE: Yeah, so it’d be great to see this kind of package of proposals laid out. So we can see a broader vision for how integrity can be exposed.
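The `uncurryThis`/`callBind` pattern JHD and KKL refer to can be sketched in a few lines of today's JavaScript; the helper names below are illustrative, not any particular package's API:

```javascript
// Capture intrinsics once, at first run, before any later code can tamper
// with them.
const call = Function.prototype.call;

// uncurryThis(fn) returns a plain function that takes the receiver as its
// first argument, so later mutation of the prototypes cannot affect it.
const uncurryThis = (fn) => call.bind(fn);

const arraySlice = uncurryThis(Array.prototype.slice);
const toUpperCase = uncurryThis(String.prototype.toUpperCase);

// Even if code later runs `delete Array.prototype.slice`, the cached,
// call-bound versions keep working:
arraySlice([1, 2, 3], 1); // [2, 3]
toUpperCase("abc");       // "ABC"
```

Writing whole codebases this way (as Node's primordials do) is the labor-intensive status quo that `getIntrinsic` aims to make unnecessary.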
-PFC: This is really important for other software that embeds a JavaScript engine for scripting for the purposes of people writing plugins. A big example of this that I’m involved with is GNOME. People write plugins for the GNOME desktop in JavaScript, and I can say from my experience that this sort of defensive programming where you have to grab the intrinsics beforehand is just not a concern for people writing these plugins—although it should be! Because you can easily crash your GNOME desktop by deleting built-ins off of prototypes. I think if we had a facility for this built into the language, that would bring it to the attention of people who don’t usually think about what happens if you delete an intrinsic off of a prototype. The fact that this facility exists makes it easier for them to think about it. I have a feeling that they’ll use it if it exists, and if it doesn’t, they just won’t realize it’s a problem. +PFC: This is really important for other software that embeds a JavaScript engine for scripting for the purposes of people writing plugins. A big example of this that I’m involved with is GNOME. People write plugins for the GNOME desktop in JavaScript, and I can say from my experience that this sort of defensive programming where you have to grab the intrinsics beforehand is just not a concern for people writing these plugins—although it should be! Because you can easily crash your GNOME desktop by deleting built-ins off of prototypes. I think if we had a facility for this built into the language, that would bring it to the attention of people who don’t usually think about what happens if you delete an intrinsic off of a prototype. The fact that this facility exists makes it easier for them to think about it. I have a feeling that they’ll use it if it exists, and if it doesn’t, they just won’t realize it’s a problem. JHD: Thank you. And as KKL mentioned as well, Node does this. 
They have a primordial pattern which is basically: they pre-create call-bound versions of all the intrinsic functions, and then they laboriously write all their code to use them. Not all of it, but much of it. And that’s because they don’t want the platform to crash if someone types `delete Function.prototype.call`. Thank you for that support. Onto immutability of `getIntrinsic` return values? @@ -231,20 +232,22 @@ SYG: Okay, I think it might still be the case I want to say this in. And it’s JHD: So I completely agree, but I think more users will be impacted by this than ever are affected by Atomics, for example, which is also a very niche use case, even on the Web Platform. I mean, I’m not trying to be hostile with that, but I just think that like, the amount of transitive code that depends on this pattern is very large. -SYG: I think the point I’m trying to make is, I guess, the same point that I made earlier: that if a large part of this is so large, part of this is ergonomic. You want to cache one thing instead of _n_ things. I get that and I hear you there. The other cost is you don’t want to ship this heavyweight Library around, while you might not get around that cost. Anyway, even though you push it to the engine, and that is a thing, I would like a better handle on if it, in fact, has [?] effects on loading time. That might not be acceptable if it’s just a memory and we want to re-architect around that; maybe that’s okay. Thanks. +SYG: I think the point I’m trying to make is, I guess, the same point that I made earlier: that a large part of this is ergonomic. You want to cache one thing instead of _n_ things. I get that and I hear you there. The other cost is that you don’t want to ship this heavyweight library around, and you might not get around that cost even by pushing it to the engine. I would like a better handle on whether it, in fact, has [?] effects on loading time.
That might not be acceptable if it’s just a memory cost and we want to re-architect around that; maybe that’s okay. Thanks. -JHD: That’s very well understood. +JHD: That’s very well understood. -JWK: I suppose it to Stage 1. It’s a worth problem to solve. +JWK: I support it for Stage 1. It’s a problem worth solving. JHD: All right. Do you have any objections to Stage 1 here? BT: I don’t hear any objections. That sounds like Stage 1. Thank you, JHD. Thank you everybody. ### Conclusion/Resolution -Stage 1 + +Stage 1 ## RegExp Feature Parity + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-regexp-features) @@ -258,21 +261,21 @@ RBN: [slide 4] Part of the reason to investigate is support for things like Text RBN: [slide 5] Some of the features that we’ve been investigating include things like the explicit-capture mode. This is a feature that’s available in Perl, PCRE, and .NET among the engines that I’ve currently been investigating. This affects capturing behavior, such that normal capture groups, such as just those with parentheses, are treated as non-capturing groups and only named capture groups are returned as part of the match result. For cases where your project is primarily using named capture groups, this helps reduce memory overhead and reduces the complexity of a regular expression by dropping the `?:` that’s used for what is normally a non-capturing group. -RBN: Another flag that we’ve been investigating is extended mode, which is the `x` mode. This allows you to treat unescaped white space within a regular expression as insignificant. So all white space either needs to use the `\s` or `\ ` to escape a space. This is useful for introducing comments and for creating multi-line regular expressions with the RegExp constructor. There’s a couple notes here, in that Perl has the `x` flag, but it did not treat white spaces in a character class as insignificant in Perl 5.26.
They added the `xx` flag, which does not enable multiline regular expression literals. The only way that you could support multiple line regular expressions currently would be to use a template literal within the regular expression constructor, for example. And this is something that’s available in Perl, PCRE, and pretty much every engine I’ve observed, with the exception of ECMAScript. +RBN: Another flag that we’ve been investigating is extended mode, which is the `x` mode. This allows you to treat unescaped whitespace within a regular expression as insignificant, so all whitespace either needs to use `\s` or an escaped space (`\ `). This is useful for introducing comments and for creating multi-line regular expressions with the RegExp constructor. There’s a couple of notes here: Perl’s `x` flag did not treat whitespace in a character class as insignificant; Perl 5.26 added the `xx` flag for that, and neither enables multiline regular expression literals. The only way that you could support multi-line regular expressions currently would be to use a template literal within the regular expression constructor, for example. And this is something that’s available in Perl, PCRE, and pretty much every engine I’ve observed, with the exception of ECMAScript. -RBN: [slide 6] Other features that have already been investigated are things like possessive quantifiers, which are similar to regular or greedy quantifiers, but prevent backtracking if capture fails. This is useful for performance because of how poorly performing certain regular expressions can be. Especially those that might have a significant amount of backtracking. If you look at the discussion on the repository linked below, you can see some examples of a relatively small regular expression that takes exponential amounts of time based on how many characters are within the pattern, or within the text that you’re trying to match.
One of the advantages of this is that those that are looking to achieve better performance in regular expressions would have the ability to control this behavior. This could be used in current regular expressions regardless of flag as it’s already introducing the plus character as part of a possessive quantifier, which doesn’t conflict with any existing syntax. And again, this is a feature available in almost every single regular expression engine that I’ve investigated. +RBN: [slide 6] Other features that have already been investigated are things like possessive quantifiers, which are similar to regular or greedy quantifiers, but prevent backtracking if capture fails. This is useful for performance because of how poorly performing certain regular expressions can be. Especially those that might have a significant amount of backtracking. If you look at the discussion on the repository linked below, you can see some examples of a relatively small regular expression that takes exponential amounts of time based on how many characters are within the pattern, or within the text that you’re trying to match. One of the advantages of this is that those that are looking to achieve better performance in regular expressions would have the ability to control this behavior. This could be used in current regular expressions regardless of flag as it’s already introducing the plus character as part of a possessive quantifier, which doesn’t conflict with any existing syntax. And again, this is a feature available in almost every single regular expression engine that I’ve investigated. RBN: [slide 7] Another feature that we’re looking into is atomic groups. These are non-capturing groups that are matched independent of neighboring patterns, so it prevents backtracking similar to possessive quantifiers and that allows you to again write regular Expressions that have better performance in specific cases. 
This again, has no conflict with existing syntax because `?>` is currently considered illegal within a regular expression as it’s not a valid group. RBN: [slide 8] Some other features we’ve been looking at are buffer boundaries. These are similar to the `^` and `$` anchors, but in this case, they’re not affected by the multi-line flag. In most engines that support this the `\A` matches start of input, `\Z` matches end of input. Actually, I should say that all engines that have this that I’ve seen, support `\z`, this `\Z` assertion differs in at least one engine where it supports any number of optional new lines at the end of input. But most engines currently support only a single trailing new line. -RBN: [slide 9] Line-ending escapes. This is an escape character sequence that is not supported with a new character class, but it’s supported outside of character class, and it’s designed to match any line ending escape character. So it matches CR+LF, Carriage Return or Line Feed on its own, as well as Unicode line terminators. There is a PR against the repository recently, discussing whether or not this should also indicate that this should match the UTS #18 specification for `\r` within a character class. This just would be an escape for the capital R. That’s usually the case in every engine that’s been tested. This is a feature that, if we considered investigating, would require something like the Unicode `u` flag, as it would be breaking for existing regular expressions. +RBN: [slide 9] Line-ending escapes. This is an escape character sequence that is not supported with a new character class, but it’s supported outside of character class, and it’s designed to match any line ending escape character. So it matches CR+LF, Carriage Return or Line Feed on its own, as well as Unicode line terminators. 
There is a PR against the repository recently, discussing whether or not this should also match the UTS #18 specification for `\R`; this would just be an escape using the capital R, as is usually the case in every engine that’s been tested. This is a feature that, if we considered investigating, would require something like the Unicode `u` flag, as it would be breaking for existing regular expressions. RBN: [slide 10] One feature that I’ve definitely been interested in introducing is modifiers. As I mentioned earlier in the motivations, one of the motivating use cases is the ability to support syntax colorization and TextMate grammars within the browser. TextMate grammars use string-based regular expressions, since they’re primarily written in YAML, JSON, or the PList format that’s also used. All three of these don’t actually support a literal regular expression, so you can’t actually provide regular expression flags to control behavior, such as whether there’s case insensitivity, multi-line, etc. Every single regular-expression engine that I have surveyed, with the exception of ECMAScript, has this capability. So it’s definitely one that I think is useful and powerful. And again, it’s heavily used within TextMate grammars today. So, some of the examples of this are being able to set, which is `?` and then a series of flags; and then unset, which is a `-` and then one or more of those flags. That turns those flags on or off for that pattern until the closing parenthesis (so it applies to all alternatives within the pattern) or until the end of the regular expression itself. There’s also a variation of this that uses a colon followed by a subexpression, applying the flags to just that subexpression. This has no conflict with existing syntax. Certain flags would not be supported with it: you would not be able to control flags such as global, sticky, or the has-indices modifier.
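The TextMate constraint RBN describes is visible in today's API: a string-only format has nowhere to put the flags, which must travel out of band to the `RegExp` constructor. A minimal sketch (the grammar rule below is a made-up example, not real TextMate data):

```javascript
// A TextMate-style grammar stores patterns as plain strings (e.g. in JSON),
// so flags cannot travel with the pattern itself:
const grammarEntry = { match: "\\b(begin|end)\\b" }; // hypothetical rule

// Today, case insensitivity has to be supplied separately, out of band:
const re = new RegExp(grammarEntry.match, "i");
re.test("BEGIN"); // true

// The inline modifiers RBN describes would let the string itself carry the
// flag, e.g. "(?i:begin|end)", with no separate flags argument needed.
```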
RBN: I do want to address a couple comments I’ve seen going through here about what I’m looking for as part of this proposal. It is not specifically wholesale adoption of all of these. It’s an investigation into the individual features that we’re discussing and that I’m bringing up as showing disparity, and whether we can take some of these—as I believe BT coined it when I was talking with him—as RegExp Buffet v2. Some, we may break out into individual proposals. Some, we may choose not to advance at all. But primarily what I want to do is open up discussion on all of these possibilities and features that are common so that we can determine which ones we want to move forward with. -RBN: [slide 11] Getting back into the presentation, another feature that it’s been very useful I found in other engines and other languages is the ability to introduce comments into regular expressions. Regular expressions by nature are very terse and opaque to many users. The syntax is extremely complex and as a result, it can be very difficult to understand exactly what’s going on within a regular expression at times, especially complex ones. Comments, at least in this specific feature, are designed around introducing a comment in line with a regular expression, in that the `(?#` symbol indicates that this is the beginning of a comment group and it ends at the next `)` and allows you to write text that is not considered part of the pattern. This can be used in a regular expression literal or it can be used within the RegExp constructor using a multi-line template literal. Again, this is supported in every single regular expression engine that I’ve tested or investigated with the exception of ECMAScript up to this point. This also would have no conflict with existing syntax. +RBN: [slide 11] Getting back into the presentation, another feature that I’ve found very useful in other engines and other languages is the ability to introduce comments into regular expressions.
Regular expressions by nature are very terse and opaque to many users. The syntax is extremely complex and as a result, it can be very difficult to understand exactly what’s going on within a regular expression at times, especially complex ones. Comments, at least in this specific feature, are designed around introducing a comment in line with a regular expression: the `(?#` symbol indicates the beginning of a comment group, which ends at the next `)` and allows you to write text that is not considered part of the pattern. This can be used in a regular expression literal or it can be used within the RegExp constructor using a multi-line template literal. Again, this is supported in every single regular expression engine that I’ve tested or investigated, with the exception of ECMAScript up to this point. This also would have no conflict with existing syntax. RBN: [slide 12] Another interesting feature is line comments. This is something that is supported within all engines that support the `x` mode flag. It’s not supported within a regular expression literal. Well, it would be, but essentially the rest of the regular expression literal would be considered a comment, because you again can’t have multiple lines. It would be best used with something like a template literal, especially if you’re using String.raw, so that you don’t have to double-escape your character escapes, but it does significantly improve readability for complicated expressions. When `x` mode is on within that regular expression, again, all whitespace is treated as insignificant and the hash character is considered the beginning of a comment when outside of a character class, which means inside of `x` mode the hash character would need to be escaped. @@ -288,7 +291,7 @@ RBN: [slide 17] One of the other capabilities of subroutines is they allow recur RBN: [slide 18] What I’m looking to do is request Stage 1 for investigating the feasibility.
I had considered an approach with some others about the possibility of creating a RegExp-specific TG. At the time, it seemed like there wasn’t enough interest in that from the folks that I was talking with. What I decided to do was put together some interesting features that I think we should pursue or investigate, based on the research that I’ve had. I expect that some of these features won’t be adopted for Stage 2. Some features might require syntax changes and some things that we haven’t listed we might consider adding. I also believe we may eventually break this down into more individual features or more individual proposals. But quite a number of these proposals have specific tie-ins to each other, such as conditionals having cross-cutting concerns with subroutines. The goal with presenting them all together was to ensure that we had the ability to see how they work cohesively. And again, a lot of these features are heavily motivated by the TextMate grammar use case, which was where I started with the RegExp match Indices. Trying to reach a point where editors like VSCode or other code colorizers or parsers in general that use regular expressions have more flexibility and more capabilities that are currently available in other engines, so that they don’t have to fall back to native bindings or Wasm builds of native engines, like Oniguruma. -RBN: [slide 19] At this point, I will go back to the queue and we can discuss any questions that people have. +RBN: [slide 19] At this point, I will go back to the queue and we can discuss any questions that people have. WH: There are a lot of things here. Some of these I think are fairly reasonable. Some are really experimental. Some of the places where you said that these would not break existing grammar, that’s inaccurate in that they would, and I can give some examples. Some of these are really unmotivated. 
I don’t see much of a motivation to support multi-line regular expressions if you can’t do it for literals, and there are good reasons why you can’t do it for literals. @@ -322,7 +325,7 @@ RBN: But again, what I’m looking for for Stage 1 is investigating these featur MF: Okay, I think it’s a possibly slightly inappropriate use of the Stage process here. I agree with you that this probably as a whole would never advance past Stage 1, but I do see as your overarching goal saying we will—typical Stage-1 thing—we will commit committee resources to investigate that. I think that is appropriate. Whether we actually call that a proposal or not—is up to the chairs. -RBN: I have had offline conversations with a number of individuals about whether or not we should consider chartering a technical group to specifically focus on the regular expression sublanguage. Most of the feedback that I received was, I would say, either disinterested or negative about that. But if the committee is more interested in having a specific TG chartered for this, I’m not sure what the process is to do that, but I can also investigate that as well. +RBN: I have had offline conversations with a number of individuals about whether or not we should consider chartering a technical group to specifically focus on the regular expression sublanguage. Most of the feedback that I received was, I would say, either disinterested or negative about that. But if the committee is more interested in having a specific TG chartered for this, I’m not sure what the process is to do that, but I can also investigate that as well. DE: I want to make a process suggestion. I think a formal TG would be a little bit too heavyweight because this is an effort that we’re ramping up, then it will eventually reach a point that we’re happy with—rather than having a standing set of responsibilities forever. What if we made a regular call on the TC39 calendar? 
We could think of it as an ad-hoc subgroup of TC39 people who are interested in regular expression features. Then this group can propose things for Stage 1. I agree with Michael that this is a little bit of a funny—It’s more like a work area than a proposal. Maybe we could record, in the proposal’s repository, the kind of calls and work areas that we have. @@ -338,7 +341,7 @@ DE: Okay, one one last point. The goal would be to work towards parity, and I do RBN: I can agree with that again. My goal isn’t specifically 100% parity. It’s a parity that I’m looking for on common features. There are things that I have researched and [here is a website](https://rbuckton.github.io/regexp-features/) that I’ve been putting together for a while. This originally started as an Excel spreadsheet and it was a comparison of common features between engines, the differences that each engine maintains. It’s not 100% accurate. I’ve been going in and filling in what I can, and I have probably about twelve more engines on my list to eventually go through and add in a lot of these features that I’ve been looking at. For example, there’s features like call-outs which I’m definitely not proposing. That’s the ability to execute code in the middle of a regular expression, backtracking control verbs. -RBN: There’s a lot of these features that I’m not looking for that are in a number of engines, but I’m definitely looking for features that have support across a significant number of engines and are commonly used in practice and would definitely improve developers’ lives. 
So again, not looking for 100% parity, but I am looking for improving the support we have within our regular expression language so that we can get the same types of, in some cases, brevity, or in some cases, additional power that a lot of other engines employ and are commonly used in the motivating use cases I had around: specifically things like TextMate grammar support, balanced bracket parsing, and improving documentation and readability. And improving performance. +RBN: There’s a lot of these features that I’m not looking for that are in a number of engines, but I’m definitely looking for features that have support across a significant number of engines and are commonly used in practice and would definitely improve developers’ lives. So again, not looking for 100% parity, but I am looking for improving the support we have within our regular expression language so that we can get the same types of, in some cases, brevity, or in some cases, additional power that a lot of other engines employ and are commonly used in the motivating use cases I had around: specifically things like TextMate grammar support, balanced bracket parsing, and improving documentation and readability. And improving performance. RBN: I’ll make a quick note that recently on TypeScript, one of my co-workers has a peer from university that had built a tool to analyze the complexity of regular expressions. We were using it on our engine source code to find patterns we had that were poorly performing. As a result we have been making changes and fixes. A lot of these issues that we found would have been addressed through things like possessive quantifiers because backtracking was a significant performance problem. And instead because these don’t exist, we’ve had to rewrite regular expressions and change how we parse a number of things in the compiler to improve performance. 
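Since ECMAScript has neither possessive quantifiers nor atomic groups, the usual workaround for the backtracking blow-ups RBN describes is to emulate an atomic group with a capturing lookahead plus a backreference; a minimal sketch:

```javascript
// /^(a+)+b$/ is a classic catastrophic-backtracking pattern: on a long run
// of "a"s with no trailing "b" it explores exponentially many ways to split
// the run between the inner and outer quantifiers.
const catastrophic = /^(a+)+b$/;
catastrophic.test("aaab"); // true (fast only because the input is tiny)

// (?=(X))\1 emulates an atomic group: lookaheads in ECMAScript are not
// re-entered on backtracking, so \1 consumes exactly what the lookahead
// captured, with no re-splitting.
const atomicish = /^(?=(a+))\1b$/;
atomicish.test("aaab");          // true
atomicish.test("a".repeat(50));  // false, and fails fast
```

A native possessive quantifier (`a++` in other engines) would express the same intent directly, without the extra capture group.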
@@ -350,23 +353,23 @@ CM: First of all, what I’m about to say might sound snarky and I want to apolo BSH: So, for sort of the opposite reasons to what CM said, I think a TG might be a good idea. Mainly because I think there’s not a lot of clarity on what the bar is for what we want to change in the regular expression language [?], which I think we need. It’d be good to have some sort of consensus from interested parties on what kinds of features we are interested in adding, and which ones we are not. I think [there’s?] this sort of consensus on the whole language—but for regular expressions, not so much. And yeah, I agree with what was said earlier—I almost made the same statement—I don’t think parity with other languages is a tractable design goal, but we don’t have any clear design goals. So, defining those would be great. -WH: We already have a subgroup working on regular expressions. Thus I am baffled by the calls to create a subgroup which already exists. The problem with the existing subgroup is that it meets so frequently. It meets every week, which makes it hard to follow. +WH: We already have a subgroup working on regular expressions. Thus I am baffled by the calls to create a subgroup which already exists. The problem with the existing subgroup is that it meets so frequently. It meets every week, which makes it hard to follow. -RBN: I wasn’t aware there was an existing subgroup discussing regular expressions outside of the group discussing the RegExp Set Notation proposal. +RBN: I wasn’t aware there was an existing subgroup discussing regular expressions outside of the group discussing the RegExp Set Notation proposal. WH: Yeah, that’s what I was referring to. RBN: I looked at that as more of a specific feature—as a matter of fact, while researching and planning this, I had been planning to put something on the agenda.
Once I’d finished the majority of the research I was doing, right around that time, that proposal was added. And I’ve looked at that as being a very specific, scoped proposal, and again, if we were considering breaking these down into more specific and scoped proposals, then it feels like that group would wind up expanding in its charter—or, even if it’s not really chartered, expanding in its scope—which might not be in the interest of the champions of that. I’d have to let them speak to that. -WH: They do interact very strongly. Some of the examples you gave in the slideshow would break under the proposed modernized Unicode semantics. +WH: They do interact very strongly. Some of the examples you gave in the slideshow would break under the proposed modernized Unicode semantics. -MLS: So there’s the other languages and the “if they build it, they will come” kind of thing. I think we need to consider the syntax for regular expressions like we consider the syntax for the language itself, and that we syntax must pay for or the feature must pay for the syntax that uses regular expressions that are no longer regular. They haven’t been for a long time. They’re approaching Turing completeness. +MLS: So there’s the other languages and the “if they build it, they will come” kind of thing. I think we need to consider the syntax for regular expressions the way we consider the syntax for the language itself: the feature must pay for its syntax. Regular expressions are no longer regular; they haven’t been for a long time. They’re approaching Turing completeness. MLS: I think this should be driven by developer desire. And you see this in some other languages, where they’ve added features to regular expressions and then deprecated other features when the original ones turned out to be broken or not useful.
So I think we need to be very careful and drive this based upon developer demand. -RBN: And I definitely agree my motivations again, are based on where—the majority of what was presented in these slides—the motivations are based on needs that I’ve seen with in things like the Visual Studio Code editor, or Atom or any of the other editors that use Electron, that have web-based editors that currently rely on TextMate-style grammars that use syntax that JavaScript regular expressions can’t parse. And these are based on—while a lot of these are based on the common denominator between what the engine that’s being used is—in most cases that’s Oniguruma—a lot of these features are very heavily used in other languages, and I found myself constantly having to work around the fact that they don’t exist in regular expressions. And I know I don’t have a precise set of numbers of individuals with specific developer asks, but I know that a number of these features are very useful within day-to-day things. Like atomic groups and possessive quantifiers aren’t going to be used by the majority of developers, but they’re going to be used by the people that need them and have no other option. Things like conditional expressions and modifiers are extremely powerful features. Not having them means that many expressions become more complicated, which means that certain expressions can’t be implemented as a regular expression: they have to be implemented as three or four regular expressions with a lot of complicated logic around them. So the goal of any language is to improve productivity and be terse. I mean there’s multiple other goals. So a lot of these are heavily based on features that are heavily used in other languages that we don’t have. 
+RBN: And I definitely agree my motivations again, are based on where—the majority of what was presented in these slides—the motivations are based on needs that I’ve seen with in things like the Visual Studio Code editor, or Atom or any of the other editors that use Electron, that have web-based editors that currently rely on TextMate-style grammars that use syntax that JavaScript regular expressions can’t parse. And these are based on—while a lot of these are based on the common denominator between what the engine that’s being used is—in most cases that’s Oniguruma—a lot of these features are very heavily used in other languages, and I found myself constantly having to work around the fact that they don’t exist in regular expressions. And I know I don’t have a precise set of numbers of individuals with specific developer asks, but I know that a number of these features are very useful within day-to-day things. Like atomic groups and possessive quantifiers aren’t going to be used by the majority of developers, but they’re going to be used by the people that need them and have no other option. Things like conditional expressions and modifiers are extremely powerful features. Not having them means that many expressions become more complicated, which means that certain expressions can’t be implemented as a regular expression: they have to be implemented as three or four regular expressions with a lot of complicated logic around them. So the goal of any language is to improve productivity and be terse. I mean there’s multiple other goals. So a lot of these are heavily based on features that are heavily used in other languages that we don’t have. -RBN: But I definitely agree that we should focus specifically on the features that are useful. The line-ending escape one is one that I’ve seen come up quite a bit and was the first thing that somebody had a PR to improve documentation around, because they wanted to make sure it was UTS #18 compatible. 
Of all of the things I presented here, the one that I find the least useful, that might not make the cut, but is also the simplest, is things like buffer boundaries. I definitely agree that we want to make sure that whatever we’re building is based on things that developers need and not just everybody. +RBN: But I definitely agree that we should focus specifically on the features that are useful. The line-ending escape one is one that I’ve seen come up quite a bit and was the first thing that somebody had a PR to improve documentation around, because they wanted to make sure it was UTS #18 compatible. Of all of the things I presented here, the one that I find the least useful, that might not make the cut, but is also the simplest, is things like buffer boundaries. I definitely agree that we want to make sure that whatever we’re building is based on things that developers need and not just everybody. MLS: Well. I also think it’s based upon the fact that we can always find somebody that wants something, but we’re introducing complexity to regular expressions in the language. Performance. We’re also introducing complications on regular expression processing in a lot of cases. Regular expressions, there are regular expressions in applications that are used all the time. There are other regular expressions that are used infrequently. So the parsing time of the regular expression itself. These we figured out that figured into the performance of that particular expression since it’s used once or very few times. And so, even adding syntax, even though it doesn’t complexity and execution of the regular expression, needs to be figured in. And as we saw with the, the indices proposal that we didn’t think there’d be some performance implications and there were, and had to go back and modify it. Many of these, I have concerns that we will impact the performance of the existing applications that are using the current features. 
@@ -376,7 +379,7 @@ RBN: So I’m bringing up TextMate as a common use case because it’s not a sta RBN: But these features aren’t specifically geared towards TextMate support. It’s just that it is a very common use case that I see them in. A lot of these other features are very powerful features for doing other types of regular-expression parsing that just again we can’t do today. TypeScript itself doesn’t worry about TextMate. We have a TextMate grammar for VS Code, but we also heavily used regular expressions in a number of cases and again, we suffer from poor performance in regular expressions because of excess backtracking and have had to deep dive into what we’ve we’ve written to find better ways of doing this, given that we don’t have these capabilities in the language. -RBN: So, all of these features are designed for more use cases than just the TextMate case. It’s just the easy go-to because it’s one that I see very often and, well, most developers don’t look at the TextMate grammars. The text developers that I’ve talked to are usually very passionate about the themes that they use in their editors and having support for this in the language that doesn’t require essentially shelling out to another language, because we can’t can’t support these features. At least for the common denominator, features like modifiers and conditionals would be extremely useful. +RBN: So, all of these features are designed for more use cases than just the TextMate case. It’s just the easy go-to because it’s one that I see very often and, well, most developers don’t look at the TextMate grammars. The text developers that I’ve talked to are usually very passionate about the themes that they use in their editors and having support for this in the language that doesn’t require essentially shelling out to another language, because we can’t can’t support these features. At least for the common denominator, features like modifiers and conditionals would be extremely useful. 
SYG: So I’m not saying that I discount the use case of people who want syntax highlighting. I understand that perfectly. Well, what I’m just asking is the usual PM-ey question of how much of this is a problem with TextMate, if that is what remains the main motivating use case. I also believe that these features are very well designed to be amenable to more use cases, but I want this to be use-case driven, like MLS is saying, and if the use case remains TextMate, is it the problem? Not the “problem”, I guess…but is it more productive or easier to change TextMate? I mean we’re a standards body. TextMate is a de-facto standard, that’s something else to work with. But anyway, I think you’ve adequately answered my question. Thank you. @@ -393,10 +396,13 @@ WH: At some point you will want to make decisions. And the question is which gro RPR: Would you like to start by talking to the Set Notation folk? And then see what between you that you think is the most appropriate: either start your own group or expand that group? RBN: I can do that. And at the very least the repo will live where it is and, if necessary, I’ll break this down into others [?]. And this is more of a personal reason for presenting this all at once: not having to maintain fifteen individual proposal repositories. I think I’ll leave it there, and I’ll talk with some folks offline in the Segmentation group, and if anyone else is interested, they can provide feedback on the repository where it’s at. + ### Conclusion/Resolution -* more discussion offline + +- more discussion offline ## Fixed layout objects + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/syg/proposal-structs/) @@ -412,7 +418,7 @@ SYG: Maybe we want to pack memory layout better, because we want more guarantees SYG: Maybe it’ll give you better predictable performance because we no longer have to have the engine have to learn continuously as the program executes. 
What the layout of these objects are as you add and remove properties from them. It may help userland data types, like Complex and other stuff maybe together with operators. -SYG: This is a big scope that is possible to explore, but the interest of this proposal is limited to considering the first two use cases, which I already considered to be quite large. But in particular, this proposal considers the first two to be requirements and at the same time seeks to not preclude the other use cases that folks might be interested in. And for this reason, as you’ll see, when I actually get to the presentation of the actual technical parts, this proposal is intended to be pretty minimal, with a bunch of future-proofing added in. Hopefully so that we can move incrementally to enable some new expressivity sooner than later and build on it as a building block. +SYG: This is a big scope that is possible to explore, but the interest of this proposal is limited to considering the first two use cases, which I already considered to be quite large. But in particular, this proposal considers the first two to be requirements and at the same time seeks to not preclude the other use cases that folks might be interested in. And for this reason, as you’ll see, when I actually get to the presentation of the actual technical parts, this proposal is intended to be pretty minimal, with a bunch of future-proofing added in. Hopefully so that we can move incrementally to enable some new expressivity sooner than later and build on it as a building block. SYG: [slide 3] So, to motivate it better. The first one is shared memory concurrency, and I’ve given a vision talk in the past about why that is important to me, and hopefully to the ecosystem. So the basic idea is as always: Let’s use more cores, but why should we do it via shared memory versus something more principled that doesn’t have data races by construction. 
For example, well, the mega-apps—like GSuite, MS Office, maybe the TypeScript compiler—are running into a performance wall today. And a possible way out could be to give them concurrency sooner than later. These mega-apps and experts will need the expressivity of shared memory, even with something with more guardrails built in. I think this fits with our general approach, with our beginning, to JavaScript language’s general approach to concurrency. @@ -458,9 +464,9 @@ SYG: [slide 21] The stuff that’s going to be hard is obviously the garbage col SYG: [slide 22] The stuff that’s really hard are strings. All the engines have these very complex menageries of string types and string optimizations, such that the string representations mutate in place depending on when things happen. When you flatten ropes, for example, when you concat strings, they get into these rope structures where you don’t actually just copy them, you hook them up into a DAG—but sometimes you need to access the character buffer and when you do, you flatten them. What happens when you flatten these ropes [?] transitions in place to a flat string? Sometimes you cannot apply them, AKA intern, where to duplicate them [?] so that you can compare strings that are duplicated by pointer equality. This gets inserted to a table; that table now needs to be thread-safe when that representation happens in place. Sometimes you even externalize strings, where you move the ownership of the character buffer out of the JS engine into the host, like the HTML engine or something. It’s pretty hard to make these thread-safe and performant. It’s a major challenge. I’ve been working on it for a few months. It’s kind of fun, but it’s actually really hard. This is just to call that out. -SYG: [slides 23–24] And yeah, that’s basically it for the motivation and very rough idea of what the technical solution might look like. 
And I would like to go through the queue and then ask for Stage 1 with details of what exactly I’m asking for on the right-hand side here. 
+SYG: [slides 23–24] And yeah, that’s basically it for the motivation and very rough idea of what the technical solution might look like. And I would like to go through the queue and then ask for Stage 1 with details of what exactly I’m asking for on the right-hand side here.

-KM: I’m still still on board with this. Don’t know what happened. But yet he didn’t get back to you in time. But yeah, now it’s I still I’m a fan of the idea. And I’m happy to co-champion. 
+KM: I’m still on board with this. I don’t know what happened, but he didn’t get back to you in time. I’m still a fan of the idea, and I’m happy to co-champion.

SYG: Awesome. Thank you.
To the urgency question and the speed that I’m envisioning. This slide [slide 22, about strings] is I think the actual thing that might block implementations for a significant amount of time, and that is what I’m working on, and that is not blocked by, you know, standards progress. I think I want to get the ball rolling here. There are many interested parties and let’s try to nail down some design. That is amenable to everybody while this part which I think is the hard part is happening. +SYG: I hear you and I intend to work closely with you. To the urgency question and the speed that I’m envisioning. This slide [slide 22, about strings] is I think the actual thing that might block implementations for a significant amount of time, and that is what I’m working on, and that is not blocked by, you know, standards progress. I think I want to get the ball rolling here. There are many interested parties and let’s try to nail down some design. That is amenable to everybody while this part which I think is the hard part is happening. YSV: Yeah, for sure. For this, I’m going to have our GC folks take a look and work with you on that once they’ve got some cycles to do that. -JWK: Currently on the web, the concurrent programming model is based on post messages instead of memory sharing. Adding a high-level abstraction of shared structs… That means we are encouraging the concurrent programming model based on memory sharing. +JWK: Currently on the web, the concurrent programming model is based on post messages instead of memory sharing. Adding a high-level abstraction of shared structs… That means we are encouraging the concurrent programming model based on memory sharing. SYG: I will say no, so one answer there is that SharedArrayBuffers exist. So we already have shared memory and the other part of the answer is that— -JWK: SharedArrayBuffer is a low-level API and it’s hard to use. +JWK: SharedArrayBuffer is a low-level API and it’s hard to use. 
SYG: I think, at least right now, this is also fairly hard to use even with all the bells and whistles. I imagine that will need here for these to be more ergonomic for power users like function sharing. Opting into this kind of programming is just hard to get right. I’m not seeing the encouragement where, if the encouragement from their syntax were, “Now you can make these objects,” they will run into issues pretty quickly. It is a risk that we might be encouraging a dangerous style of programming, but escape hatches exist. I remain very convinced and I feel strongly about this: escape hatches for these kinds of power app experts [partners?]. Pressure will remain on that front, and this is for them. If you can refer back to the [vision talk I gave about concurrency in general a year ago](https://github.com/tc39/notes/blob/master/meetings/2020-11/nov-16.md#concurrent-js-a-js-vision)—I think the future of concurrency on the web is we need to own up to having just these two concurrency models. This message passing thing, that’s mostly done by race-free construction and shared memory. And it happens. We’re doing shared memory first, but the longer-term vision I have is not this being the primary way to get concurrency on the web where the GSM [?] system, but it is a building where I imagine that we can explain other kinds of objects that can be shared among threads in a safer manner. JWK: Okay, I think it’s fair too. JS should be able to support multiple patterns (like FP and OOP). - + MM: I’m very, very skeptical of this entire direction. The non-shared ones, the struct classes: those actually look very nice for reasons that you didn’t go into at all and seem to be completely outside of your motivations. 
They actually share a lot with what I was trying to accomplish with defensible classes, and I think you’re succeeding where I wasn’t able to figure out how to succeed because you actually got more restrictive than occurred to me, like the fact that they can only inherit from struct classes and that they're initialized all at once. There’s no partially initialized state that’s visible. So, that’s all great. MM: On the concurrency, on the shared things: I think that this is really about the soul of JavaScript, as a character of the language, and what makes it something that lots of regular application developers are able to use successfully, including using JavaScript’s concurrency successfully. The concurrency, like JWK was mentioning, is the message passing concurrency. @@ -496,11 +502,11 @@ MM: There’s no good solutions to those things. Shared-memory multi-threading i MM: And the argument that experts will use this, and regular users can choose not to, just doesn’t hold once there’s an ecosystem. And people are trying to use some high-performance libraries that were constructed by experts to use these features. There is a contagiousness of complexity on the code that just tries to make use of those libraries. So none of this is an argument against Stage 1, you know. Certainly as for Stage 1, I’m fine. But I want to make it very, very, very clear: I really hope we don’t introduce this level of hazard and footgun into the JavaScript language that will really destroy the character of the JavaScript application program. -SYG: Thank you, Mark, for your perspective. It’s somewhat of a philosophical disagreement. We’re perhaps less misaligned than you might think. I think I want the same future you want. Except I don’t see a way around escape hatches. and we can discuss that offline to see how we can further restrict these. I’m operating also under the design principle that shared memory stuff must be very explicitly opted into. 
And this contagion, I share that same concern but this contagion I also feel will be here in an even worse pattern If we do not get ahead of this, in the sense that we did with SharedArrayBuffers by WasmGC. +SYG: Thank you, Mark, for your perspective. It’s somewhat of a philosophical disagreement. We’re perhaps less misaligned than you might think. I think I want the same future you want. Except I don’t see a way around escape hatches. and we can discuss that offline to see how we can further restrict these. I’m operating also under the design principle that shared memory stuff must be very explicitly opted into. And this contagion, I share that same concern but this contagion I also feel will be here in an even worse pattern If we do not get ahead of this, in the sense that we did with SharedArrayBuffers by WasmGC. MM: I was reluctant to approve SharedArrayBuffers. And the reason I approved it is that the pressure from games made it seem like it was inevitable that whether TC39 approved it or not. All the browsers were going to implement it and games were going to use it. And then far, the reason that we’re still in a good place is basically because SharedArrayBuffers has been a resounding adoption disaster. People don’t use it. And hopefully they will continue to be an adoption disaster and anything that makes shared memory multithreading usable will make it more adoptable, which will be a strict backward motion from the current state where people could use it and destroy safety properties, in theory. But right now, at least they’re not. -JWK: One of the primary use cases for shared memory and shared structs are for WebAssembly, WebAssembly needs a shared struct because they need to handle the code compiled from C++ or some other languages. I think it’s acceptable to keep the shared struct inside multiple Wasm threads, but not let them leaked into the JavaScript side. 
Multiple Wasm threads can program by the shared memory and if they want to send the results to JavaScript, they need to go through the message passing. I think that is better to have. +JWK: One of the primary use cases for shared memory and shared structs are for WebAssembly, WebAssembly needs a shared struct because they need to handle the code compiled from C++ or some other languages. I think it’s acceptable to keep the shared struct inside multiple Wasm threads, but not let them leaked into the JavaScript side. Multiple Wasm threads can program by the shared memory and if they want to send the results to JavaScript, they need to go through the message passing. I think that is better to have. SYG: I disagree and I think real products would as well. @@ -510,13 +516,13 @@ SYG: We’re out of time: two minutes. Unfortunately, the memory model question, WH: I’d like to ask a question: In your slides when you access that `x` field, are all of those accesses atomic or not? -SYG: Do you mean sequentially consistent? Or do you mean memory ordering? +SYG: Do you mean sequentially consistent? Or do you mean memory ordering? -WH: Memory ordering. +WH: Memory ordering. SYG: Yes, they are atomic in that they won’t tear, but they are unordered. The current intention is to also extend atomics in this way. I didn’t show this level. If you need GC access, you can do this. -WH: Okay, in this case, I do not believe that this is safe. +WH: Okay, in this case, I do not believe that this is safe. SYG: I think it is, but let’s check. @@ -530,32 +536,34 @@ SYG: Okay. Sorry, [where was?] I? Let’s continue this chat. This needs to be w JWK: I like the non-shared parts, but the shared parts are skeptical. I think Stage 1 is okay though. -WH: Yeah, I do not believe this can be implemented efficiently for the reasons I stated, but you’re welcome to explore it. +WH: Yeah, I do not believe this can be implemented efficiently for the reasons I stated, but you’re welcome to explore it. 
MM: Yeah, I reluctantly do not object. RPR: Okay, I did hear one positive from DE there and another positive from LEO and a few skeptics that are not blocking. So I’ll conclude that we have Stage 1, congratulations. + ### Conclusion/Resolution -* Stage 1 +- Stage 1 ## Resizable buffers + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-resizablearraybuffer/issues/68) SYG: It's just an FYI of a normative bug that we fixed in the Resizable Buffers proposal that my teammate Mario found during implementation. Resizable buffers allowed the buffers to be resized. So It is possible that you resize the buffers such that the typed array view on top becomes exactly at the bounds that you resize it to. -SYG: So the normative issue we found is that, when you resize and underlying buffers, such that the view becomes zero length, where the bounds of the view on top kind of CIS [?], exactly at the bounds of the underlying buffer, this the spec draft was throwing out of bounds. For a variety of reasons you can read on the issue here, this didn't make as much sense as I had thought. We already allowed zero length. Like the race to begin with. So this is a this is a very small change to basically change a “≥” sign to be a ">" sign such that these kinds of these particular, kinds of typed arrays considered in bounds, even though they have a length of 0 and they don't throw when you should have access them. Because the current idea is that out of bounds raised on top of resizable buffers behave like typed arrays with detached buffers and making these kinds of zero-length typed arrays behave like detached buffers is undesirable. Any concerns with this change? +SYG: So the normative issue we found is that, when you resize and underlying buffers, such that the view becomes zero length, where the bounds of the view on top kind of CIS [?], exactly at the bounds of the underlying buffer, this the spec draft was throwing out of bounds. 
For a variety of reasons you can read on the issue here, this didn't make as much sense as I had thought: we already allowed zero-length views to begin with. So this is a very small change, basically changing a “≥” sign to a ">" sign so that these particular kinds of typed arrays are considered in bounds, even though they have a length of 0, and they don't throw when you access them. The current idea is that out-of-bounds views on top of resizable buffers behave like typed arrays with detached buffers, and making these kinds of zero-length typed arrays behave like detached buffers is undesirable. Any concerns with this change?

RPR: You have consensus.

### Conclusion/Resolution

-* Consensus reached
-
+- Consensus reached

## Incubation call chartering
+
Presenter: Shu-yu Guo (SYG)

SYG: We actually worked through the backlog of chartered incubation calls from meetings, that we have an empty charter right now. So before I nominate some early stage proposals, does anyone with an early stage proposal want to have an incubator call? For the newcomers, incubator calls are our calls that happen bi-weekly at different times, depending on their scheduled time according to the stakeholders, where we try to get a faster feedback loop between the champions and stakeholders within TC39. We have these calls, where the champions preferably ask for feedback on specific items about the designer concerns of the proposal, and you hash them out in a high-bandwidth setting in a call outside of plenary. The idea is we give some sanctioned time so we free up plenary time for more important stuff. Any interested parties?

@@ -570,11 +578,12 @@ SYG: Sounds like a good topic. And the one I was planning to call out, if GB and
We’d be really grateful to you. -SYG: Thanks GB. I think given our faster cadence we usually have realistically just time for two calls—so with proxy performance and Wasm–JS interaction—that should fill out the time until the next plenary, in which case we can put strings or the well-formedness of strings [?]. If you’ve got a rat [?] interested. Thank you. Look out for the new charter [?] and sticky [?] stuff from the Reflector, scheduling the calls. +SYG: Thanks GB. I think given our faster cadence we usually have realistically just time for two calls—so with proxy performance and Wasm–JS interaction—that should fill out the time until the next plenary, in which case we can put strings or the well-formedness of strings [?]. If you’ve got a rat [?] interested. Thank you. Look out for the new charter [?] and sticky [?] stuff from the Reflector, scheduling the calls. RPR: Thank you for running these incubation calls. I think they’ve been very successful at lightening the load on the plenary, which has been really good this year. ## Conclusion + RPR: We are complete. We got through more items than we originally had planned. Thank you to everyone who got through things earlier than their time box. It’s the end of the meeting. [chat] @@ -582,4 +591,3 @@ RPR: We are complete. We got through more items than we originally had planned. RPR: I will also apologize that I was due to provide an update on scheduling next year. I didn’t get time to prepare the slides on that. I will say that we’ve taken the feedback into account and the one thing I say we are looking to do for next year’s schedule is to reduce our eight meetings to six meetings. You can see the feedback on that is all in the spreadsheet. 
[chat] - diff --git a/meetings/2021-10/oct-25.md b/meetings/2021-10/oct-25.md index 90e537b5..0a7c723c 100644 --- a/meetings/2021-10/oct-25.md +++ b/meetings/2021-10/oct-25.md @@ -1,7 +1,8 @@ # 25 October, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -17,8 +18,8 @@ | J. S. Choi | JSC | Indiana University | | Surma | SUR | Google | - ## Opening & Meeting Logistics + Presenter: Aki Braun (AKI) AKI: I am proud of all of you who got up early. Hi. I am Aki. I know it's 10:00 a.m. In London, but I'm in San Diego and I haven't gone to bed yet, so I apologize in advance for… everything. I hope the Europeans in the house are enjoying your moment of the West Coast Americans struggling with really terrible time difference. You deserve schadenfreude. I am a delegate of PayPal and I am co-chair of this august body. I am joined by Brian Terlson from Microsoft and Rob Palmer from Bloomberg, and then we have chair emirati, Yulia Startsev and Myles Borins though he is officially done. This is my second last plenary as well. I will be joining this group of former chairs who can't entirely quit you. I assume by everyone's presence here that you have filled out the sign-in form. It a requirement of ECMA's bylaws that we keep attendance at our meetings. If you haven’t, please do. We need to collect information and the one time I had to do it manually it really sucked a lot. Don't do that to me. @@ -39,49 +40,51 @@ Has everybody had an opportunity to review last meeting's minutes? Do we have ap [Slides](https://docs.google.com/presentation/d/1OWUp5kizgC0L3Dkf2IrE2rwgXhlawnMRBCnGtwLJ7YA/edit#slide=id.p) -RPR: In 2021, we did a year of eight remote meetings this year. We actually increased the numbers as I'm sure everyone knows because people wanted to meet more often. 
This year we received negative comments when we had TC39 two months in a row, and I thank people who gave feedback on the reflector. Thank you, everyone who contributed to this spreadsheet with all the various feedback. That was really detailed, and we saw that the majority of people wanted fewer meetings. So that is the big change that we're going to make for the next year. Even though we tried eight meetings this year, for next year we're going to go back to six meetings. So this is the plan ahead. For remote purposes, and so that people are not on online all day long, we're sticking to the four-day structure with a couple of sessions. I will say we did have feedback to say, could this be flexed, maybe one of the sessions to cope with other time zones, and I think that that's definitely on the cards for next year. We haven't actually specify that precisely yet. We're just doing dates for now, but one of the other things we are going to try for 2022, and I'll say "try" because who knows how the world will go, is that we do have some volunteer hosts for real-life meetings just like we used to do. I think a lot of people here have been to some of those real-life meetings, but we used to do six a year, which was a lot. So we're not proposing that. So this is 2, maybe 3, 1 for each continent, real-life meetings starting in June 2022 with a particular opportunity to join up with OpenJS in hosting event. Obviously we're not going to give up what we've learned about remote meetings. So it is essential that any host must offer high-quality - let's call them hybrid meetings, meetings where you have some people in the room with microphones. So if everyone can hear everyone. And also, even though we're saying we're bringing these back, this does not mean that anyone should feel any pressure whatsoever to attend. I'm quite sure there will be some people who are not comfortable, with the way the world is going, attending in real-life or remotely, that's no problem. 
I'm going to do everything we can to make sure you have a good time remotely. And an obviously we know that people are going to want to know what will the host will be doing with safety measures so we will make sure that comes out. +RPR: In 2021, we did a year of eight remote meetings. We actually increased the numbers as I'm sure everyone knows because people wanted to meet more often. This year we received negative comments when we had TC39 two months in a row, and I thank people who gave feedback on the reflector. Thank you, everyone who contributed to this spreadsheet with all the various feedback. That was really detailed, and we saw that the majority of people wanted fewer meetings. So that is the big change that we're going to make for the next year. Even though we tried eight meetings this year, for next year we're going to go back to six meetings. So this is the plan ahead. For remote purposes, and so that people are not online all day long, we're sticking to the four-day structure with a couple of sessions. I will say we did have feedback to say, could this be flexed, maybe one of the sessions to cope with other time zones, and I think that that's definitely on the cards for next year. We haven't actually specified that precisely yet. We're just doing dates for now, but one of the other things we are going to try for 2022, and I'll say "try" because who knows how the world will go, is that we do have some volunteer hosts for real-life meetings just like we used to do. I think a lot of people here have been to some of those real-life meetings, but we used to do six a year, which was a lot. So we're not proposing that. So this is two, maybe three real-life meetings - one for each continent - starting in June 2022 with a particular opportunity to join up with OpenJS in a hosted event. Obviously we're not going to give up what we've learned about remote meetings.
So it is essential that any host must offer high-quality - let's call them hybrid meetings, meetings where you have some people in the room with microphones, so that everyone can hear everyone. And also, even though we're saying we're bringing these back, this does not mean that anyone should feel any pressure whatsoever to attend. I'm quite sure there will be some people who are not comfortable, with the way the world is going, attending in real life; remotely, that's no problem. I'm going to do everything we can to make sure you have a good time remotely. And obviously we know that people are going to want to know what the host will be doing with safety measures, so we will make sure that comes out. RPR: Yes, so then for the actual times and dates, so we've got these remote meetings. So remote Seattle, remote New York, and then it's June to coincide with OpenJS world. And we're suggesting that that's the first time we could consider this. We will obviously reconfirm before that time. And then the other real-life meeting that we've got reserved is Tokyo. And so that's a reinstatement of the one we never had two years ago, and then we're still in a few discussions to see if we can get a Europe one. We'll see how that goes. For the June one, obviously, because this is a bit special, the first real-life meeting in what will end up being over two years, here are a few more details we've got at the moment. So this is: we've been kindly invited by OpenJS to have our event alongside. I think, you know, we've done this kind of thing before where there are conferences going on and they're expecting to have safety measures and we'll find out more about that. Daniel also highlighted that there's some overlap with the Jewish festival of Shavuot. So we're going to make sure we try and be compliant with that by serving cheesecake. He's told me that this somehow makes it compliant, and I believe it. I want to believe.
And then luckily we can, you know, we get a conference with this as well, so it's a nice two-way thing. And quite often with these things we have a panel if you wish to step up, but of course, we're months and months away from this happening, so don't book flights yet! Who knows how the world will go, but the key is that we do appreciate real-life meetings. I think we're probably not going to go back to *all* real-life. And if anyone has any comments on this, the chair group is eager to hear and so is Jory. Jory is eager to hear what TC39 thinks about this. So we have a reflector post for this if you have any feedback there. ## Secretary's Report + Presenter: Istvan Sebestyen (IS) - [slides](https://github.com/tc39/agendas/blob/HEAD/2021/tc39-2021-053.pdf) -IS: Yes, really difficult. Okay. Anyways, I'm looking at the watch yet. So I try to be as quick as possible because also the content is very much similar to the ones that you have seen before. So it is really more for reading. -Next slides: The list of the relevant TC39 and Ecma GA new documents since the last TC39 meeting. I will just show you. -Next slide: status of the TC39 meeting participation. Still very high, 80 remote participants, Next slide: the latest standard download and access statistics. Pretty similar to what we have seen before. So TC39 is definitely the most successful and most searched TC regarding access to the standards and downloading the standards Etc. -Next slide: last but not least. So this is new, it is short status of the project to generate from the master HTML version to a good PDF format of the TC39 standards. So, that's new. +IS: Yes, really difficult. Okay. Anyways, I'm looking at the watch. So I try to be as quick as possible because also the content is very much similar to the ones that you have seen before. So it is really more for reading. +Next slides: The list of the relevant TC39 and Ecma GA new documents since the last TC39 meeting. I will just show you.
+Next slide: status of the TC39 meeting participation. Still very high, 80 remote participants. Next slide: the latest standard download and access statistics. Pretty similar to what we have seen before. So TC39 is definitely the most successful and most searched TC regarding access to the standards and downloading the standards, etc. +Next slide: last but not least. So this is new, it is a short status of the project to generate from the master HTML version a good PDF format of the TC39 standards. So, that's new. Next slide: And then regarding the TC39 GA and Execom meetings, that will be very, very fast because you've already seen it regarding the GA meeting and the execom meeting for next year. Nothing has changed. So it is just a confirmation. So this is what we have on the agenda for this presentation. -IS: Now, The latest TC39 document, there is for those who are only watching the GitHub and not accessing the Ecma TC39 file server.
That is basically a duplication of what is going on there for archival purposes, so it is not really necessary to go through all the documents here. The purpose of that TC39 file server is mainly for long-term archival purposes and for information purposes to the other people who are not TC39 delegates. Now regarding the GA document list, I have actually three slides full of them, and this has to do with the fact that I have selected those which I thought might be of interest to you. These are mostly related to TC39; like the first one, on the trademark registration of ECMAScript. So this has been extended again for 10 years. And also we have done the same thing for the UK, etc. So here you can also find the trademark registration for the European Union. So this is the third one, etc. IS: Okay, then the TC39 meeting participation. So these entries are for the past meetings. So the latest one is for the August 2021 remote meeting, actually quite good. So 80 people, all of them of course remote. Also, the number of the companies they represent is 28, so I think it is quite good. And now regarding the October 2021 meeting we will see how many people will be there in this meeting. And how many companies. But, as always, it is a great turnout and actually this represents in my opinion 60% of all Ecma current activity. So currently in Ecma we have one huge technical committee - and this is TC39 - and then all the rest are significantly smaller. -IS: Okay, regarding downloads, the tendency is very much the same. Again, the total number of the Ecma standards - so you can see it - is ours are about twenty-seven thousand and more than the half. is coming from TC39 in spite of the fact that the current PDF format has not the best quality, i.e. for ECMA-262. By the way ECMA- 402 is OK now, this has been fixed by hand by the Ecma office.
So in total, TC39 downloads represent more than the half of all downloads. Now, what is perhaps interesting to see that the JSON standard (ECMA-404) download is going up. So now it - starting from the beginning of the year - more than 10,000. So this is an interesting trend. Also there is a growing interest in the ECMA-402. And the ECMA-414 is this, this is the one that has been fast-track to ISO/JTC1. We did that a few years ago, in order that we don't have to Fast-Track each and every time, every year, the two important technical standard, the 262 and the 402, so you can see its access and download is much less, but from the tendency point of view, it is the same. -Next slide: Now here it would not go into details you can just read it. On the left hand side is the access for the HTML versions. So this is the master quality and the access is rather high. And as you can see also already for the 12th edition, which is the 2021 edition. So it is also getting up and up. So this is quite good and here on the right hand side are the download figures, still already 70,000 downloads of the not terribly, Good quality format which is what we currently have. Then here next slide: This is the ECMA-402. So, as I mentioned, it is also going up. So that is a good thing. Also the access much higher than the download on the right hand side, the download. +IS: Okay, regarding downloads, the tendency is very much the same. Again, the total number of the Ecma standard downloads - so you can see it - is about twenty-seven thousand, and more than half is coming from TC39, in spite of the fact that the current PDF format does not have the best quality, i.e. for ECMA-262. By the way ECMA-402 is OK now, this has been fixed by hand by the Ecma office. So in total, TC39 downloads represent more than half of all downloads. Now, what is perhaps interesting to see is that the JSON standard (ECMA-404) download is going up. So now it is - starting from the beginning of the year - more than 10,000.
So this is an interesting trend. Also there is a growing interest in the ECMA-402. And the ECMA-414 - this is the one that has been fast-tracked to ISO/JTC1. We did that a few years ago, in order that we don't have to fast-track each and every time, every year, the two important technical standards, the 262 and the 402, so you can see its access and download is much less, but from the tendency point of view, it is the same. +Next slide: Now here I would not go into details, you can just read it. On the left hand side is the access for the HTML versions. So this is the master quality and the access is rather high. And as you can see also already for the 12th edition, which is the 2021 edition. So it is also getting up and up. So this is quite good and here on the right hand side are the download figures, still already 70,000 downloads of the not terribly good quality format which is what we currently have. Then here next slide: This is the ECMA-402. So, as I mentioned, it is also going up. So that is a good thing. Also the access is much higher than the download; on the right hand side, the download. Next slide: Now this is the new message where we are. Now, for the “HTML to PDF good quality conversion project” we didn't get too far so far. There was some exchange of messages between the Ecma Geneva office and some people of the TC39 folk, but as I see it from the outside, no concrete progress yet for the selection of the tool - as far as I know. So parallel to that, I contacted Allen W-B, who is always my very good TC39 historic advisor. Since he was one of the last ones from the editors who actually produced a good quality PDF version, I asked him what he would suggest now. Allen has proposed a program called PDFreactor, which is a conversion software tool from a German software company in Saarbrücken, and we have already made contact with them and downloaded the program, which is available also for testing.
And we discussed with the Ecma office that Patrick Charollais will also try to test it, and then if this works we could get the software tool for 2,000 Euros for 4 licenses. I think this is a reasonable option. If it works, fine. I have also informed the Ecma Chairs and have sent them the user guide for the program etc. So, according to the user guide it seems to be a good tool. So I hope that something comes out of that, but of course, there is no guarantee. So this is the status where we are on that project, according to my knowledge. -IS: Now next slide: this is just a copy of the Robs Reschedule slide for 2021. Actually, we are now in the long run. So this is nothing new. Rob already mentioned the main points. So my additional point, which is my personal opinion, that “local only” meetings of the good old times will likely not come back. For different reasons, companies will be less “travel friendly” but more “working at home”, Etc. So what I am expect that such “local only” meetings that we have had will also not come back. However, the “remote only” meetings that we are exercising now for two years l think we very happy with their quality. The tools are good. In my opinion, we can hear each other well - also for the audio point of view. We can make good progress with the current conferencing tools. So this will in my opinion stay. And actually the question is what about the so-called “mixed meetings”, i.e. we have “local” and also “remote participation”? In my opinion before we start with the first meeting next summer we should do a little bit of tool testing, like we also did in the past. Just as a note, three years ago when we were forced to try to bring in remote contributions because some people could not come and present their stuff in person so that worked well too. But for a remote participant to follow the “in person part” of actual meeting was difficult. Always the audio part is rather poor. Sometimes you hear it, Sometimes not.
Or you here only one person, but not an other. So I think we have to test that, so that we have equal quality with what we have today with our “remote only” meetings. And the same thing is also sometimes - but less frequently than with audio - also with different presentations of the associated slides. So my point is that the “mixed meetings” can not be worse than the current “remote meetings”. +IS: Now next slide: this is just a copy of Rob's reschedule slide for 2021. Actually, we are now in the long run. So this is nothing new. Rob already mentioned the main points. So my additional point, which is my personal opinion, is that “local only” meetings of the good old times will likely not come back. For different reasons, companies will be less “travel friendly” but more “working at home”, etc. So what I expect is that such “local only” meetings that we have had will also not come back. However, the “remote only” meetings that we have been exercising now for two years - I think we are very happy with their quality. The tools are good. In my opinion, we can hear each other well - also from the audio point of view. We can make good progress with the current conferencing tools. So this will in my opinion stay. And actually the question is what about the so-called “mixed meetings”, i.e. we have “local” and also “remote participation”? In my opinion before we start with the first meeting next summer we should do a little bit of tool testing, like we also did in the past. Just as a note, three years ago we were forced to try to bring in remote contributions because some people could not come and present their stuff in person - that worked well too. But for a remote participant to follow the “in person part” of the actual meeting was difficult. Always the audio part is rather poor. Sometimes you hear it, sometimes not. Or you hear only one person, but not another. So I think we have to test that, so that we have equal quality with what we have today with our “remote only” meetings.
And the same thing is also sometimes - but less frequently than with audio - the case with the presentations of the associated slides. So my point is that the “mixed meetings” cannot be worse than the current “remote meetings”. IS: Now regarding the GA venues and dates. It is exactly the same. The only change now is apparently the December 2021 meeting. It is now in the Ecma office. It is planned to be a mixed meeting in the sense that if somebody wants to go - in spite of the ongoing Corona situation in Geneva - then you can take that challenge. You can go to the meeting, but you have to tell the Ecma office that you want to participate. Because if too many people are taking the challenge, then they have to go to a hotel with a larger meeting room, right. Etc. So it will be in Geneva. And this is for the December meeting. For the June meeting it is still the same; it is also Switzerland, but all the details are to be determined, whether it is a face-to-face meeting or not or whatever. The same also for the December meeting. The date is absolutely the same. Now regarding the execom meetings: the times for the execom meetings - which also review the technical committee work, and in which the TC chairs are strongly encouraged and invited to participate and also to ask and answer questions, etc. - are completely unchanged. There are also other execom meetings which do not involve TC work. Very often they are set up rather on an ad-hoc basis. So we don't know anything about them. Maybe they will happen. Maybe not. IS: And this is the end of the presentation. And so thank you very much for your attention, and if you have any question, please ask me now or write me an email and I will be more than happy to answer them. Or do it over on GitHub if everybody should be informed about the content. So thank you very much. So that's the end.
## ECMA262 Editors' Update + Presenter: Kevin Gibbons (KG) - [slides](https://docs.google.com/presentation/d/10jU7wV9AX7ICTu1ewWixuihgg0c6LDhslFbxwSM23yU/edit) -KG: There have not been a lot of major editorial changes since the last meeting. We're starting to use these structured headers, these machine readable headers in little DSL in more places. We marked `with` as legacy per consensus at the last meeting. One thing I wanted to call out is that all the old editions of the specification, which had been hosted on the GitHub for a while and disappeared after a year are now back and should be back permanently. So those old links should be fixed if you happen to run into those. +KG: There have not been a lot of major editorial changes since the last meeting. We're starting to use these structured headers, these machine-readable headers in a little DSL, in more places. We marked `with` as legacy per consensus at the last meeting. One thing I wanted to call out is that all the old editions of the specification, which had been hosted on the GitHub for a while and disappeared after a year, are now back and should be back permanently. So those old links should be fixed if you happen to run into those. -KG: Several normative changes. Running through them very quickly. They're just things that got consensus last couple meetings. We made octal literals normative. Object hasOwn and class static blocks were at stage four and have been landed. And then these last two are things that the editors did where we assumed that the intent was clear that the current specification was incorrect. So, 2523 is, strictly speaking the math methods just just said that they returned an implementation defined value and didn't specify that it had to be a Number. We changed that to be a Number on the assumption that was what the committee had intended.
And then 2505, we discovered that the `ContainsArgument` sdo which is used to give an early error when you refer to the arguments binding in a class field, that isn't supposed to descend into nested functions, had omitted the async generator cases, so that it did descend into async generators, which meant that it was strictly speaking an early error to refer to arguments in an async generator in a class field, which was kind of silly and it's not what anyone except engine 262 did. So we changed that on the assumption that that was always the committee's intent. But if anyone objects to these two normative changes the editors made on our own, please speak now. And then some other really tiny stuff I'm not gonna to get into. +KG: Several normative changes. Running through them very quickly. They're just things that got consensus the last couple of meetings. We made octal literals normative. Object hasOwn and class static blocks were at stage four and have been landed. And then these last two are things that the editors did where we assumed that the intent was clear that the current specification was incorrect. So, 2523: strictly speaking, the math methods just said that they returned an implementation-defined value and didn't specify that it had to be a Number. We changed that to be a Number on the assumption that was what the committee had intended. And then 2505, we discovered that the `ContainsArguments` SDO, which is used to give an early error when you refer to the arguments binding in a class field, that isn't supposed to descend into nested functions, had omitted the async generator cases, so that it did descend into async generators, which meant that it was strictly speaking an early error to refer to arguments in an async generator in a class field, which was kind of silly and it's not what anyone except engine262 did. So we changed that on the assumption that that was always the committee's intent.
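For concreteness, the 2505 situation can be sketched like this (`C` and `f` are illustrative names, not from the meeting; the point is that `arguments` appears inside a nested async generator, which the ContainsArguments early-error check is not supposed to descend into):

```javascript
class C {
  // `arguments` here is inside a nested (async generator) function, so the
  // class-field early error for `arguments` should not apply. The buggy
  // spec text accidentally made this an early error; after the fix it
  // parses, and `arguments` refers to the async generator's own arguments.
  f = async function* () {
    return arguments.length;
  };
}

const it = new C().f(1, 2, 3);
it.next().then(({ value }) => console.log(value)); // logs 3
```

This matches what engines other than engine262 already did, which is why the editors treated the fix as matching the committee's intent.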
But if anyone objects to these two normative changes the editors made on our own, please speak now. And then some other really tiny stuff I'm not gonna get into. KG: There's a bunch of upcoming work, all of the editors are actually working on different projects right now. So I am working on an auto formatter that will just be pedantic about the white space and stuff, but instead of just complaining about it, it will fix it. Hopefully that will be ready soon. MF is working on completion record reform. So, this is a thing that we have been wanting to do forever and are finally in a good place to approach it, where the spec said that every algorithm returned a completion record, which is basically like the Result type from Rust, for example, if you're familiar with that. It represents either the value or a change in control flow such as an exception. For a lot of abstract operations, it didn't really make sense for them to do that. They were operations that inherently could not throw. In many cases they were even static operations. But those were still supposed to return a completion record. It was implicitly wrapped and then implicitly unwrapped. That sort of implicitness is often very confusing. So we're getting rid of that implicitness and making every operation be precise about whether or not it can return an abrupt completion, which is to say, whether or not it can return a completion record or if it just returns a regular value. So that won't actually change that much about how to read the spec. But it will affect a few places and get rid of some of the implicitness which confuses people. Otherwise, it's mostly the same, except Shu has been working on a ‘can call user code’ annotation, which we'll get to on the next slide. SYG: Yeah, I can talk about this real quick. So it's not quite ready yet, but I wanted to take this opportunity to kind of call out that we have a preview build available.
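The completion records MF is reforming can be modeled roughly like this (an illustrative sketch of the concept only, not the editors' actual representation):

```javascript
// A completion record is essentially a Result type: either a normal value
// or an abrupt change of control flow (here, only the throw case).
const NormalCompletion = (value) => ({ type: "normal", value });
const ThrowCompletion = (value) => ({ type: "throw", value });

// Spec algorithms write `?` to mean "unwrap, or propagate if abrupt":
function ReturnIfAbrupt(completion) {
  if (completion.type !== "normal") throw completion.value;
  return completion.value;
}

// An operation that inherently cannot throw still had its return value
// implicitly wrapped in a completion record and implicitly unwrapped;
// the reform makes such operations return plain values instead.
console.log(ReturnIfAbrupt(NormalCompletion(42))); // 42
```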
So the other implementers in the room especially, if you could just browse around and give your feedback there, that would be greatly appreciated. The whole point is that there are these abstract operation calls and some of them can end up causing user code to be called by things like proxy traps or ToString. And there is a mode now that you can toggle and it will pop up this little `uc` annotation on those abstract operation calls that can call user code. The styling of that might change, but the screenshot shows what it kind of looks like right now, except without the oval outline. That's just to highlight what it looks like. If you want to take a look, it's at number 2548, and you scroll down to the bottom. There's this preview link - click on details and that will load the preview build. It's not quite ready yet because there's some false positives and there's some false negatives. And if you notice anything that is missing, or is needlessly conservative where it says it can call user code but in fact cannot - because it's, for example, calling ToString with something that, you know, cannot trigger a user-defined toString - please comment in the PR. Thanks. -KG: And then the last thing - we are planning to rename the default branch from master to main. This will happen on the last day of the meeting.
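The hidden user-code call sites SYG describes - proxy traps and ToString - show up in very ordinary-looking operations. A small illustration (the names here are made up for the example):

```javascript
const logged = [];

// An innocent-looking template string reaches a user-defined method:
// ToPrimitive/ToString end up invoking obj.toString().
const obj = {
  toString() {
    logged.push("toString ran");
    return "x";
  },
};
const s = `value: ${obj}`; // "value: x", and user code ran

// And a plain built-in call fires Proxy traps: Array.prototype.includes
// reads `length` and then the elements through the `get` trap.
const p = new Proxy([1, 2, 3], {
  get(target, key, receiver) {
    logged.push(`get ${String(key)}`);
    return Reflect.get(target, key, receiver);
  },
});
Array.prototype.includes.call(p, 2); // true; trapped reads along the way
```

This is exactly the kind of re-entrancy the `uc` annotation is meant to make visible when reading the spec.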
So, if you have any open PRs, they will automatically get updated, but for your local copy, you will need to manually tell it that the branch you are working with now is main. This follows Test262, which did this a long time ago. It didn't seem to break anything. Ecmarkup did this a while ago, it didn't break anything. So just as a heads up, that will be happening in a few days. And that's it. AKI: Thank you. There is one question on the queue from Mark Miller. @@ -94,7 +97,9 @@ MM: Thank you. AKI: Also, SDO stands for standards development organizations internationally. YSV: I just wanted to say basically what Mark said. The user code annotation is awesome. Thank you so much for doing that. I'm looking forward to reviewing it and seeing how much it improves my reading of the specification. + ## ECMA402 Editors' Update + Presenter: Ujjwal Sharma (USA) AKI: Ecma 402! @@ -120,6 +125,7 @@ AKI: The time box is tight, we're over. USA: So let's skip that, but the thing to look out for is Segmenter going for stage 4, so hopefully that'll happen. If you want to get involved please reach out to us, and thank you. ## ECMA 404 Editors' Update + Presenter: Chip Morningstar (CM) AKI: Next up, We have 404. @@ -127,7 +133,9 @@ AKI: Next up, We have 404. CM: JSON abides. AKI: Great, great. + ## ECMA Recognition Awards + Presenter: Yulia Startsev (YSV) - [slides](https://docs.google.com/presentation/d/1oAvS1kTzCC8YTZvX4z0QRSLrqQLfZPxrZl3FQ30iYi0/edit#slide=id.gc0406dac02_0_5) @@ -159,6 +167,7 @@ YSV: Okay, super, because I need someone to also help me figure out how we're go AKI: All right. I look forward to seeing what y'all come up with. ## TypedArray prototype methods and resize in the middle behavior + Presenter: Shu-yu Guo (SYG) - [issue](https://github.com/tc39/proposal-resizablearraybuffer/pull/75) @@ -166,9 +175,9 @@ Presenter: Shu-yu Guo (SYG) SYG: So this is a normative change that I am asking to make to the resizable buffers proposal. So I didn't make slides for this.
I think it's best to walk through this example to see where the corner case comes up. It is a corner case that we discovered during implementation. So to set the stage, there's a bunch of Array methods and TypedArray methods that basically behave like this: they read some length of the source array and then they do some operation by looping over the source array. These are methods like copyWithin, fill, slice, etc. So slice, for example, reads the length of the source array, and depending on how you want to slice that into a different array, basically, you know, copies the portion that you want to slice into a new array. There's nothing new there. That's just what those methods do currently. With resizable buffers, those methods still have a defined behavior, but there is a weird corner case that comes up, which I will try to explain. So I'll walk you all through this example here. So what this example is doing is: first we're creating a resizable buffer. That's all this does. And we're creating a Float64Array, so a double view, on that buffer. And then we just fill it with some initial values. This doesn't really matter. And then we're going to do something tricky. We're going to make a new TypedArray subclass whose sole purpose is to be very tricky when you do construction, by making the Symbol.species getter resize the source buffer. Is this clear so far? This subclass does nothing else except being tricky when you try to construct a subclass by resizing the buffer during the middle of some operation. If that is clear so far, moving on. What happens in this case is that - say we create a new subclass of this array. The intention here is that I create this subclass of this array. And then I call slice on it. And the idea, my intention for this example, is that during the call to slice is when the source array gets resized.
So what happens is that at the beginning of the call to slice, the source buffer is not yet resized, and then in the middle of the call to slice it is resized, because slice creates the subclass instance by looking up Symbol.species. So we now have a situation where the latter half of these methods, like fill and slice and copyWithin, is reading from a smaller buffer than what we thought the length of the buffer was at the beginning of the method. This is fine in that we have defined behavior for what this should do. You could do a similar thing with plain Arrays, without resizable buffers: if this were a regular Array and not a TypedArray, this line would be something like `sourceArray.length = 2`, or whatever you want to resize it to. Where the corner case comes up is that the current spec behavior is basically that we loop over the original length that we read. The original length in this case is 4, and then we are changing it to 2, so something like slice will still loop over four elements and copy those four elements, except for elements at index 2 and index 3 it will be reading `undefined`. That is fine for Arrays; you get `undefined`. For resizable array buffers on which we are reading floats, `undefined` coerces to NaN. So if we go with the existing behavior, for maximal consistency with the Array methods, the now out-of-bounds indices become NaN, which is arguably pretty weird. Whereas I am proposing in this PR, this is PR 75, that we just stop iterating in the TypedArray methods when the backing is a resizable buffer. So in this case you just don't write into the out-of-bounds areas; you don't do the out-of-bounds assignment. And in the newly created typed array made by slice, you don't assign to those elements, so they remain 0, and this might be more expected.
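[Editor's note: the coercion difference at the heart of the zeros-versus-NaN question can be seen directly with today's types. An illustrative sketch:]

```javascript
// Reading a missing index off a plain Array gives undefined...
const arr = [1, 2];
console.log(arr[5]); // undefined

// ...but storing that undefined into a Float64Array coerces it to NaN,
const doubles = new Float64Array(2);
doubles[0] = arr[5]; // ToNumber(undefined) is NaN
console.log(doubles[0]); // NaN

// whereas a freshly created Float64Array element that is never written stays 0.
console.log(doubles[1]); // 0
```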
In any case, this is pretty corner-case-y, because, you know, just don't resize your buffers in the middle of an operation. You can do that via hooks like this, but that's generally a code smell and really bad practice. So: still don't do that, but we do need to spec something, and arguably the existing behavior is super confusing, and it is more implementation complexity, in that you actually have to read out of bounds, knowing it's out of bounds, and then do this NaN conversion thing. So what I am asking consensus for here today is adopting the behavior where we do not read things that we know to be out of bounds, in the slice method, the copyWithin method, and the fill method, for source arrays that are backed by resizable array buffers. This might have been confusing. Are there questions?

-MM: So, how would you specify - how would the specification of the algorithms change so that they don't have this problem and would this change in how the algorithms are specified? observable in, in terms of what proxy traps they cause

+MM: So, how would you specify it? How would the specification of the algorithms change so that they don't have this problem, and would this change in how the algorithms are specified be observable in terms of what proxy traps they cause?

-SYG: I will double check on the, so I don't know if this is the most up to date version of the PR, but basically how the actual spec change had to spec changes is let find an example for the clearest. I don't think any of these very clear from just reading this example, but the basic idea is that at some point, you have to reread the length. Anyway, like the the methods already do And after you reread the length, you have to final length. and there will be some check like this where if you are, in fact out of bounds now just do nothing. And as for does it change the observability of proxy traps?
My intention is we would only do this for methods Methods acting on typed arrays, back my resizable buffers, which doesn't exist now, so I'm not sure if the question is this applicable? There is no, I'm not asking so

+SYG: I will double check; I don't know if this is the most up-to-date version of the PR, but basically, as for what the actual spec change is, let me find the clearest example. I don't think any of these are very clear from just reading, but the basic idea is that at some point you have to reread the length anyway, like the methods already do, and after you reread the length you have the final length, and there will be a check like this where, if you are in fact out of bounds now, you just do nothing. And as for whether it changes the observability of proxy traps: my intention is we would only do this for methods acting on typed arrays backed by resizable buffers, which don't exist now, so I'm not sure the question is applicable there.

KG: It definitely doesn't change any proxy traps, because these methods aren't ever going to call any proxy traps; they require their `this` to be an actual typed array, not a proxy for one or anything.

@@ -200,23 +209,26 @@ SYG: It's always undefined, which is the problem. If it were zero would be fine

WH: No, I'm asking, if you skip your assignment, what is the value that would have been overwritten but isn't? Is that always zero?

-SYG: Yes. Because it's always on a newly created typed array initialized to zero. So that the Crux of the changes is, Before change. you can see some NaNs, after the change all you see are zeros.

+SYG: Yes, because it's always on a newly created typed array initialized to zero. So the crux of the change is: before the change, you can see some NaNs; after the change, all you see are zeros.

WH: OK. Regarding the NaNs, I don't really care.

-SYG: I can Next up on the queue you in.
so,

+SYG: Next up on the queue, YSV, you're in.

YSV: I just wanted to give our review of this. We do think that zero makes a lot more sense than the NaN outcome, so we support the general direction of this proposal, of PR 75, although we haven't started our implementation of resizable array buffers, so we'll probably have more comments in the future.

-SYG: Sounds good.The queue is now empty. I am once again asking for consensus on pr 75 Behavior, which is Zero's not NaN's caused by reading known out of bounds indices in a source array.

+SYG: Sounds good. The queue is now empty. I am once again asking for consensus on the PR 75 behavior, which is zeros, not NaNs, caused by reading known out-of-bounds indices in a source array.
+
+MM: I like this.
-MM: I like this.
+AKI: We have consensus.
-AKI: We have consensus.

### Conclusion/Resolution
+
Consensus for PR 75.

## Intl.Segmenter for Stage 4
+
Presenter: Richard Gibson (RGN)

- [proposal](https://github.com/tc39/proposal-intl-segmenter)

@@ -241,14 +253,16 @@ AKI: That sounds like consensus to me.

RGN: I'll take it. Thanks everyone. And strong support from Ujjwal, who got on the queue just in time.

### Conclusion/Resolution
-* Stage 4
+
+- Stage 4

## Taking over maintainership of structured clone
+
Presenter: Shu-yu Guo (SYG)

- [slides](https://docs.google.com/presentation/d/14PNcWgkd3Ik61b0Fv9qFISfjUfGz4ZThCkyC-XTTC_8)

-SYG: So this should be a fairly short thing that really lot of technical content, but you all know structure clone. It is this thing, is this algorithm that's defined by these paired - technically I guess they're not abstract operations because they're defined in HTML, but conceptually abstract operations called structure serialize & structured deserialize. And it's defined in HTML for the purposes of cloning values, including JS values. So this is defined in HTML but it's not just for HTML, it's for web APIs as well.
Also for JS, things like objects and maps and sets, its most notably used you interact with this algorithm when you transfer stuff or clone stuff, cross workers with post message, including node. And it's also now directly usable on the web with the structured clone function on globals. so, the problem is that this has a maintenance burden and we've actually seen this play out a few times recently where we added a new thing like AggregateError and then we forget to do the layering part where we need to extend the structured cone algorithm on the HTML side to support the new thing we added, and it's not like an implementation issue, people usually catch this and then the implementers who actually implemented it, but we were leaving the specs in a bad state because there's some It's not that complete, inspection until someone notices and says, hey shouldn't we add this to structure clone? And then someone makes a PR so on and so on. So maintenance burden is the main issue. The proposal here, let's pull structure clone algorithm into ECMA-262. Will Define this pair in 262 with a new name. I suppose, the hosts like HTML will use these algorithms for the court JS values. values. They will continue to define whatever steps that need for the, you know, HTML values that are not 262 stuff. We should own those algorithms, The one point of technical discussion here. Is that HTML already has spec'd - not just spec'd, it's shipping everywhere - it's incompatible to change the types of errors that are thrown when the algorithm throws an error currently. They throw their own exceptions. So, we're not going to, we're not going to pull DOMExceptions in 262. Nor are we proposing to change the kind of Errors being thrown here. This is purely a layering / editorial change I am proposing. And to that when we pull these, these core algorithms for the court JS values in 2262. 
The idea is to make the errors host to find to be as to what kind of errors are thrown will just say something like throwing error or sorry. Go through a whole to find error. And just from a layering perspective, these are algorithms for core language values. TC39 feels like the right layer. And yeah, like I said, the proposal is strictly editorial about layering. There's really no political fight here, the HTML editors themselves approached us to do this. They feel that maintenance burden as well because they have incorrect or incomplete, spec until somebody notices. It's also not an invitation to take this as an opportunity to make any normative changes for how structure clone works. How structure clone works? Is depended upon on the web platform and on node, and this is I don't think it's really open to change. You can make proposals later if you feel really strongly about it, but this is not that proposal. This is just editorial. Anything in the queue before I ask for consensus?

+SYG: So this should be a fairly short thing without a lot of technical content, but you all know structured clone. It's an algorithm that's defined by a pair of - technically I guess they're not abstract operations, because they're defined in HTML, but conceptually abstract operations - called StructuredSerialize and StructuredDeserialize. It's defined in HTML for the purposes of cloning values, including JS values. So this is defined in HTML, but it's not just for HTML; it's for web APIs as well, and also for JS things like objects and Maps and Sets. You most notably interact with this algorithm when you transfer or clone stuff across workers with postMessage, including in Node. And it's also now directly usable on the web with the structuredClone function on globals.
So, the problem is that this has a maintenance burden, and we've actually seen this play out a few times recently, where we add a new thing like AggregateError and then forget to do the layering part, where we need to extend the structured clone algorithm on the HTML side to support the new thing we added. It's not really an implementation issue; people usually catch this, and the implementers implement it, but we were leaving the specs in a bad state, because there's an incomplete spec until someone notices and says, hey, shouldn't we add this to structured clone? And then someone makes a PR, and so on. So maintenance burden is the main issue. The proposal here: let's pull the structured clone algorithm into ECMA-262. We'll define this pair in 262, with a new name, I suppose. Hosts like HTML will use these algorithms for the core JS values; they will continue to define whatever steps they need for the HTML values that are not 262 stuff. We should own those algorithms. The one point of technical discussion here is that HTML has already spec'd - not just spec'd, it's shipping everywhere, and it's incompatible to change - the types of errors that are thrown when the algorithm throws an error. Currently they throw their own exceptions. So we're not going to pull DOMExceptions into 262, nor are we proposing to change the kinds of errors being thrown here. This is purely a layering / editorial change I am proposing. And to that end, when we pull these core algorithms for the core JS values into 262, the idea is to make the errors host-defined: as to what kind of errors are thrown, we'll just say something like "throw a host-defined error". And just from a layering perspective, these are algorithms for core language values; TC39 feels like the right layer. And yeah, like I said, the proposal is strictly editorial, about layering.
There's really no political fight here; the HTML editors themselves approached us to do this. They feel that maintenance burden as well, because they have an incorrect or incomplete spec until somebody notices. It's also not an invitation to take this as an opportunity to make any normative changes to how structured clone works. How structured clone works is depended upon on the web platform and in Node, and I don't think it's really open to change. You can make proposals later if you feel really strongly about it, but this is not that proposal. This is just editorial. Anything in the queue before I ask for consensus?

MM: So first of all, I support this. I think that having this under TC39 is definitely the right thing. I also understand this is exploratory, so we don't have to settle the issue that I'm about to bring up - as Waldemar just said, consensus on what? You're not asking for a stage yet. But the issue I want to bring up is that we could bring it into TC39, so that we're doing the maintenance on it, and the co-maintenance on it with Ecma 262, without bringing it into Ecma 262 and without necessarily making it part of the language. It could, for example, have the kind of status that internationalization has. I'm not saying that we should; I'm just pointing out that we might. And I certainly have not thought about or examined its suitability for being part of the language.

@@ -274,7 +288,7 @@ MM: it's a facto standard for multiple hosts, it's not necessarily a de facto JS

SYG: Okay. Next up is WH.

-WH: My question is, you asked for consensus, but you didn't say what exactly you're asking for consensus on.

+WH: My question is, you asked for consensus, but you didn't say what exactly you're asking for consensus on.

SYG: Here's the concrete proposal: that we pull the spec text in HTML that defines what the serialization and deserialization ought to do for values defined in 262 into 262.
To make this pair of abstract operations in the spec, refactor the HTML spec to call these new abstract operations for the core JS values, and make the errors host-defined when we pull the spec text into 262, because currently the HTML spec text directly says something like "throws a DOMException", and we can't have that. So the only change when we refactor the HTML spec steps into 262 is to make the errors host-defined. That is the concrete proposal for what I'm arguing to be an editorial change. I'm asking for consensus to bring that into Ecma 262, the document.

@@ -308,7 +322,7 @@ LEO: Do you foresee being these abstractions being reused in ecmascript as like

SYG: No, it's not part of this proposal. And purely for me: I don't personally have any plans to introduce proposals in the marshalling space.

-LEO: So, the other question is, is this abstraction going to be fully reused by node. Is that like there is a use case in know, but it is possible for node to just reuse this abstraction?

+LEO: So, the other question is: is this abstraction going to be fully reused by Node? Is it possible for Node to just reuse this abstraction?

SYG: Node does not have a spec; this is a spec thing. They have an implementation, which I have in all of these (?) cells. And if they keep with that implementation, they will directly use this algorithm.

@@ -330,11 +344,11 @@ SYG: If the queue is empty, MM it sounds like we don't have consensus then becau

MM: I believe we have consensus on somehow bringing it under TC39. But you're correct: until I look at the algorithms and think about it in the context of the language, we don't have consensus yet on how to bring it under TC39. And as I said, my preference is the same as your preference; I just need to look before I can agree to that.

-SYG: Okay. So we have consensus on the direction. We don't have consensus on the concrete next steps I'm proposing.
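[Editor's note: for context, the algorithm under discussion is the one behind the `structuredClone` global SYG mentioned, which deep-copies core JS values like Map and Set and throws on uncloneable values such as functions. A minimal sketch, not from the meeting:]

```javascript
// structuredClone deep-copies core JS values like Map/Set...
const original = new Map([["key", new Set([1, 2])]]);
const copy = structuredClone(original);

copy.get("key").add(3);
console.log(original.get("key").size); // 2 - the clone is fully independent

// ...and rejects uncloneable values like functions. In HTML (and Node)
// the thrown error is a DOMException named "DataCloneError", which is
// exactly the host-defined-error point discussed above.
try {
  structuredClone(() => {});
} catch (err) {
  console.log(err.name); // "DataCloneError"
}
```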
And can I count on you, Mark, for a timely review of the algorithms?

+SYG: Okay. So we have consensus on the direction; we don't have consensus on the concrete next steps I'm proposing. And can I count on you, Mark, for a timely review of the algorithms?

-MM: Yes. Well, how big are they? I just have not looked at all at the HTML spec with regard to this. How much reviewing is this?

+MM: Yes. Well, how big are they? I just have not looked at all at the HTML spec with regard to this. How much reviewing is this?

-SYG: I am loading it.

+SYG: I am loading it.

MM: Just a loose ballpark.

@@ -345,10 +359,11 @@ MM: You can count on me reviewing this in a timely manner.

SYG: Okay, I'll coordinate with you offline. I guess we'll open an issue and then we can coordinate from there.

### Conclusion/Resolution
-consensus for TC39 adopting it but not necessarily putting it in 262
-MM to review the algorithm
-discussion to continue in https://github.com/tc39/ecma262/issues/2555
+
+consensus for TC39 adopting it but not necessarily putting it in 262; MM to review the algorithm; discussion to continue in https://github.com/tc39/ecma262/issues/2555
+
## Clarify validity of negative expanded year 0
+
Presenter: Jordan Harband (JHD)

- [pr](https://github.com/tc39/ecma262/pull/2550)

@@ -359,7 +374,7 @@ SYG: Sure, my preference here is to just leave it, given that I thought the gene

JHD: Just to clarify, this PR was prompted by some discussion in the Temporal proposal, where ABL asked whether negative zero is a valid extended year, pointing out the difference in web reality. So you're correct that we don't necessarily need to change anything for Date to resolve it in Temporal, but we do need to make a decision.

-SYG: I see the potential normative implications for Temporal as well. I missed them.
In that case, I'll cast my vote as 'disallowed, but not really informed, and seems nice.'

+SYG: I see the potential normative implications for Temporal as well. I missed them. In that case, I'll cast my vote as 'disallowed, but not really informed, and seems nice.'

YSV: I can mention what our thoughts are. It actually kind of agrees with what SYG said, but in a slightly different direction. We're not sure it makes sense to fix this for Date.parse(), because it's more of a Band-Aid for a number of the issues that Date.parse() has; we want to see something a bit more holistic. In addition, it seems like this number, if you divide it by the tropical year length, you will actually get 1970, so that is the positive-number story. So it is actually kind of a sensible number that you're ending up with there; it's just negative. We're not exactly opposed to changing our implementation, but we don't really see any specific issue with what it does now in our current implementation.

@@ -393,7 +408,7 @@ JHD: Sure. Is there anyone from Moddable or or JSC on the call?

MS: This is Michael from Apple. I don't know; part of me is like, yeah, we could do the work. It seems like such a small area, given all the other incompatibilities already discussed. I'm kind of ambivalent.

-BT: I think we're out of time on this item. JHD, do you feel like you have clarity on where to go here? I think we have consensus on Temporal.

+BT: I think we're out of time on this item. JHD, do you feel like you have clarity on where to go here? I think we have consensus on Temporal.

JHD: It sounds like we could clarify the prose in the spec to disallow it in Date.parse(), but that maybe we should stop short of strengthening the test262 tests there. But if there's hesitation on that Date.parse() clarification, we can discuss it on github.

@@ -402,10 +417,11 @@ MM: I like the idea of just clarifying the text so that it's clear what the norm

JHD: Alright, then I think we're good.
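[Editor's note: an illustrative sketch of the expanded-year format under discussion. The contested form is a negative zero year; per the clarified spec it should be rejected, but, as noted above, web reality has varied, so it is deliberately not asserted here.]

```javascript
// Six-digit signed ("expanded") years are valid in Date-time strings:
const yearZero = Date.parse("+000000-01-01T00:00:00Z"); // year 0, i.e. 1 BCE
console.log(Number.isNaN(yearZero)); // false

const yearMinusOne = Date.parse("-000001-01-01T00:00:00Z"); // year -1
console.log(Number.isNaN(yearMinusOne)); // false

// The contested case: "-000000" as a year. Per the clarified spec this is
// disallowed (NaN), but engines have historically differed.
const negZero = Date.parse("-000000-01-01T00:00:00Z");
```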
### Conclusion/Resolution
-Temporal should not accept negative year zero
-Further discussion to happen on github PR
+
+Temporal should not accept negative year zero; further discussion to happen on the github PR

## Partial function application for Stage 2
+
Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-partial-application)

@@ -417,51 +433,51 @@ RBN: So the proposal we've been discussing this several times before, one of the

RBN: As I mentioned in the previous slide, partial application is designed to be fixed-arity by default. This differs from bind, which allows excess argument passing. But one of the things we've been considering for some time is the introduction of a bare ellipsis, which would allow you to indicate a single place where any remaining arguments will be spread into the call, and I'll have more on that shortly. One final thing that we've been discussing, and wanted to advance as part of this proposal in the long term, was the ability to introduce ordinal placeholders to control argument order, allowing you to swap arguments, to introduce new arguments, and to repeat arguments within a partial application.

-RBN: I'll provide some examples here that we can look at. So, when I talk about binding arguments in any position this example shows the ability to apply from the right rather than the left. In addition we can show that we can preserve the receiver. So in the example, we show that calling or a partially applying say hello to Bob, will create a function that when called actually maintains this receiver. Another value is the arity is fixed. So excess arguments don't get passed. So you look at the example here, if call are sent via array map. It'll pass not only the elements, but also the index And the array itself since the index is a numerical value percent will interpret that.
And as rules result will end up with Nonsensical values or in this case, NaN, for the result. Whereas being able to partially apply from the right and having fixed are arity means that we no longer need to worry about the excess arguments, the the password through via the bear. Ellipses allows us to explicitly opt into function.prototype.bind style, excess argument passing, which allows you to do things like, Supply, leading arguments and the ability to reorder arguments or duplicate them. Some of the recent changes we've made to the proposal was the introduction of a prefix token to the arguments, one of the early concerns with partial application was the The Garden Path problem that an invocation at the start of the invocation that might have any number of arguments spreading across across multiple lines that you might not be able to know that, it's a partial call until some point you reach the placeholder that would have indicated the partial call. By introducing a prefix to the expression we now have the ability to indicate early that this is a partial application. and it also gives us a couple of additional capabilities for one. It makes it very explicit. What we are partially applying that it that the Expressions were applying are in the arguments position. It's not any arbitrary expression which matters for the eager evaluation semantics that I discussed before. In addition, it allows you to create partially applied calls that have no place holders. They have a fixed set of arguments previously, you would have had to have had at least one place holder to make the call partial. We also introduced ordinal placeholders, which again, allow you to reorder arguments to swap them swap positions to duplicate an argument, for example, the rest argument placeholder, which provides the finer control over how excess argument are will work. Another change is that we introduced was reintroduced support for new using some slightly more reliable. 
Semantics, We previously dropped support for it after some early discussion. And there's been some discussion on the repo as whether or not this is still about valuable something that I wanted to bring up and consider. So I do see a clarifying question from Surma. I wanted to point out that this proposal does not handle that. There's a discussion about a proposal from JS Choi that around binding `this` that's intended to have a mechanism to solve that and they actually these two proposals can work together in that fashion to allow you to take a free function. Bind it to an argument and partially apply certain positions, but that's something that think J.S.Choi will describe more in his proposal. One of the other things that we did was we removed, the temporal support. The last time you asked us this, I believe Mark Miller pointed out that it seemed a little bit too confusing that the syntax as so we decided to You remove that for the time being, if that's something, we decided reintroduce, it will most likely be in a separate proposal than this one. So, I wanted to go back and discuss the recent changes around the prefix token. Rip F style pipe lines. We would have preferred to not have a prefix. This would have made the syntax more concise and much more similar to what we're seeing now with the hex tile pipes, with the heck placeholder, now that they would have been essentially eagerly evaluated and even though the design of F sharp was that the right hand side and would have been a function that gets called with the left side argument, the fact that this is essentially a almost like inline. Evaluate the function expression that it gets, even though it's bound with those arguments. that then is called immediately. So it would have essentially seemed like the hack style type approach in certain cases. Now that the proposal for pipeline has advanced to Stage 2 with hack style, having a prefix is no longer a hardwire mints. 
Now one of the reasons we decided to put the prefix between Colleen arguments is to remove ambiguity when this has been has been investigated a number of different proposals. Both the smart mix and hack style pipeline proposals have considered this, with a partial expression syntax. This is basically a prefix token. The one that we were using at the time was plus greater than which basically marked an expression as being partially applied, that you could use the then hack style. Topic token to indicate argument an argument that would then be pulled out into an arrow. However, this prefix before the callee this prefix that would occur before. The callee doesn't exactly work. Well with eager evaluation semantics. So you can see in the example if you have a prefix token of some kind and then you call do dot and then from that called G with a token which part of this call was partial. Is it after G. Now the smart mix of hex tile were designed using lazy evaluation, which essentially is not much. Then an arrow function. So in specially, when the example here, the main difference is essentially one character plus a space. There's not much of a difference here. It can be a bit confusing from this perspective, looking at an expression that might be partially applied in this way and not realize that if this is lazily. Evaluated that every time you call the function increments, I just like you would with an arrow function. so prefix for a callee, doesn't really work well with eager semantics because the fact we can't really know which function we’re binding. The syntax is ambiguous. So we introduced this prefix token before the argument list so that we can make it very clear that what's being partially, apply as the argument list itself. so, the semantics are that it's similar to function.prototype.bind and it avoids side effects that occur as a result of re-evaluation something that you can't do with arrows and less, you essentially pull. All of the values. 
We write the arrow in such a way that it cannot have side effects, or you'll with other side effects, in the system that could mutate, the closure scope, or you have to pull out anything that has mutations in two local variables before you create the arrow function, which again creates Closure. One of the other advantages of eager evaluation is that we avoid refactoring hazards. If I had a result that was the result of calling a function that contains side effects in the argument list, and I wanted to pull that into a partial application. So I could use it multiple times with the same set of values. Eager evaluation allows us to ensure that each time. We call G. In this case that we're not incrementing. you've done the initial part of the evaluation, all of the Since become evaluated. And what we end up with is a bounded function that we can just call with the remaining argument and as a result, having the prefix for the arguments can remove this ambiguities and the previous slide saw this partially applied expression of o..f is a method called and then a method call of G. Not sure which is being partially applied. Whereas having the token adjacent to the callee and is part of the argument list, makes it very clear that it's the G method here that's partial. And another value of the prefix between callee is it's not much different than what we're already seeing with additional call-like syntax to proceeding for optional call or tag temporal Expressions. It's essentially just a new call like syntax. When the other recent changes I mentioned before was that we have introduced ordinal placeholders, which allows you to reorder arguments allows you to reuse the same parameter multiple positions. This can be very useful for adapting foreign APIs. 
So you have two packages that are loosely related that you want to call but they might take arguments in a different order, you have the ability do that type of adaptation, plus the ability to deal with if I need to refactor and introduce a copy of an of an argument to a position that you can just use the argument reference. And again, mention the rest arguments placeholder is another change that we made recently. So the fixed arity by default to be very clear. Shows the example, Parsons where excess values can be passed in Bear. Ellipses allows to have more of a bind like approach to specify that. I spreading in all of the remaining arguments in this position as a result. One of the things we want to avoid and it's we've discussed several times. among the pipeline's chanting route as well, was that we don't want to end up with this arbitrarily strange syntax of: I want to take a parameter in this position and spread the elements in or take the remaining arguments. You create an array here or all those things are essentially too complicated for this proposal. There things that if you need it, you can pull out to an arrow in the meantime, we would just say we only have the bear ellipses. Just is the indication of whether or not we are opting in or opting out of the excess argument passing.

+RBN: I'll provide some examples here that we can look at. So, when I talk about binding arguments in any position, this example shows the ability to apply from the right rather than the left. In addition, we can show that we can preserve the receiver: in the example, partially applying sayHello to Bob creates a function that, when called, actually maintains its receiver. Another value is that the arity is fixed, so excess arguments don't get passed. If you look at the example here, if parseInt is passed to Array#map, it'll be passed not only the elements but also the index and the array itself, and since the index is a numerical value, parseInt will interpret that.
And then determine that you're actually trying to parse for a different radix. And as rules result will end up with Nonsensical values or in this case, NaN, for the result. Whereas being able to partially apply from the right and having fixed are arity means that we no longer need to worry about the excess arguments, the the password through via the bear. Ellipses allows us to explicitly opt into function.prototype.bind style, excess argument passing, which allows you to do things like, Supply, leading arguments and the ability to reorder arguments or duplicate them. Some of the recent changes we've made to the proposal was the introduction of a prefix token to the arguments, one of the early concerns with partial application was the The Garden Path problem that an invocation at the start of the invocation that might have any number of arguments spreading across across multiple lines that you might not be able to know that, it's a partial call until some point you reach the placeholder that would have indicated the partial call. By introducing a prefix to the expression we now have the ability to indicate early that this is a partial application. and it also gives us a couple of additional capabilities for one. It makes it very explicit. What we are partially applying that it that the Expressions were applying are in the arguments position. It's not any arbitrary expression which matters for the eager evaluation semantics that I discussed before. In addition, it allows you to create partially applied calls that have no place holders. They have a fixed set of arguments previously, you would have had to have had at least one place holder to make the call partial. We also introduced ordinal placeholders, which again, allow you to reorder arguments to swap them swap positions to duplicate an argument, for example, the rest argument placeholder, which provides the finer control over how excess argument are will work. 
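The `parseInt` hazard RBN describes is reproducible in today's JavaScript with no new syntax; a minimal sketch of the problem, and of the fixed-arity workaround an arrow provides:

```javascript
// Array.prototype.map invokes its callback with (element, index, array),
// while parseInt's signature is (string, radix): the index becomes a radix.
const broken = ['10', '10', '10'].map(parseInt);
console.log(broken); // [10, NaN, 2]
//   index 0 -> radix 0 -> defaults to base 10 -> 10
//   index 1 -> radix 1 -> invalid radix       -> NaN
//   index 2 -> radix 2 -> "10" in binary      -> 2

// Today's workaround is an arrow that forwards exactly one argument,
// the same fixed-arity behavior a partial call would provide eagerly:
const fixed = ['10', '10', '10'].map(s => parseInt(s, 10));
console.log(fixed); // [10, 10, 10]
```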
Another change is that we reintroduced support for `new`, with slightly more reliable semantics. We previously dropped support for it after some early discussion, and there's been some discussion on the repo as to whether this is still valuable, so it's something I wanted to bring up and consider. I do see a clarifying question from Surma; I want to point out that this proposal does not handle that. There's a proposal from JSC around binding `this` that's intended to provide a mechanism to solve it, and these two proposals can actually work together in that fashion, allowing you to take a free function, bind it to a receiver, and partially apply certain positions; but that's something JSC will describe more in his proposal. One of the other things we did was remove the tagged template support. The last time we presented this, I believe Mark Miller pointed out that the syntax seemed a little too confusing, so we decided to remove it for the time being; if we decide to reintroduce it, that will most likely be in a separate proposal.

So I wanted to go back and discuss the recent changes around the prefix token. With F#-style pipelines, we would have preferred not to have a prefix; that would have made the syntax more concise and much more similar to what we're seeing now with Hack-style pipes and the Hack placeholder, in that these would have been essentially eagerly evaluated. Even though the design of F# style was that the right-hand side would be a function that gets called with the left-hand argument, this is essentially an inline function expression that, even though it's bound with those arguments, is then called immediately, so it would have looked like the Hack-style approach in certain cases.

Now that the pipeline proposal has advanced to stage 2 with Hack style, having a prefix is no longer a hard requirement. One of the reasons we decided to put the prefix between the callee and the arguments is to remove ambiguity. This has been investigated in a number of different proposals: both the smart-mix and Hack-style pipeline proposals considered a partial expression syntax, basically a prefix token (the one in use at the time was `+>`) which marked an expression as being partially applied, so that you could use the Hack-style topic token to indicate an argument that would then be pulled out into an arrow. However, a prefix that occurs before the callee doesn't work well with eager evaluation semantics. You can see in the example: if you have a prefix token of some kind and then call `o.f`, and from that result call `g` with a topic token, which part of this call is partial? Is it `f`, or is it `g`? The smart-mix and Hack-style designs used lazy evaluation, which is essentially not much different than an arrow function; in the example here, the main difference is one character plus a space. It can be a bit confusing, looking at an expression that might be partially applied in this way, to not realize that if it's lazily evaluated, every call to the function increments `i`, just as it would with an arrow function. So a prefix before the callee doesn't really work well with eager semantics, because we can't know which function we're binding; the syntax is ambiguous. So we introduced the prefix token before the argument list, to make it very clear that what's being partially applied is the argument list itself.

RBN: I did notice just now a clarifying question from SUR about remaining arguments. I can address that as well.
When I'm saying "remaining arguments", I'm talking about any arguments that are not applied: actual values, not placeholders. Every placeholder you introduce essentially creates a parameter binding for that argument, and any non-placeholder excess arguments you pass in then get mapped to wherever that element goes. I'm not sure if that's clear enough; SUR can reply if he needs.

SUR: Yeah, that's my question. Thank you.

RBN: One of the semantic requirements is that we only allow a single rest argument placeholder within any partial application, so you can't spread it multiple times; it only occurs once. Again, it's essentially an opt-in to how excess argument passing works. This provides a convenient syntactic shorthand for `Function.prototype.bind`, without having to call the `bind` method in the event that it has been patched by another API. So you can write `f` with a partial argument list containing just the bare ellipsis, which is essentially the same as calling `f.bind(null)`, or `o.f` with a partial application that has the rest placeholder, which should be the same as `o.f.bind(o)`. It also allows you to specify additional arguments after where the placeholder goes. That's a less valuable feature, but it falls out of how the placeholder processing works; it allows you, for example, to pick an argument from the beginning of the argument list and move it to the end. Another recent change I want to bring up is that we reintroduced support for `new`. Again, we're trying to preserve the capabilities you have with `Function.prototype.bind`, so the same kind of semantics work that you would have seen previously with `bind`. If you have a class `C` and you call `C.bind`, passing in `null` (since the receiver won't matter) and two arguments, then creating a new instance from the result creates an object that is an instance of the bound function `D` and is also an instance of `C`, by definition of how bound function exotic objects work. With partial application you can do the same thing: you can invoke `C` as if it were a call, and this creates the bound function object; it doesn't actually evaluate the call. Then, if you call `new` on that result, it behaves the same as it does today with `bind`: it creates a new instance. However, because we get to choose the semantics, we can introduce the capability to put the `new` keyword inside the expression, and as a result receive a function that, when invoked, creates the new instance. You can still use `new` with that result; the semantics there don't matter as much, because it's essentially the same as a function that returns a new instance. We've already seen examples of this today with legacy ES5-style classes: a function that tests whether it's been constructed and then creates a new instance, essentially `new`-ing inside the function, or `this` replacement by returning an object from a constructor. All of these are cases where, when you return something new from inside, an outer `new` won't matter quite as much. And the last thing before I go to the queue is the Hack-style pipeline change. Previously we were strongly tied to F#-style pipes, and as a result we needed to consider things like how we would handle `yield` and `await`, placeholders for callees, etc. The move to Hack-style pipes removes a lot of these concerns; we no longer have to worry about those kinds of positions for partial application. One of the other values is that it makes the difference between the topic and the placeholder very clear visually, and it still provides some interesting use cases. In an example of a Hack-style pipeline that maps an array over a partially applied function, you can see the difference between the tokens in use. And that brings me to the status of where we're at right now: we have the explainers up to date, and the full specification text for the proposal is available. I'll go to the queue before asking whether stage 2 is something we want to consider.
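The `bind`-with-`new` behavior RBN wants to preserve can be checked against `Function.prototype.bind` as it exists today (the class `C` here is a stand-in for the one on the slides):

```javascript
class C {
  constructor(a, b) { this.a = a; this.b = b; }
}

// A bound function is still constructible: `new` ignores the bound
// receiver (null here) but does use the bound arguments.
const D = C.bind(null, 1, 2);
const d = new D();

console.log(d instanceof D); // true: instanceof unwraps to the bound target
console.log(d instanceof C); // true
console.log(d.a, d.b);       // 1 2
```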
Sorry, Ron, did you want to go to the queue now? Yes. All right. First topic is from legendecas. You're a little quiet for me, but I can still hear you.

CZW: What I can see in the example is that the new syntax preserves the receiver, which makes it behave exactly like `someFunc.bind(myObj)`. Does that mean it can be considered a syntactic replacement for function `bind`? And doesn't it take up the design space for new syntax such as a bind operator, since that is a major use case for the bind operator?

RBN: Yeah, I think I mentioned earlier that JSC has a proposal for bind-this, which we were discussing a bit offline before TC39. These two proposals don't share every bit of overlap in how function binding works, but they actually complement each other. In this proposal you would write `o.f` and the partial call, which would essentially bind the receiver `o` in that position. If you wanted to use it with a free function, you could use bind-this, which could take an object (I'll let JSC explain this more in their proposal), and then use a partial call with that, and be able to bind both the receiver of the function and any specific explicit arguments you wanted to pass. So there is a use case for a scenario where we could have full syntactic support for what's currently achieved with `bind`, and I think that will come as we discuss things like the bind operator. I believe that answered the question. I can go to Sarah.

SHO: Hi everybody. I'm Sarah Groff Hennigh-Palermo, a new delegate from Igalia, so if I do something really stupid, please forgive me. Anyway, I'm somewhat concerned that changing argument orders changes a really fundamental JavaScript agreement, right? Argument order is parameter order, and that is a known. There are no named arguments; there's no typed multi-method overloading. That argument order and parameter order are the same is a very fundamental agreement, and as a JavaScript practitioner, which is what I was doing before joining Igalia, I haven't really seen a lot of cases where people are taking shortcuts around putting arguments in different orders. I'm just not certain this is motivated enough to be such a radical change, so I would be interested in seeing more examples and more evidence, before moving forward, that this really is a needed radical change. In this case I'm talking more specifically about the ordinal placeholders, being able to reorder arguments, but also about partial application itself: if you use `bind`, you still maintain that link between argument order and parameter order, but if you use this, then you can partially bind things out of order, and that in and of itself strikes me as a very radical change that I'm not sure is fully motivated.

RBN: There are two things to this. By default, partial application as specified is fixed arity, which means the arguments are in a set order. If you're not using ordinal placeholders or the rest argument placeholder, and are just using the question mark in a partial call, the arguments are still bound left to right. The difference is that the arguments you've supplied are essentially bound with values; it just allows you, in some cases, to skip arguments that you want to bind later, or to place arguments that you currently can't bind unless you completely rewrite the function or turn it into an arrow that passes arguments along. And again, the downside of the arrow is that all of that is lazily evaluated: you're having to create a closure, and if any of the things you're closing over are mutable, there's a possibility they could change out from underneath you. Part of the value of the partial application proposal is to do these things eagerly, so that the evaluation happens only once, early, for anything in the argument list, for the callee, for the receiver, etc. As for the cases of argument reordering, where you can use an ordinal placeholder: these are very specific and niche use cases, designed around taking an existing function, whether from another library or something I've written, and applying it in a different order, passing the same value multiple times, or just skipping certain arguments, say, when I don't care about the first three and only want to pass the last one. It saves you from having to create an arrow that passes `undefined, undefined, undefined` and then, finally, the arrow's argument at the end. It's designed around shortcuts, and around giving you a capability whose absence we would run into later. Both the smart-mix proposal and the Hack-style proposal, when they were looking at partial expressions, had discussions around investigating topic tokens like `^0` or `^1` to do the same type of thing. These have been investigated from different perspectives and different sources, and all arrived at essentially the same conclusion: there are certain niche cases where, if you don't provide the ability to reorder arguments, people can't use the feature for their use case and have to fall back to an arrow function. And there's nothing wrong with arrows per se; it's just that arrow functions are lazily evaluated, so there are certain cases where they're harder to use correctly than a simple, fixed syntax.

SHO: That all makes a lot of sense within the context of a pipeline. The reordering, such as with the topic character in the Hack pipelines, concerns me less because it's in a very limited location: the reordering is a limited process in a limited scope, and it's explicit where it's happening. Here, without seeing a lot of examples where users are running into cases where they need to reorder their arguments, it just seems to me like a big enough change that I don't think I've seen enough evidence for it. In my personal work I didn't see the evidence, and I don't know that I've seen evidence out there that this is a problem so big that it is worth radical reordering. Yeah. Thanks.

RBN: All right, next up.
We'll go to I've got more take, I'm going to promote the various other topics on this proposal versus Arrow functions. But go ahead, Mark. -MM: Yeah, so I want to echo and amplify some of the concerns that have already been raised. JavaScript is already a huge language, the JavaScript language is one, which we should think of it as has already exceeded its syntax budget. The unique role of JavaScript in the world is that it accommodates people of a great range of expertise for many people. Many non-programmers learn JavaScript in order to do something, on a web page, many people by that route learn JavaScript is their first programming language in gradually become more, professional programmers. Everything that we do, like this makes JavaScript harder to learn and the, answer will just learn the subset you're comfortable with doesn't work, when you're you're reading other people's code. Every time we add syntax, we make the learning burden of before you can understand other people's code, much higher. There's a variety of proposals. I want to raise this, not just for this proposal but as a theme for the entire session. There is lots of proposals in this session. but this one stands out a specialty where it seems like, like the problem of I have to use seven characters instead of instead of four characters to express something is treated as more urgent than the fact that we've already got too much Syntax for people to learn so I don't see that this solves a problem. Problem that needs to be solved. I think that the bar for adding new syntax of the language should be very high. I think that pipes with hack style, did meet that bar. So I'm not always against adding new syntax. I was very skeptical on that. And with the series of examples, they specifically with the hack style, that convinced me that it does meet that bar. Nothing here convinces me that it comes anywhere close to meeting that bar. I would I would find that. 
I would find it very unlikely that there's any modification to this proposal that would lead me to agree to let it go to stage two. 

+MM: Yeah, so I want to echo and amplify some of the concerns that have already been raised. JavaScript is already a huge language, one which we should think of as having already exceeded its syntax budget. The unique role of JavaScript in the world is that it accommodates people of a great range of expertise. Many non-programmers learn JavaScript in order to do something on a web page; many people by that route learn JavaScript as their first programming language and gradually become more professional programmers. Everything that we do like this makes JavaScript harder to learn, and the answer "just learn the subset you're comfortable with" doesn't work when you're reading other people's code. Every time we add syntax, we make the learning burden, before you can understand other people's code, much higher. I want to raise this not just for this proposal but as a theme for the entire session. There are lots of proposals in this session, but this one stands out especially, where it seems like the problem of "I have to use seven characters instead of four characters to express something" is treated as more urgent than the fact that we've already got too much syntax for people to learn. So I don't see that this solves a problem that needs to be solved. I think that the bar for adding new syntax to the language should be very high. I think that pipes with Hack style did meet that bar, so I'm not always against adding new syntax. I was very skeptical on that, and it was the series of examples, specifically with the Hack style, that convinced me that it does meet that bar. Nothing here convinces me that it comes anywhere close to meeting that bar.
I would find it very unlikely that there's any modification to this proposal that would lead me to agree to let it go to stage two. -RBN: Well, that's the case. I find that unfortunate. The there's one of the goals for this proposal to provide a mechanism to give you some the ability to do some syntactic capabilities that you can't do with bind today and to deal with the fact that Arrow functions don't allow eager evaluation that there's a lot of complexity with eager evaluation semantics trying to get something that's eagerly evaluated to be something like what you can do with an arrow function. To you have to, again, pull out constants for any state mutations. You have to ensure that your the things you're closing over aren't mutable for a very simple cases. That's generally fine, for more complex cases, as you're building more complex applications, like building routers and express applications, Etc. All of these things in cases, people are using Arrow functions today closing over State and those are perfectly acceptable and very valid use case. As but, there are a number of cases for smaller operations where having to use Arrow functions, could be problematic. We're not the current semantics or current capabilities to do bind for good for binding ‘this’ in a reference either again require an arrow to capture or require. Doing like o.f by know, Etc. So there's we have all these cases where we use bind today that are somewhat limited, because of the fact that bind flies from the left and all of these things are right reasons why I had See if introduced I can find the slide. So this was the case where I was just guessing one of the things that we've seen that I've seen in a lot of applications that are written using not only is JavaScript. So if you're looking at someone that's using rxjs, for example, some of them might be using pipelines in the future. If I'm looking at even the typescript code base itself. We have a lot in typescript. 
We have a lot of internal functions that are very design. Heavily built compiler around, an FP style of development, especially for our scanner. Parser tree Transformations, Etc. So a lot of these one-off functions that we end up creating five or six different versions of it to pass different parameters. Where being able to do partial application of any kind of capture state would be very valuable. Any cases here is an example. If you're trying to do mapping and do some of Math style operation. I even Envision and have a proposal that's not yet. Been proposed providing operators, but functions for each of various operators. You could use it with in ecmascript to make mapping and reducing filtering, Etc. All much simpler without requiring a closure without requiring, this this possibility of side effects of mutations and simplify a lot of what you're reading. So it's much to read than passing a Arrow of X, comma, X into X. Plus 1. I mean, those are both. Those are readable, but we also have existing functions that are not simple math, operations, that you might want to be able to partially apply. And I find all of those to be valuable use cases for me to consider still approaching this. +RBN: Well, that's the case. I find that unfortunate. The there's one of the goals for this proposal to provide a mechanism to give you some the ability to do some syntactic capabilities that you can't do with bind today and to deal with the fact that Arrow functions don't allow eager evaluation that there's a lot of complexity with eager evaluation semantics trying to get something that's eagerly evaluated to be something like what you can do with an arrow function. To you have to, again, pull out constants for any state mutations. You have to ensure that your the things you're closing over aren't mutable for a very simple cases. That's generally fine, for more complex cases, as you're building more complex applications, like building routers and express applications, Etc. 
In all of these cases, people are using arrow functions today, closing over state, and those are perfectly acceptable and very valid use cases. But there are a number of cases for smaller operations where having to use arrow functions can be problematic, and the current capabilities for binding ‘this’ in a reference either again require an arrow to capture it, or require doing something like `o.f.bind(o)`, et cetera. So we have all these cases where we use bind today that are somewhat limited, because of the fact that bind applies from the left, and all of these things are the reasons why; let me see if I can find the slide. So this was the case where I was discussing one of the things that I've seen in a lot of applications that are written in JavaScript. If you're looking at someone that's using RxJS, for example, some of them might be using pipelines in the future. Or if I'm looking at even the TypeScript code base itself: we have a lot of internal functions in TypeScript, and the compiler is heavily built around an FP style of development, especially for our scanner, parser, tree transformations, et cetera. For a lot of these one-off functions, we end up creating five or six different versions of them to pass different parameters, where being able to do partial application, or any kind of captured state, would be very valuable. In any case, here is an example: if you're trying to do mapping and do some Math-style operation. I even envision, and have a proposal that's not yet been presented, providing not operators but functions for each of the various operators you could use within ECMAScript, to make mapping and reducing and filtering, et cetera, all much simpler, without requiring a closure, without this possibility of side effects or mutations, and to simplify a lot of what you're reading. So it's much easier to read than passing an arrow like `x => x + 1`. 
I mean, those are both readable, but we also have existing functions that are not simple math operations that you might want to be able to partially apply. And I find all of those to be valuable use cases for me to consider still approaching this.

-MM: Okay. So if there are compelling examples that That would be interesting, but none of the examples that you've presented seem compelling, all of them are exactly things where I look at them and say what's the big deal? I would just write if I encountered that I would just write an arrow function. or just use bind? I haven't seen a single case where the example is so compelling that it's worth crippling the attempt of people to learn The Language by introducing new syntax. the you have on here is a perfect example of the, the pipe filter with the up arrow and the tilde, And the question mark. How look at how much new syntax were introduced? Into a language that already has too much syntax. We're talking about a real cost on the ability of novice programmers and people approaching the language. Has to look at others people's code, figure it out and start learning what they're doing and everything and you have to ask for each thing. are the benefits of the thing that you're seeking worth the cost in learnability to the millions of novices that keep coming into JavaScript. 

+MM: Okay. So if there are compelling examples, that would be interesting, but none of the examples that you've presented seem compelling; all of them are exactly things where I look at them and say, what's the big deal? If I encountered that, I would just write an arrow function, or just use bind. I haven't seen a single case where the example is so compelling that it's worth crippling the attempt of people to learn the language by introducing new syntax. The one you have on here is a perfect example: the pipe filter with the up arrow and the tilde and the question mark. 
Look at how much new syntax we've introduced, into a language that already has too much syntax. We're talking about a real cost on the ability of novice programmers and people approaching the language to look at other people's code, figure it out, and start learning what they're doing. And you have to ask, for each thing: are the benefits of the thing that you're seeking worth the cost in learnability to the millions of novices that keep coming into JavaScript?

-we have a few more replies on this topic. I think it would be good to get to those. Yulia. 

+we have a few more replies on this topic. I think it would be good to get to those. Yulia.

-YSV: Yeah, I don't want to I don't want to exactly a pile-on hear my comments. Very similar mark one. One issue I have with when I reviewed this proposal is that the person thought that I had is a number of these examples can be done with arrow functions. And while I like the concept of partial application, and what it does for our language, many of the instances within JavaScript are often using bind to bind this. This is when this is something that you see very often. But I haven't seen examples that are sufficiently complex that would warrant like out in the wild that would warrant this special syntax, especially since we have Three or four, distinct proposals that are tackling this question of how to make a better bind in a sense. The pipeline operator in some ways is also creating a partial application approach for people. As is the bind. And there's another proposal that was Half a year ago, with the double semicolon here. Now, we have the tilde and the question mark, I would be very concerned to see all of those proposals go into the language and I'm already concerned about the complexity that we're introducing by discussing each of them without talking about this larger problem of I guess it's this larger bind problem, or this larger argument application problem. 
So, at the moment, I'm more confused about how we should really approach this problem or if we can formulate it that properly captures what we're trying to do here. 

+YSV: Yeah, I don't want to exactly pile on here; my comments are very similar to Mark's. One issue I had when I reviewed this proposal is that a number of these examples can be done with arrow functions. And while I like the concept of partial application and what it does for our language, many of the instances within JavaScript are often using bind to bind `this`; this is something that you see very often. But I haven't seen examples out in the wild that are sufficiently complex to warrant this special syntax, especially since we have three or four distinct proposals that are tackling this question of how to make a better bind, in a sense. The pipeline operator in some ways is also creating a partial application approach for people, as is bind, and there's another proposal from half a year ago with the double colon. Now we have the tilde and the question mark. I would be very concerned to see all of those proposals go into the language, and I'm already concerned about the complexity that we're introducing by discussing each of them without talking about this larger problem: I guess this larger bind problem, or this larger argument application problem. So, at the moment, I'm more confused about how we should really approach this problem, or whether we can formulate it in a way that properly captures what we're trying to do here.

-RBN: Yeah, I was gonna say that this is something that again J.S.Choi and I have had some discussions around the bind-this which was the original double colon proposal for this binding and how the two proposals can kind of work together around this binding, as well as partial application that they don't collide with each other in any way. 
I do think that I can understand the need to see more compelling examples, compelling examples are kind of hard to fit in the PowerPoint presentation. I do have a couple examples on the explainer and I'd be willing to look into more examples of what's needed to show compelling, use cases, cases, if that's necessary. Most of the examples here are essentially contrived, example is designed to show how the syntax Works without introducing the significant amounts of additional complexity. 

+RBN: Yeah, I was going to say that this is something that, again, J.S.Choi and I have had some discussions around: the bind-this proposal, which was the original double colon proposal for `this` binding, and how the two proposals can work together around `this` binding as well as partial application, so that they don't collide with each other in any way. I do think I can understand the need to see more compelling examples; compelling examples are kind of hard to fit in a PowerPoint presentation. I do have a couple of examples on the explainer, and I'd be willing to look into more examples to show compelling use cases, if that's necessary. Most of the examples here are essentially contrived examples designed to show how the syntax works without introducing significant amounts of additional complexity.

YSV: Right now, complete examples that sort of show how this solves a problem in the wild would be really beneficial. I also think we should take up this broader question, because the two of you talked about how these two proposals could be written in such a way that they didn't conflict. I'm wondering if there is a broader question that we should be asking, a broader problem that we should be answering, that wouldn't require... well, right now we have three syntaxes for this, which is a lot. Can we reframe this problem? 
And really tighten it into something that we can associate with JavaScript written in the wild, that really solves a user problem?

-I'm good. Next up is Shu. So, my topic. 

+I'm good. Next up is Shu. So, my topic.

-SYG: So I personally agree with prioritizing readers over writers here, though. I have a concrete question. I'm still kind of missing at a Level, I guess on what the value, add over arrows. Is, I've heard. eager evaluation like, what is the problem with like if I were to use an arrow for some of these things you showed this example of this refactoring Hazard where you had an actual application and then you want to turn it into a partial application and it had and i++. Yes, exactly this slide. What is the value? Add for using and over an arrow? Like, if I were to make an arrow? I would just move the I plus plus out. What why is that such an issue? 

+SYG: So I personally agree with prioritizing readers over writers here. Though, I have a concrete question. I'm still kind of missing, at a high level I guess, what the value-add over arrows is. I've heard eager evaluation, but what is the problem with, like, if I were to use an arrow for some of these things? You showed this example of a refactoring hazard where you had a normal application, and then you want to turn it into a partial application, and it had an `i++`. Yes, exactly this slide. What is the value-add of using this over an arrow? Like, if I were to make an arrow, I would just move the `i++` out. Why is that such an issue?

RBN: It's more about... I'm not trying to describe a hazard of arrows in general, but if you were to, say, naively just put an arrow function in front of this, you would end up incrementing ‘i’ on each evaluation. It's less about this being a very specific case of something that happens a lot, and more about having this specific syntax for partial application making it very clear that the arguments are bound. 
The placeholders are in the specific places, so it doesn't require you to name arguments that you don't need names for. Otherwise you'll end up with an arrow that might take `a, b, c` to pipe through arguments, or `_0, _1`, et cetera, to pass in these arguments that you don't necessarily need names for. And if you needed names within a debugger for the function, they could theoretically be pulled from the function that's partially applied. Again, as was mentioned earlier, JavaScript doesn't really do anything with names of arguments outside of how we handle destructuring for object literals. So the downside of an arrow is having to name things that are essentially "argument goes in, argument comes out", with `_1` or `_2`, et cetera, into the function that you're actually applying, or having to think about and add intelligent names. These are all decisions you have to make, whereas partial application takes those decisions out of the equation. You don't have to worry about what the names are for these things; you don't have to worry about pulling `i++` out of the function call. These things all just fall out from eager evaluation and from the placeholders not requiring names. So in essence, it's designed to actually simplify some of the types of things you might do with an arrow, without running into the same caveats of an arrow, again, having to pull out the `i++`.

-SYG: I don't see those caveats a I guess, I personally I disagree with that sense because I don't find partial application to be a thing that I would broadly. Apply. I say that. I suppose, I say that more as an implementer like moral hazard is kind of built into all language design, but I am I don't want really a feature who's point is to let new function wrapper proliferate, because it's not going to be free, but I guess that's a, that's beside the point here. 
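RBN's eager-evaluation point can be sketched as a small, runnable example. The function `f` here is a hypothetical stand-in for the slide's example, and the proposal's `?` placeholder syntax appears only in comments, since it is not valid JavaScript today:

```javascript
// A minimal sketch of the refactoring hazard discussed above.
function f(a, b) {
  return a * 10 + b;
}

let i = 0;

// Arrow version: `i++` runs on EVERY call, so the "fixed" argument
// drifts between invocations.
const lazy = (x) => f(i++, x);
const lazyResults = [lazy(1), lazy(2)]; // [f(0, 1), f(1, 2)] => [1, 12]

// Eager semantics (roughly what `const g = f(i++, ?)` is proposed to
// do): evaluate the fixed argument once, up front. `bind` also applies
// arguments eagerly, though only from the left.
i = 0;
const eager = f.bind(null, i++); // `i++` is evaluated exactly once
const eagerResults = [eager(1), eager(2)]; // [f(0, 1), f(0, 2)] => [1, 2]
```

Hoisting `i++` out of the arrow by hand gives the same behavior as the `bind` version; the proposal's argument is that the placeholder form makes that choice for you.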
+SYG: I guess I personally disagree with that, because I don't find partial application to be a thing that I would broadly apply. I suppose I say that more as an implementer: moral hazard is kind of built into all language design, but I don't really want a feature whose point is to let new function wrappers proliferate, because it's not going to be free. But I guess that's beside the point here.

-RBN: Say if you're going to introduce if you're putting this into an arrow function, you're creating a function wrapper. Anyways, it's not exactly exactly or even if you're using a function.bind, you're not, doesn't for you anything. If anything it closes over less State because it doesn't have to maintain a reference to the environment record,

+RBN: If you're putting this into an arrow function, you're creating a function wrapper anyways. Or even if you're using `Function.prototype.bind`, that doesn't save you anything. If anything, partial application closes over less state, because it doesn't have to maintain a reference to the environment record.

SYG: But it encourages creating more functions. I don't see what you have said as caveats: arrow functions being slightly more difficult to use and requiring a bit more thought, I don't really see those as issues. I think there's a disagreement here, but I've said all I wanted to say; we can advance the queue.

@@ -473,7 +489,7 @@ WH: I would like to echo the concerns about this blowing past the syntax budget

JHD: So, the three code blocks here. The “with bind” and “with partial application” blocks make perfect sense to me; they're like straight, simple analogues. My concern is over the third block, which kind of captures the `new` intention in the binding. This seems really weird to me for the language. 
Despite adding `new.target` to account for legacy use cases, where a function can be called or constructed and still return an instance, we have moved pretty strongly away from things constructing without explicitly using `new` at the call site, and this allows you to create a function that produces an instance without `new`. Producing an instance in itself can happen all over the place, right? That is not an issue. But it produces something that's an `instanceof` the function you just called, without using `new`, and that's weird. So I'd love to see more motivation for that new capability. In other words, I think the first two things just kind of naturally already work in the language; that's not introducing a new thing, and I would be surprised if anyone was concerned about those first two, because that's the way `bind` already works. But this last chunk: I think that is a very large change, and I'd love to see more motivation for it, or understand the motivation for it.

-RBN: And this is something that we've discussed in the pipeline champions channel as well in the past around. Basically, there's two things that are the reasons why I considered making the new keyword part of the partial application. One is that there is currently no easy mechanism to, if I wanted to call. Map array map on an array of items and passing in a constructor for a class that won't create new instances into the result. Because of the fact that it requires new. So you have to wrap that in an arrow function or have some other mechanism for that. We don't have a way, there's no like function or a function dot. Prototype.new that works like reflect construct or works like new again instance that we could pass that function where we need to do some type of mapping. So instead we have to use an arrow which goes to the second case. If you were to take a expression. 
was o 2, equals new c 1 comma 2 and then I decided, oh, I want to make this a partial application because I want to add a placeholder. This example doesn't use placeholders because it was trying to illustrate specific differences. But if I want to pass in a place holder, and if I were going to turn this into a narrow, I would say a arrow news, c 1 a for Sample. And again, this is the syntax that we were using for partial application was designed around the removing the argument list of things that you need to name the eager evaluation of the arguments that you're applying so that you can pull out a function that you can then call later. So if you were to take the same thing you were doing and just replace one of these with a placeholder, suddenly, if we don't have the new keyword, now, I have to remove the new keyword and it To make sure I call new somewhere else. Whereas, if I was using Arrow function, calling new on the Arrow function with throw an error because you can't do an arrow function. It doesn't have a valid construct. So the goal here was to emulate bind, but do something you can't do with by hand and with Constructors today, which is give you the ability to evaluate this in a as a call back position. Okay. +RBN: And this is something that we've discussed in the pipeline champions channel as well in the past around. Basically, there's two things that are the reasons why I considered making the new keyword part of the partial application. One is that there is currently no easy mechanism to, if I wanted to call. Map array map on an array of items and passing in a constructor for a class that won't create new instances into the result. Because of the fact that it requires new. So you have to wrap that in an arrow function or have some other mechanism for that. We don't have a way, there's no like function or a function dot. 
`prototype.new`, nothing that works like `Reflect.construct`, or that works like `new` against an instance, that we could pass where we need to do some type of mapping. So instead we have to use an arrow, which goes to the second case. If you were to take an expression like `const o2 = new C(1, 2)` and then decide, oh, I want to make this a partial application because I want to add a placeholder (this example doesn't use placeholders because it was trying to illustrate specific differences), then if I were going to turn this into an arrow, I would write `a => new C(1, a)`, for example. And again, the syntax we were using for partial application was designed around removing the argument list of things that you need to name, and around the eager evaluation of the arguments that you're applying, so that you can pull out a function that you can then call later. So if you were to take the same thing you were doing and just replace one of these arguments with a placeholder, suddenly, if we don't have the `new` keyword, I have to remove the `new` keyword and make sure I call `new` somewhere else. Whereas if I was using an arrow function, calling `new` on the arrow function would throw an error, because an arrow function doesn't have a valid construct behavior. So the goal here was to emulate bind, but do something you can't do with bind and with constructors today, which is give you the ability to evaluate this in a callback position. Okay.

JHD: Yeah, I mean, thank you. As I've said on GitHub, I think `() => new Something` would be fine there, even if that something is a partially applied function, like your `const f` here. But yes, thank you.

@@ -483,7 +499,7 @@ WH: I read through the proposal. I followed the logic, but the thing that bother

RBN: I would say this is more of an example of something that just, again, falls out of how bound function exotic objects work. If I were to today, have a function of presents. 
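The two behaviors under discussion can be sketched with a runnable example; the class `C` and function `g` here are hypothetical stand-ins for the slide's example:

```javascript
// A class cannot be passed directly as a mapping callback: `map` calls
// it without `new`, which throws, so today an arrow wrapper is needed.
class C {
  constructor(x) {
    this.x = x;
  }
}

let threw = false;
try {
  [1, 2].map(C);
} catch (e) {
  threw = true; // TypeError: class constructors can't be invoked without 'new'
}
const wrapped = [1, 2].map((x) => new C(x)); // works: two C instances

// An ordinary function that returns `new C(...)` yields a C instance
// whether it is called plainly or with `new`: a constructor's object
// return value replaces the freshly created `this`.
function g(x) {
  return new C(x);
}
const plain = g(1); // a C instance
const constructed = new g(2); // still a C instance; the returned object wins
```

This existing return-override behavior of ordinary functions is what the debate below about `new` on the partially applied `g` turns on.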
-WH: No, it doesn't because you're assuming that the line which creates *g* creates something with a constructor. +WH: No, it doesn't because you're assuming that the line which creates *g* creates something with a constructor. RBN: It does @@ -491,7 +507,7 @@ WH: Because if you're wrapping a `new`, it should only be a function and not a c RBN: This is not the same differences, new new function. This is more like if you had a constant g equals a regular function, not an arrow function that returns new C and you call that function G without new, you get an instance of C if you call you call that function with new. [Get an instance of C because whatever you created in the Constructor, it gets replaced with what you actually new’ed. Yes, that is like, that is the line F f&o for I think about G. -RBN: I'm talking about G as well, I if I could write this up, would as an example of where it's not new new functioned. It's not that I've taken this expression and placed it in the position of where G is. So, it looks like new new C 1, comma 2. I've produced a function. It's a function. that returns an object. So, when I call new on that function, it's going to evaluate the function return, the result, which is an object and its, which is not going to be. This that gets created when you when you call new, This falls out of how function evaluation Works. Currently if you're using just regular function and not arrows. +RBN: I'm talking about G as well, I if I could write this up, would as an example of where it's not new new functioned. It's not that I've taken this expression and placed it in the position of where G is. So, it looks like new new C 1, comma 2. I've produced a function. It's a function. that returns an object. So, when I call new on that function, it's going to evaluate the function return, the result, which is an object and its, which is not going to be. This that gets created when you when you call new, This falls out of how function evaluation Works. 
Currently, if you're using just a regular function and not arrows.

WH: So for *f*, I see this as doing a bind, which you can either call directly or use `new` on. The *g* is not like bind. *g* is like creating something which can only be called. So invoking `new` on it makes no sense.

@@ -507,23 +523,21 @@ RBN: I don't think I would use an ellipsis of question marks that it, I don't th

JSC: Yeah, fair enough. Just something to consider. But I know you've got bigger fish to fry; that's all I wanted to raise.

-BT: Right, actually, with that done. The queue is empty. All right. 

+BT: Right, actually, with that done. The queue is empty. All right.

RBN: Well, it sounds like, from the discussion so far, I need to do some more convincing before we can advance to stage two, so I won't ask for advancement at this point. I appreciate the feedback. If anyone else has other feedback on the proposal, if they could open issues on the issue tracker for the proposal, I'd appreciate it; and if they have compelling examples that they feel would be useful, if they could add those to the issue tracker as well, or in a PR to the readme, I'd appreciate it. And I'll do some additional investigation for some use cases where I would find this useful in various code bases.

### Conclusion/Resolution

-Does not advance at this time 

+Does not advance at this time

## JS Module Blocks Update

+
Presenter: Surma (SUR)

- [proposal](https://github.com/tc39/proposal-js-module-blocks)
- [slides](https://drive.google.com/file/d/1jeBsBdiy7wuyak6pQ4aWdhnyzZF8Pxj1/view?usp=sharing)

-
-
-
SUR: All right, so this is about module blocks and it's an update. I'm not looking for stage advancement this time around. Dan is on an extended leave, so it will just be me championing. Obviously I have worked with many other folks as well.

SUR: Quick refresher. 
Module blocks adds a new syntax for blocks that are modules, which we can later dynamically import to bring that module to life, but also send across to other realms, because they are structured cloneable, making them a foundational piece for all kinds of concurrency patterns and scheduling patterns. There's also Dan's proposal of, the name actually escapes me, module fragments, which is completely decoupled; we have decided to not merge them, which Dan was talking about last time. Module blocks remains completely self-contained, and module fragments is Dan's proposal that he will keep working on. He will be using module blocks as a basis, but we should just be talking about module blocks in isolation, at least for now.

@@ -532,7 +546,7 @@ SUR: So what is new? Well, the first thing that happened is we have an HTML spec

SUR: But that's just a bit of news from HTML. More relevant for this group, I think, is that we are adding a bit of sugar syntax, where the module function above is sugar for what you see at the bottom, which is a single-function module with just one default export. And the reason for that is that the more we thought about it and looked at the prior art from other languages, lots of the threading paradigms and usage patterns use functions as their fundamental primitives, rather than modules. For example, on the left you have Swift and on the right-hand side you have Kotlin, with the very popular reactive programming pattern, and they always have these individual functions that do some processing on data, and you can schedule these functions to run on one thread or another, on a background thread or on the UI thread. And I expect that we would actually want similar paradigms to be possible in JavaScript. If you imagine this with the full module block syntax, it would get quite noisy, while with a module function. 
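The sugar SUR describes can be sketched as follows. This is proposal syntax (stage 2), so it does not run in any engine today, and the worker handoff at the end is a hypothetical usage pattern, not part of the proposal text:

```js
// Proposed sugar: a module function…
module function double(data) {
  return data.map((x) => x * 2);
}

// …is shorthand for a module block with a single default export:
const double = module {
  export default function (data) {
    return data.map((x) => x * 2);
  }
};

// Module blocks are structured cloneable, so one could be posted to a
// worker and dynamically imported there (hypothetical usage):
worker.postMessage(double);
// In the worker:
//   const { default: fn } = await import(event.data);
```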
This will actually become possible, and the observeOn or subscribeOn functions could, for example, take a worker or something. So yeah, we were thinking that we want all these patterns to be adoptable in JavaScript, even though of course JavaScript can't, or shouldn't, adopt the exact same paradigms, because shared memory isn't a thing, and so different paths have to be taken. But still, the programming model I would want to see in JavaScript for this makes it very easy to define a flow for data and then process it on the appropriate thread. So that's why we've decided to add the syntax for module functions, and that has also been added to the spec draft that is in the repository. So we have module block expressions, module function expressions, module generator expressions, and their async counterparts. And if you want to, you can take a look at the new features as they are right now at this URL.

-SUR: the thing I'm here for actually is because I got stage two last time, but we kind of forgot to talk about stage three reviewers. I have since then gotten two offers out of bands from Leo and from guy Bedford who have offered and kind of agreed to be stage three reviewers. So I just wanted to put this on here that I have two people, if there's any more people who would want Strong feelings about this and want to review. You are very welcome to but I guess at this point. I actually even come to my stateroom years. years. So this was pretty much already, it and just quick update.

+SUR: The thing I'm here for, actually, is that I got stage two last time, but we kind of forgot to talk about stage three reviewers. I have since then gotten two offers out of band, from Leo and from Guy Bedford, who have offered and kind of agreed to be stage three reviewers.
So I just wanted to put this on here: I have two people, and if there are any more people who have strong feelings about this and want to review, you are very welcome to, but I guess at this point I have my stage three reviewers. So this was pretty much it; just a quick update.

JWK: So I want to reject the module function sugar because I have two concerns about this new syntax. In the desugared form, it's easy to see that all the items inside the module block {} curly braces are inside another lexical scope. The module {} isolates everything from the rest of the file, but in this shorthand syntax (module function (A, B = expr) {}), the function parameters A and B are outside of the {}. Although we know the initializers of A and B run in another scope, that isn't scoped visually, and I think that might bring confusion. The second concern is that if you provide syntax sugar for `export default function` but you don't provide sugar for `export default expr`, that creates an asymmetry in the language.

@@ -552,13 +566,13 @@ JHX Yeah, so so I think, I think the make its it may be a some confusion because

SUR: Okay, so if someone specifies a default parameter, that's where the confusion could come in: the default parameter value would semantically be inside the module block, but it looks like it's outside, while in the desugared version that would not be the case. Okay, I get that.

-JHX : A second comment about it. I understand the motivation of model functions syntax.
We want to add syntax, but even with more syntax it still seems not good enough compared to other languages, because those languages have syntax much closer to arrow functions, while what we provide here is more like a traditional function.

SUR: So yeah, I was thinking about also doing module arrow functions to get even closer to that syntax, but then I think someone pointed out that it might cause confusion with `this`. I guess I have to rethink whether this syntactic sugar can be specified in a way that's consistent and meaningful, or if maybe the desugared version is enough. Maybe I'll do some hypothetical explorations there, but it definitely seems like there's some concern about the sugar.

JHX: Thank you.

-MM: Last time, when module blocks and module fragments were thought to be reconcilable, there was some confusion about how static is the semantics of the thing we're talking about. There was one extreme which I think corresponded to the original module blocks. It's completely static. It's not linked. It's not initialized. At another extreme. You've got linked and initialized module instances.
And then with the fragments we seemed to be dancing around the possibility of something that's linked but not initialized, which seems very confusing to me. So are these purely static? Is ‘m’ here something for which no decision about what it links to has been made?

SUR: Yeah, so I can't really speak to module fragments. But as of this proposal, you could think of ‘m’ as just a string of source code, and it will only get properly evaluated by the time you call dynamic import. I think we were talking about some things potentially being early, like syntax errors. Actually, that's how the spec acts now: the body gets parsed at declaration time, but it doesn't get evaluated until it is imported.

@@ -582,21 +596,21 @@ SUR: Yeah, I'm definitely not giving up on it yet. I want to explore further and

NRO: Happy to be a reviewer, but I'll confirm this by the end of the meeting.

-JWK: I opened an issue. It is possible to be a perf footgun that developers create a new module block every time. Even if they don't need to. I posted the example code in chats and you can see every time the user clicked the button, it will create a new module block in the memor. that module will never be never be recycled within an execution context. That problem isn't really resolved. I think, should we choose either side or, or try to support both sides with the solution proposed in the issue.
+JWK: I opened an issue: it is possible for this to be a perf footgun where developers create a new module block every time, even if they don't need to. I posted the example code in chat, and you can see that every time the user clicks the button, it will create a new module block in memory; that module will never be recycled within an execution context. That problem isn't really resolved. I think we should choose either side, or try to support both sides with the solution proposed in the issue.
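JWK's chat example is not captured in the notes; here is a hypothetical reconstruction of the pattern being described. Module blocks are proposed syntax and do not run in any engine today, and `button` is an assumed DOM element:

```js
// Hypothetical sketch of the footgun: a fresh module block value is
// created on every click. Each one is a distinct module that the host
// may keep alive for the lifetime of the realm, so none of them can
// ever be collected.
button.addEventListener("click", async () => {
  const task = module { export default 1 + 1; };
  const { default: result } = await import(task);
  console.log(result);
});
// Hoisting `module { ... }` out of the handler would create it only once.
```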
SUR: Yeah, I promise to get back to the issue, and then we can continue discussion there, since we're out of time.

-
### Conclusion/Resolution
-reviewers will be:
+
+reviewers will be:
Mathieu Hofman (Agoric)
Jordan Harband (Coinbase)
Guy Bedford (OpenJS Foundation)
Leo Balter (Salesforce)
Nicolò Ribaudo (@babel)
-
## DurationFormat
+
Presenter: Ujjwal Sharma (USA)

- [proposal](https://github.com/tc39/proposal-intl-duration-format)

@@ -630,11 +644,11 @@ SFC: The other comment I had was to say that we do have the data. We just don't

USA: Yeah, thank you for your review as well. So, I actually had one question if you don't mind before we finish, which is: does the data include data for negative durations, or is it just for positives?

-SFC: Number formatting supports negative values. I'm still unclear on exactly what the issue is with regards to formatting of negative durations.
+SFC: Number formatting supports negative values. I'm still unclear on exactly what the issue is with regards to formatting of negative durations.

-USA: Right. The yeah. Yeah, the individual would just be for. yeah, I see what you mean. Okay. Thank you for your comment. I think. I agreed. Great, and I'd be happy to resolve it. Personally. I have no strong opinions. Can we ask conditional stage 3, on that one thing because a smaller group of people might be sort of more interested in that Niche discussion.
+USA: Right, yeah. I see what you mean. Okay, thank you for your comment; I agree, and I'd be happy to resolve it. Personally I have no strong opinions. Can we ask for conditional stage 3 on that one thing, because a smaller group of people might be more interested in that niche discussion.

-BT: All right. Is there any objection to moving duration for it to stage 3 modulo those, that side discussion, that's going to happen. I'm not hearing any objections.
I think we're at stage three conditional on that, that discussion mentioned. Thank you. +BT: All right. Is there any objection to moving duration for it to stage 3 modulo those, that side discussion, that's going to happen. I'm not hearing any objections. I think we're at stage three conditional on that, that discussion mentioned. Thank you. BT: would be good to make sure that anyone who expressed interest in joining knows when that chat is going to happen. Can we put that in the notes? @@ -645,5 +659,5 @@ MLS: I think we're fine with it. SFC: Cool, thank you. ### Conclusion/Resolution -* Stage 3 conditional on discussion of whether fractional digits sets minimum as well +- Stage 3 conditional on discussion of whether fractional digits sets minimum as well diff --git a/meetings/2021-10/oct-26.md b/meetings/2021-10/oct-26.md index 4a2a9606..513cd075 100644 --- a/meetings/2021-10/oct-26.md +++ b/meetings/2021-10/oct-26.md @@ -1,7 +1,8 @@ # 26 October, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Bradford C. Smith | BSH | Google | @@ -16,8 +17,8 @@ | Jack Works | JWK | Sujitech | | Yulia Startsev | YSV | Mozilla | - ## Intl Locale Info update + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/tc39/proposal-intl-locale-info) @@ -31,11 +32,11 @@ FYT: So, let's go through this red area that the three of the PR that proposed a FYT: So that's one of the change we discussed in order to reflect the change. So this is a new spec. The red area is the one who chose, which would that change used to be too thin and there too. each one of those are integer now become an array of integer, which actually this stage is return a list of integer and next page we'll show we changed this to an array? So here is the change. We get this from the weekend and we change it to create an array from the list and then we return. 
So this is the first change: PR 44 responds to Andrea's finding of the need to support non-contiguous weekends, and TG2 discussed it and thinks this is the right thing to do. We hope TC39 members can also support it and reach consensus on adopting it.

-FYT: The second part, which is pretty trivial. we saw how in the before stage 3, we didn't adding to appendix A - 402 has an appendix about implementation dependent behavior - the new objects. We somehow missed out who we didn't add it there for now which has been amended it is part to have the whatever the item. That should be implementation-dependent. So this is attacked by adding in the green area. Of course it is. There's appendix is have a larger party, which it doesn't show anything that we didn't change. This is the car. We change the Locale.
+FYT: The second part is pretty trivial. Before stage 3, we didn't add the new objects to Appendix A (402 has an appendix about implementation-dependent behavior); we somehow missed adding them there. That has now been amended so the appendix lists the items that should be implementation-dependent, which is addressed by the addition in the green area. The appendix is part of a larger section; anything not shown is unchanged - we only changed the Locale part. So that's the second change, in PR 38.

FYT: The other thing is a trivial one; we just didn't show the text here. A reviewer required us to make sure that the identifiers we return are canonicalized.
Basically, in the spec text we just added the word "canonical" before "identifier" to make sure that when we return an identifier it is canonical. I think we had assumed that it was canonical, and Andrea wanted to make it more explicit: if an implementation internally has some uncommon identifier, it should be converted to a canonical identifier. So we make it clear that that is the case. I didn't show the spec text here; it is basically adding the word "canonical" in several places.

-FYT: so, one of the issue that we talked about, we cannot reach consensus. And is that is if somehow the Unicode extension - sorry, we reach the consensus that would not going to change it. If a calendar, whether that should impact the week info to exclusive and a decision. We're in the You to discuss that all the Unicode extension that Locale will have the effect that we don't want to make it explicit A which kind of part of making impacting the week info in the all possible. Sometimes they may be the same but all possible extension may change that. So that will be have leave us some freedom in the future. Maybe, in some cases, we do need to adding that right now is also only ISO 8601, calendar have such an impact.
+FYT: One of the issues we talked about - sorry, one we did reach consensus on, namely that we are not going to change it - is whether a calendar in the Unicode extension should impact the week info. The decision in TG2 was that any Unicode extension on the locale may have an effect, and we don't want to make explicit which parts may impact the week info; sometimes they may be the same, but any extension may change it. That leaves us some freedom in the future; maybe in some cases we do need to add that. Right now only the ISO 8601 calendar has such an impact.
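For reference, the shape change PR 44 describes can be sketched in plain data. The values are illustrative only, not taken from any particular locale's data:

```js
// Before PR 44 (sketch): the weekend was expressed as a start/end pair of
// integers, which cannot represent a gap in the middle.
const before = { firstDay: 1, weekendStart: 6, weekendEnd: 7 };

// After PR 44 (sketch): `weekend` is an array of day numbers
// (1 = Monday ... 7 = Sunday), so a non-contiguous weekend such as
// Friday plus Sunday is representable.
const after = { firstDay: 1, weekend: [5, 7], minimalDays: 1 };
```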
FYT: There are other issues that we have not been able to address and that are still under discussion. One that someone brought up is about direction in textInfo: what happens with vertical writing systems? Take Mongolian: Mongolia currently writes Mongolian in Cyrillic script, but Inner Mongolia in China uses the Mongolian script, which is written vertically. And if Chinese, Japanese, Korean, or Tibetan are written vertically, how do we deal with text directionality? This is an additional feature request; originally textInfo was only intended to support horizontal line layout, but there is discussion about whether we should address it. The other issue is whether we need to define the behavior when no time zone is used within a region; that is about trying to bulletproof the spec, and we haven't resolved it yet. It's an edge case, but I think we probably need to deal with it before we advance to stage four.

@@ -45,7 +46,7 @@ FYT: So, a particular, thank Shane and ZB to enrich our for the review and many

FYT: The second thing is: if you can, join us for our discussion 45 minutes from now about the guideline for the order of returned collation arrays. Any questions or comments?

-RPR: There is no one on the queue at the moment. Would anyone like to go on the queue? Okay, so no questions or comments so far.
+RPR: There is no one on the queue at the moment. Would anyone like to go on the queue? Okay, so no questions or comments so far.

FYT: If there are no comments or questions, my request is: can we have consensus to retrospectively approve those 3 PRs to make them official?
@@ -58,12 +59,17 @@ FYT: Yep. Thank you. Yeah, I don't have probably scheduled too much time origina RPR: Anything more you'd like to - ? FYT: Thank you for your time, Frank. + ### Conclusion/Resolution + Consensus on all three PRs: + - changing `weekend` to a list - adding new items to 402 Annex A - canonicalizing identifiers + ## JSON.parse source text access update + Presenter: Richard Gibson (RGN) - [proposal](https://github.com/tc39/proposal-json-parse-with-source) @@ -83,7 +89,7 @@ RGN: Just to run through them really quickly. We had one, the biggest one for ad RPR: There is no one on the queue. -RGN: Okay, leaving this for now then and moving on. Serialization was deemed as an important use case. It was actually originally absent, but round-tripping was so compelling that it got added to the proposal. And it was since that time actually, just in the past few days that we have a bikeshed issue number, 18 opened up for how to do it. We've got a couple possibilities that have already been identified. The lower one was actually just put in as a placeholder where the replacer function could be provided with, for instance, a one time use symbol and then if the output value is an object and includes that symbol as a property key then the corresponding string becomes the raw text. Another possibility that you see above here is providing it essentially a one time use function. Where if the return value is the output of that function. Then that is serialized to a string. It becomes the raw text. And I think there, if we go with this, with this top point, of course, the question is, what does that output value look like? Is it a special symbol? Is it an object with an internal slot that indicates the particular JSON.stringify invocation, it corresponds with? Or it something else altogether. And I'm going to pause here again because this is the bulk of the conversation that I'm looking for in this meeting. There is a topic of the queue. So take it away in Mathieu. 
+RGN: Okay, leaving this for now then and moving on. Serialization was deemed as an important use case. It was actually originally absent, but round-tripping was so compelling that it got added to the proposal. And it was since that time actually, just in the past few days that we have a bikeshed issue number, 18 opened up for how to do it. We've got a couple possibilities that have already been identified. The lower one was actually just put in as a placeholder where the replacer function could be provided with, for instance, a one time use symbol and then if the output value is an object and includes that symbol as a property key then the corresponding string becomes the raw text. Another possibility that you see above here is providing it essentially a one time use function. Where if the return value is the output of that function. Then that is serialized to a string. It becomes the raw text. And I think there, if we go with this, with this top point, of course, the question is, what does that output value look like? Is it a special symbol? Is it an object with an internal slot that indicates the particular JSON.stringify invocation, it corresponds with? Or it something else altogether. And I'm going to pause here again because this is the bulk of the conversation that I'm looking for in this meeting. There is a topic of the queue. So take it away in Mathieu. MAH: Yeah, so one issue I raised on GitHub with outputting raw JSON like this. We should make sure that the value returned parses as valid JSON, so that the produced output cannot be changed unexpectedly, e.g. you cannot insert new keys or something like that this way. You cannot have a new, For example, close curly, start a new property, then open curly to inject a new property. @@ -115,7 +121,7 @@ MF: So I'm not really in favor of having raw produce kind of special objects. Li RGN: It sounds like you probably have a similar position to me. 
You can't articulate - well, can you articulate exactly why the one-time use approach seems better? Or is it just kind of a gut feeling? -MF: Like symbols being fresh for every invocation of this callback? Yeah. It's kind of just a gut feeling. I'm trying to avoid patterns where somebody where the symbol escapes the callback, and then people use and construct objects and pass those around and don't realize it has the symbol on it. That doesn't have to be fresh for every invocation of the callback, but it would have to be fresh for every invocation of stringify at least. So at the very least, I would like to go that far. And if there's no extra cost to every invocation, I think that would be even less risk of things like that. +MF: Like symbols being fresh for every invocation of this callback? Yeah. It's kind of just a gut feeling. I'm trying to avoid patterns where somebody where the symbol escapes the callback, and then people use and construct objects and pass those around and don't realize it has the symbol on it. That doesn't have to be fresh for every invocation of the callback, but it would have to be fresh for every invocation of stringify at least. So at the very least, I would like to go that far. And if there's no extra cost to every invocation, I think that would be even less risk of things like that. RGN: Right. Yeah, so and those are the three options. It's every invocation of stringify, every invocation of the callback, or it's just a global capability. That is, you know, shared across everything. @@ -125,17 +131,17 @@ SYG: Michael, could you expand on why? You said you rather not but I don't reall MF: of the three options. I have identified an issue with that final option. That issue was the way the symbol being used on not during an invocation of this callback and being put on objects and without people realizing that it's on those objects. They participate in JSON stringify and unexpected behavior happens for these developers. 
It's just a pattern we don't want to see. I don't think there's any -- -SYG: What is the expected Behavior? You're envisioning because it only applies to report as it only applies in the context of stringify, to begin with right on. How did they misuse it? +SYG: What is the expected Behavior? You're envisioning because it only applies to report as it only applies in the context of stringify, to begin with right on. How did they misuse it? MF: So, instead of getting a behavior where the normal JSON stringify algorithm is applied to that object, it bypasses, it uses the field. The field where the symbol is installed. -RGN: it's the pattern there would be you know, objects could be constructed which will serialize as an entirely different object and If that's passed between code, that was authored by different parties than is not clear from intra-property enumeration that this Behavior will be the case. And that seems concerning. I don't, I don't know if I can put anything stronger on it than “concerning”. +RGN: it's the pattern there would be you know, objects could be constructed which will serialize as an entirely different object and If that's passed between code, that was authored by different parties than is not clear from intra-property enumeration that this Behavior will be the case. And that seems concerning. I don't, I don't know if I can put anything stronger on it than “concerning”. SYG: Yeah, I see the argument. I don't find it compelling, but I think I see the argument, I guess you want the author to always, for each invocation of stringify, you want the author to type out intent by having to use the new raw tag? MF: Yes. Yes. I want the intent expressed in this callback. -SYG: I don't. Okay, so where is not compelling to me as I don't see why it's important to really Express intent for each invocation of stringify. Like I don't the possible (?) that's very compelling. Interesting to hear other thoughts because I seem to be the odd one out so far. 
+SYG: I don't. Okay, so where is not compelling to me as I don't see why it's important to really Express intent for each invocation of stringify. Like I don't the possible (?) that's very compelling. Interesting to hear other thoughts because I seem to be the odd one out so far. MF: How would you express the intent? How would you force that the intent is expressed without having some sort of freshness to the symbol? @@ -143,10 +149,9 @@ SYG: I mean, if it's not fresh you would use it as if it were fresh but you can MF: Im not concerned about somebody keeping the function around, +SYG: aving a function that returns an object with that symbol, This Global special symbol, seems intent enough for me. I don't know why you need to type it in new each time. He's that mean, invite me. -SYG: aving a function that returns an object with that symbol, This Global special symbol, seems intent enough for me. I don't know why you need to type it in new each time. He's that mean, invite me. - -MF: Authors intent that maybe somebody else's else's intention and +MF: Authors intent that maybe somebody else's else's intention and SYG: so but you have to call this function, right the author, put like you stop to pass this function that that creates this object with the tag into stringify like Like that seems intent enough for me. fact that someone was unable to didn't know for example, that transitively, the function of the passing of stringify ultimately create some objects with this tag. Like that seems not a problem to me. They should have known that I guess if they thought this was an issue. Like it happened to calling and if you don't know, that doesn't seem right for the language to restricted in that sense. @@ -158,17 +163,17 @@ MF: I feel like we're getting on a tangent. RGN: Yeah, maybe a little bit. So as I said, I will open an issue for this. And I think the biggest question is going to be, is there. 
Is there a problem with minting new symbols, for instance, on every invocation of stringify? Because if there is not, if that is acceptable it is going to be less surprising because code like they've got in the bottom here would never serialize this object in a different fashion than expected. Because this new capability isn't used and so the behavior here would always match current behavior. As opposed to if it is used it is valid across invocations than this object could have somewhere in its graph, a use of that symbol which would have surprising results because raw output intent has not been expressed in this stringify. But we'll discuss it more in GitHub I think. -SYG: Okay, as for I also want to respond to Michael's question earlier about The implementer question I suppose, there's the object case, with this symbol puts more strain on the GC, and that is creating many, many small objects. You have a generational GC which everybody does? Probably fine. I imagine these things are very short-lived. So they were just get scavenged pretty quickly. But it is, it does seem like more allocations, whereas, if you have a special exotic thing, you maybe you can tag a bit or something. Like it probably still is another allocation, but maybe slightly smaller. Seems like you and +SYG: Okay, as for I also want to respond to Michael's question earlier about The implementer question I suppose, there's the object case, with this symbol puts more strain on the GC, and that is creating many, many small objects. You have a generational GC which everybody does? Probably fine. I imagine these things are very short-lived. So they were just get scavenged pretty quickly. But it is, it does seem like more allocations, whereas, if you have a special exotic thing, you maybe you can tag a bit or something. Like it probably still is another allocation, but maybe slightly smaller. 
Seems like you and -MF: you lose somebody like a getter or something for the rawTag field of this object so that you don't pay that cost unless we're using the feature. +MF: you lose somebody like a getter or something for the rawTag field of this object so that you don't pay that cost unless we're using the feature. -SYG: Pay the cost of how would they get her? Get, get around the cost of the allocation. You still need to create the object. It would still need to have something. That's like there's a getter. +SYG: Pay the cost of how would they get her? Get, get around the cost of the allocation. You still need to create the object. It would still need to have something. That's like there's a getter. -MF: Well, you already have to pay that. Or for this whole proposal like this proposal requires you to pay that and prepare instructing, object. What I'm saying is to avoid the construction of the fresh symbols the whole time. Would it be like getter this symbol? Help with that. +MF: Well, you already have to pay that. Or for this whole proposal like this proposal requires you to pay that and prepare instructing, object. What I'm saying is to avoid the construction of the fresh symbols the whole time. Would it be like getter this symbol? Help with that. -SYG: I don't know what the getter would do. What's the get? +SYG: I don't know what the getter would do. What's the get? -MF: If you look at this second example, across the broad tag would so he worried about a lot of new small objects that they getter for had would be a getter for a fresh symbol. So you wouldn't have to create the fresh symbol to construct this object that's passed to the callback. You just have a getter which can be used in addition to the callback. +MF: If you look at this second example, across the broad tag would so he worried about a lot of new small objects that they getter for had would be a getter for a fresh symbol. 
So you wouldn't have to create the fresh symbol to construct this object that's passed to the callback. You just have a getter which can be used in addition to the callback. SYG: No, I think you misunderstood my concern. It was the small objects of the many small like actual objects with a symbol slot in it, like, with a property of the real tag that holds the strength that small object, the okay. @@ -188,129 +193,131 @@ RGN: If it is then that I think a well-known symbol is a clear front runner in h KG: [Serialisation] Yeah, just briefly. I'm the one who opened this issue 18. And in terms of, without getting into what the representation of the sort of raw thing is, this function-based approach seems like it is much nicer for users. So, I would much prefer for it to work like the first approach regardless of whether it's just an object with a particular symbol keyed value or an opaque object. I also think it should create an opaque object rather than an object with a symbol keyed value because the opaque objects don't require an additional look up. Internal slots are not things I think we should be scared of adding more of I think and in this particular case, it could even be a completely frozen, null prototype object that basically behaves like a primitive in the sense that you can't be adding new fields to it, but just has an internal slot with the string in it. -NRO: So we could avoid this question of how should the special object work look like that. By making the function store the value in the JSON.stringify closure. +NRO: So we could avoid this question of how should the special object work look like that. By making the function store the value in the JSON.stringify closure. CZW: I just would like to know, that you mentioned `raw` should mapping a single value to a single value with valid JSON notation. So I'd like to know what a value meanings in this context. 
Can `raw` accept complex structured results, like mapping a bigint to an object with a bigint property and descriptive keys?

-RGN: Yes, so value would be anything that is valid input for JSON.parse. So it's not necessarily A Primitive value. It would just be a singular language value or the representation of a singular language value. 
+RGN: Yes, so value would be anything that is valid input for JSON.parse. So it's not necessarily a primitive value. It would just be a singular language value, or the representation of a singular language value.

-RPR: okay, so at the end of the time box, in the end of the queue, Richard, is there any final wrap-up you want to just State? 
+RPR: Okay, so we're at the end of the time box and the end of the queue. Richard, is there any final wrap-up you want to state?

RGN: Look for something at the next meeting that is hopefully going to be ready for advancement. And check GitHub for conversation on this issue or any other.

RPR: Thank you. Thanks. Thank you very much Richard.
+

### Conclusion/Resolution

-* Discussion to continue on github
+
+- Discussion to continue on github

## Specifying order of lists returned from Intl APIs
+
Presenter: Shane Carr (SFC)

- [slides](https://docs.google.com/presentation/d/1tDvpl99axNaZQWm1VItYhztMMj3avV8jc8uvvXQLRI4/edit#slide=id.p)

-SFC: So hello. My name is Shane F car and I'm going to be presenting this question. 
+SFC: So hello. My name is Shane F. Carr and I'm going to be presenting this question.
This is a question that has come up in the context of multiple Intl proposals in Ecma 402. We've discussed it multiple times in our task group, TC39 TG2; we failed to reach consensus and decided to bubble it up to this group to hopefully get some guidance and help us form a style guide to answer this question, so that we can proceed and unblock the proposals that are blocked on it. So here's the problem space.

SFC: The problem space is that we have various Intl APIs that return lists of things, and we're trying to establish the order in which these things should be returned. Right now, these items are returned in an implementation-dependent order. These are three examples on the right of where this is currently coming up. In Intl PluralRules, we have a getter that returns a list of plural categories, and you can see what it's currently returning there. The second example is in the Intl enumeration proposal: we return a list of, for example, calendars, or it could be other things. The third is in the Intl Locale Info proposal that my colleague Frank Tang just presented at the beginning of the hour, and it returns a list of collations, as you can see here. So these are three examples of places where we return lists of things. So there are two questions we need to answer, which I want to decouple so that we can discuss them separately.

-SFC: The first question is, should we specify what the order is or should we leave the order up to be implementation dependent? If we specify the order, there's pros and cons to To this. The pros are sort of, they're sort of two types of advantages. One is a philosophical Advantage. The idea that we should constrain all behaviors. Unless there is a reason not to constrain it. But in general, like if there's behavior in that and implementation could have, then that should be a well, specified behavior. It's not good to have undefined behavior in the spec in general.
And then the practical advantage is that developers can depend. Do write code. That depends on any behavior that they can get from the JS standard library. And it's our duty to make sure that browser engines are consistent. Because if to browser engine to return the same data in a different order than that, could break assumptions that developers right into their code
+SFC: The first question is, should we specify what the order is, or should we leave the order implementation-dependent? If we specify the order, there are pros and cons to this. The pros are sort of two types of advantages. One is a philosophical advantage: the idea that we should constrain all behaviors unless there is a reason not to constrain them. In general, if there's a behavior that an implementation could have, then that should be a well-specified behavior. It's not good to have undefined behavior in the spec in general. And then the practical advantage is that developers can depend on it. Developers write code that depends on any behavior that they can get from the JS standard library, and it's our duty to make sure that browser engines are consistent, because if two browser engines return the same data in a different order, that could break assumptions that developers write into their code.

SFC: The disadvantages of specifying the order are, first, that developers should not (emphasis on should not: they can, but they should not) depend on the order, because in all of these cases the result is a set. The order is not supposed to matter, because it's supposed to be a set, and developers should treat it as a set, not as an ordered list. The second is that implementations should not need to waste CPU cycles sorting things into a particular order if that order is not really important or meaningful. It just contributes to global warming.
And the third is that there are unclear use cases around the ordering, and since we don't know exactly why we're doing the ordering, it's better to have those use cases be more clear. So these pros and cons all came out when we had this discussion in TC39 Task Group 2; these are pros and cons that various people on the team brought up, and I translated them here onto the slide.

-SFC: Okay. I want to go ahead and finish the rest of my presentation before going to the queue. The second question is what order should be specified. So if we decide not to specify an order, then we don't need to answer question 2. So we need to answer. in one before we answer question 2, but if we were to answer, yes, on question one. Yes, we should specify an order. Then question 2 is what order should be specified. And I want to lay out three options here. 
+SFC: Okay. I want to go ahead and finish the rest of my presentation before going to the queue. The second question is what order should be specified. If we decide not to specify an order, then we don't need to answer question 2, so we need to answer question 1 before we answer question 2. But if we were to answer yes on question 1, yes, we should specify an order, then question 2 is what order should be specified. And I want to lay out three options here.

SFC: One is to specify web reality's order for PluralRules, because PluralRules is currently the only API that is web reality: the only stable one that has this problem. The other two are still in stage 3, so those will be shipping very soon, but as of right now, from the examples on the first slide, PluralRules is the only one that's currently shipped. The advantage, if we were to do this, is that it's a web-compatible change. The disadvantage is that the current order does not make much sense. It's not human-friendly. It's not computer-friendly. It's just a fairly arbitrary order.

-SFC: Option two is to specify that we always return things in lexicographic sort order also known as alphabetical order. Alphabetical according to the less than / greater than operator In ecmascript, that is. So the advantages of specifying that we should return things always in lexicographic order that it's simple. Clear and easy to specify. It’s future proof to new entries and the set as well as two new sets that we have to return lexicographic order machine friendly in the general case. It enables things like binary search in the general case. It's been pointed out that this is not relevant for short lists, but it's you know, it's relevant for longer lists and the fourth, is it it avoids future debates in which sort order is better? If we were just return set if you to specify that we When we return sets, we always return them in lexicalographics order. Then we can kind of say, Okay, case closed, and then just always do this from now on without having to like revisit this question. Every time we have a list to return, the disadvantages are that in general non lexicographic. Sort orders are more human friendly and locking Us in to the lexicographic. President removes, our ability to choose a better sort. Her in cases where there is a better sort order. 
+SFC: Option two is to specify that we always return things in lexicographic sort order, also known as alphabetical order: alphabetical according to the less-than / greater-than operators in ECMAScript, that is. The advantages of specifying that we always return things in lexicographic order are that, first, it's simple, clear, and easy to specify. Second, it's future-proof, both to new entries in a set and to new sets that we have to return in the future. Third, lexicographic order is machine-friendly in the general case: it enables things like binary search. It's been pointed out that this is not relevant for short lists, but it is relevant for longer lists. And fourth, it avoids future debates about which sort order is better: if we specify that when we return sets, we always return them in lexicographic order, then we can say, okay, case closed, and always do this from now on without having to revisit this question every time we have a list to return. The disadvantages are that, in general, non-lexicographic sort orders are more human-friendly, and locking us in to the lexicographic precedent removes our ability to choose a better sort order in cases where there is one.

SFC: Option three is to just use a human-friendly sort order instead of lexicographic. In this case, the advantage is that we can use the sort order as a mechanism to drive use cases or to drive education. One potential use case that was brought up was: maybe we want to have a drop-down menu of the different fields in collations or plural rules, or whatever, and maybe, as specifiers, we actually want control over the order we put them in the drop-down menu. Maybe we actually want to say, well, there's one sort order that's actually better than the others for educational reasons, or for use-case reasons, because we expect that our users expect these values in a particular order, and we should use this opportunity to put them in that order. The second advantage is that we can still use lexicographic order when it does make the most sense. I think there's general consensus that in Intl enumeration, for example, lexicographic order is the best option, so we can still do that.
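As an editor's illustration (not part of the discussion): the "lexicographic order" described above is the order induced by ECMAScript's `<` operator on strings, which is also what `Array.prototype.sort` uses when no comparator is passed. A minimal sketch with made-up category values, including the binary-search benefit mentioned:

```javascript
// Default sort on strings compares UTF-16 code units, i.e. lexicographic order,
// the same order as the ECMAScript < operator.
const categories = ["other", "two", "few", "one"]; // illustrative values only

const lexicographic = [...categories].sort();
// "few" < "one" < "other" < "two"

// A deterministically sorted result is what makes binary search possible
// on longer lists:
function includesSorted(sorted, needle) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === needle) return true;
    if (sorted[mid] < needle) lo = mid + 1;
    else hi = mid - 1;
  }
  return false;
}

console.log(lexicographic);                        // ["few", "one", "other", "two"]
console.log(includesSorted(lexicographic, "two")); // true
```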
The disadvantage is that it gives inconsistent ordering in different parts of the specification: when you receive a list as a return value, to know the order that list is specified in, you need to go and open up the specification PDF and see what the order is. The other disadvantage is that every time we return a list, we need to have a philosophical debate about what the best order is, and we need to make that decision, in contrast with lexicographic order, where that decision is always made for us, for better or for worse.

-SFC: So that's my presentation. I'm hoping that we can have a discussion on both points on both question. One. Should we Define an order and then if we agree that we should Define an order, what order should we use And my hope is that, if we can, that we can get a clearer guidance from this group, on both of these points, so that we can bring this back to task group 2, and hopefully right up the answer in style guide. 
+SFC: So that's my presentation. I'm hoping that we can have a discussion on both points: question 1, should we define an order, and then, if we agree that we should define an order, what order should we use. My hope is that we can get clearer guidance from this group on both of these points, so that we can bring this back to Task Group 2 and hopefully write up the answer in a style guide.

-MM: So I feel strongly that an order should be specified. The spec should be more deterministic rather than less deterministic. The argument that programmers should not depend on an order, is a misleading, is a bad heuristic for the Language design, we're doing here. If implementers are free to, if the spec does not pin down an order, and there's any apparent consistency in the order that implementations accidentally happened to obey and webpages happen to depend on that accident. Then browser Game Theory leads you into a very bad game. We've seen this over and over.
Over again, in JavaScript early history and as a result of that, we had agreement to try to make the for-in iteration as deterministic as we could, even though we didn't particularly care about what the order is and we haven't been able to get that fully deterministic. But We made progress. We had a huge debate, historically about the iteration order of Maps and Sets. And we ultimately decided that that was going to be a deterministic function of the history of insertions and deletions. And once again, it didn't matter what deterministic function it is. It just mattered that it be deterministic when you got a deterministic spec then programs are portable, even if they did not carefully distinguish between what seemed to happen to work versus what we specified and testing benefits tremendously from determinism and determinism across platforms lets testing have that benefit also across platforms. Now, as to, what order it should be. I feel much less strongly about that, but I think lexicographic sorted order is the best choice there. It's simple to specify. It's simple to know what it is. And the complex policy issues about what order things should appear in the menu to be human friendly. Those can be a separate matter that libraries decide separately in deciding how to render a menu. And that's going to be the kind of never ending policy debate. That shouldn't be in the foundational mechanisms. That's all I had to say. +MM: So I feel strongly that an order should be specified. The spec should be more deterministic rather than less deterministic. The argument that programmers should not depend on an order, is a misleading, is a bad heuristic for the Language design, we're doing here. If implementers are free to, if the spec does not pin down an order, and there's any apparent consistency in the order that implementations accidentally happened to obey and webpages happen to depend on that accident. Then browser Game Theory leads you into a very bad game. 
We've seen this over and over again in JavaScript's early history, and as a result of that, we had agreement to try to make for-in iteration as deterministic as we could, even though we didn't particularly care what the order is, and we haven't been able to get that fully deterministic, but we made progress. We had a huge debate historically about the iteration order of Maps and Sets, and we ultimately decided that it was going to be a deterministic function of the history of insertions and deletions. Once again, it didn't matter what deterministic function it is; it just mattered that it be deterministic. When you've got a deterministic spec, programs are portable, even if they did not carefully distinguish between what seemed to happen to work versus what we specified. Testing benefits tremendously from determinism, and determinism across platforms lets testing have that benefit across platforms too. Now, as to what order it should be: I feel much less strongly about that, but I think lexicographic sorted order is the best choice there. It's simple to specify, and it's simple to know what it is. The complex policy issues about what order things should appear in a menu to be human-friendly can be a separate matter that libraries decide in choosing how to render a menu, and that's going to be the kind of never-ending policy debate that shouldn't be in the foundational mechanisms. That's all I had to say.

-MF: Michael. so, before I go into this topic, I just wanted to ask Shane to clarify the goals of this ordering. And in particular, if compatibility plays any part in those goals. 
+MF: So, before I go into this topic, I just wanted to ask Shane to clarify the goals of this ordering, and in particular, whether compatibility plays any part in those goals.

-SFC: So, the goal, the goals in specifying the order, I think the question was raised because, you know, because there are potential concerns over.
Programmers, writing code that assumes certain orders and you know, we are also concerned about the non determinism that Mark just described. So basically we want to establish a best practice. You know, right now in the spec as well as in these two proposals, I think Ansel enumeration might specify in order to but in at least, in, the preferable rules, as well as into local info, this is currently an unspecified behavior. And basically we want to establish a best practice. So this is more driven by us wanting to do the right things. And Is like a specific, you know, problem that we have right now. that makes sense. 
+SFC: So, the goals in specifying the order: I think the question was raised because there are potential concerns over programmers writing code that assumes certain orders, and we are also concerned about the non-determinism that Mark just described. So basically we want to establish a best practice. Right now, in the spec as well as in these two proposals (I think Intl enumeration might specify an order, but at least in Intl PluralRules, as well as Intl Locale Info), this is currently unspecified behavior, and we want to establish a best practice. So this is driven more by us wanting to do the right thing than by a specific problem that we have right now. Does that make sense?

MF: Okay, so the topic I wanted to get into is: you're looking for this deterministic behavior because programmers expect certain consistency in this data over time. And remember that the data is changing, evolving data. There will possibly be options added, or new locales added to it. This will cause reordering. The relative positioning of two elements is not the important part for stability there; the important part is that the indexes of a particular value are stable.
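MF's index-stability concern can be sketched in a few lines (an editor's illustration with hypothetical category data, not from any proposal):

```javascript
// If a caller persists an *index* into a sorted list, re-sorting after new
// data is added shifts the indexes and silently breaks the stored reference.
const v1 = ["few", "one", "other", "two"];  // lexicographic order today
const savedIndex = v1.indexOf("one");       // caller stores index 1 (e.g. in a cookie)

// A later data update adds a new category; lexicographic re-sort:
const v2 = [...v1, "many"].sort();          // ["few", "many", "one", "other", "two"]

console.log(v1[savedIndex]); // "one"
console.log(v2[savedIndex]); // "many"  (the stored index now points at a different value)
```

Keeping the relative order stable does not help here; only appending new entries at the end would preserve old indexes, which is the "unpalatable solution" MF describes next.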
So if we have something in lexicographic order today, like, say, 26 elements, a through z, and we want to add another ‘b’, we should not add it in the third position and push everything out; we should add it at the end. I guess that's a very unpalatable solution, but it is the solution that can actually get us the kind of stability you want. As an example: if somebody is choosing some sort of locale information, and they store in a cookie or session data an index into that locale list, then the browser gets updated and next week somebody's locale info changes. The website breaks; we can no longer use it because it has different behavior. I think that would violate the goals here, especially for the kinds of things that MM was talking about. So as the data evolves, keeping on returning it in lexicographic ordering doesn't actually solve the stability issue. I think it all boils down to using an ordered data structure to represent unordered data, so you have to impose some order on it.

YSV: Can you go to the first option of how we could do this? Okay. So am I reading this right, that currently all implementations implement the same order, but it isn’t the ideal one?

-SFC: That's that's my understanding And if the sort order is you can see it from the Slide. It's basically sort it appears to be sort first by length, and then alphabetically within a particular length, which is 
+SFC: That's my understanding. And the sort order, as you can see from the slide, appears to be: sort first by length, and then alphabetically within a particular length.

-YSV: And we also want to add sorts to these other two. 
+YSV: And we also want to add sorts to these other two.

-SFC: The other two are in currently stage three proposals. 
+SFC: The other two are currently stage 3 proposals.

-YSV: Oh, those are stage three.
We don't have implementations for those, right? +YSV: Oh, those are stage three. We don't have implementations for those, right? -SFC: I think they're flagged implementations. +SFC: I think they're flagged implementations. -YSV: Okay, I would be really curious to know if we've got alignment there already. Because if all of the implementations are aligned then effectively we don't have a web compatibility issue. I consider web compatibility breakage to be of higher urgency. So, that basically means that we've got time to think about this. I don’t have a personal preference. I do think that determinism is good and we’ve made changes like this before. I think Michael made an excellent point about web compatibility and lexicographic ordering. I do think that that will end up being a web reality issue and we will have to make a really ugly fix or something like that. But I believe this issue will apply to any ordering that we choose. So I think it's good if we think about it now. And beyond that. I don't really have any comments. I just think we've got time right now to discuss this, because it looks like we also don't have enough information to make a decision right now. +YSV: Okay, I would be really curious to know if we've got alignment there already. Because if all of the implementations are aligned then effectively we don't have a web compatibility issue. I consider web compatibility breakage to be of higher urgency. So, that basically means that we've got time to think about this. I don’t have a personal preference. I do think that determinism is good and we’ve made changes like this before. I think Michael made an excellent point about web compatibility and lexicographic ordering. I do think that that will end up being a web reality issue and we will have to make a really ugly fix or something like that. But I believe this issue will apply to any ordering that we choose. So I think it's good if we think about it now. And beyond that. 
I don't really have any comments. I just think we've got time right now to discuss this, because it looks like we also don't have enough information to make a decision right now.

-WH: Like YSV, I also believe that we do not have enough information presented to make an informed decision on this. The thing that's missing is descriptions of what things we’re ordering. Lexicographic ordering sounds tempting as a general rule, but I have no idea if this means that a list of days of the week will be presented alphabetically instead of in calendar order. Right now I don't know what we're trying to decide here. 
+WH: Like YSV, I also believe that we do not have enough information presented to make an informed decision on this. The thing that's missing is descriptions of what things we’re ordering. Lexicographic ordering sounds tempting as a general rule, but I have no idea if this means that a list of days of the week will be presented alphabetically instead of in calendar order. Right now I don't know what we're trying to decide here.

SFC: I'll take those two comments, and I'll reply to those two comments.

SFC: So, first, from Yulia, on the urgency: I see that we do have these two stage 3 proposals, and if we don't specify what the order should be before we ship these proposals, then basically whatever ordering ICU currently has is going to become the web-compatible, web reality order, and we're letting ICU make the decision for us, which is not necessarily a decision that we want to defer to it. It's ideally a decision that we should make proactively, rather than ending up with what we currently have with Intl PluralRules, where we have an order that no one really likes because it just happens to be what ICU returns and all the implementations use ICU. They won't necessarily be using ICU in the future; for example, they might be using ICU4X.
And in that case, this order is not necessarily going to be consistent automatically anymore. So that's what I see as the urgency; that is the response to Yulia.

-YSV: So we don't yet know if all the implementations align on the other two APIs, which are in stage 3. I think that the step there would be to determine whether or not they do and there's a good chance that they do. So that means that there's no web compatibility risk with the current implementations. For something to be a web compatibility risk or something to already be a broken web compatibility issue. That first off makes the design space smaller and their freedom to choose what to do much more difficult, but it also means that we are risking breaking websites for various people, which has a bigger knock-on effect than deciding to take our time. I don't think that if a stage three proposal gets pushed back from being released is as much of a risk or as costly as something like web compatibility might be. 
+YSV: So we don't yet know if all the implementations align on the other two APIs, which are in stage 3. I think the step there would be to determine whether or not they do, and there's a good chance that they do. That would mean that there's no web compatibility risk with the current implementations. For something to be a web compatibility risk, or to already be a broken web compatibility issue, first off makes the design space smaller and the freedom to choose what to do much more difficult, but it also means that we are risking breaking websites for various people, which has a bigger knock-on effect than deciding to take our time. I don't think that a stage 3 proposal getting pushed back from being released is as much of a risk, or as costly, as something like web compatibility might be.

SFC: Yes. So basically, yes, I see: option one is, if we were to take the current web reality sort order and then specify it for PluralRules only, then we can defer the decision on the sort order for the other two stage 3 proposals. That's the idea with option one; it sort of kicks the can down the road a bit. The road that we're kicking the can down is not a very long road, and we will have to answer this question for the other two proposals. But it, you know, decouples these two questions: it takes PluralRules out of the topic and eliminates web compatibility as a factor, if we were to just specify PluralRules how it currently is. As for WH’s question: these three things that we're looking at here are the three concrete examples where this currently matters. So these are the only three examples we're looking at, the only three examples we currently have, and we're looking for a solution for these three examples that could guide us when future such examples appear. But these are the three we know we currently have.

- WH: For plural rules at least looking at the example you have on the slide, it seems to go from the fewest to the most.

-SFC: Except it did not actually because traditionally the CLD are ordering, it will be one too. Few, other few is bigger than 2 is smaller than other. So the this ordering that we currently have is not as good graphic and it's not the semantic ordering either because huge should be ashamed. After two and before other. 
+SFC: Except it does not, actually, because the traditional CLDR ordering would be one, two, few, other. Few is bigger than two and smaller than other. So this ordering that we currently have is not lexicographic, and it's not the semantic ordering either, because few should be placed after two and before other.

-WH: Ah, so ordering by size should be `one`, `two`, `few`, `other`.
In that case `few`, `one`, `two`, `other` is what you’re proposing or is it web reality? 
+WH: Ah, so ordering by size should be `one`, `two`, `few`, `other`. In that case, is `few`, `one`, `two`, `other` what you’re proposing, or is it web reality?

SFC: This is the web reality order that is currently in browsers.

-WH: Ordering in terms of size, smallest to largest, seems to be the best, if we can do it. 
+WH: Ordering in terms of size, smallest to largest, seems to be the best, if we can do it.

-SFC: That’s correct. And for plural rules, the debate basically comes down to do. We stick with this ordering, that no one really likes, because it's web compatible, or do we do ordering and move on? Or do we apply a human friendly? Ordering in which case, we would most likely adopt that Unicode technical standard, 35 ordering, which is ["one" "two" "few" "other"]. 
+SFC: That’s correct. And for plural rules, the debate basically comes down to: do we stick with this ordering that no one really likes, because it's web compatible, and move on, or do we apply a human-friendly ordering, in which case we would most likely adopt the Unicode Technical Standard 35 ordering, which is ["one", "two", "few", "other"].

-WH: Yeah, for this one, I would prefer ordering them by size. Lexicographical order would be the worst. It's like trying to alphabetize numbers spelled out as words. The preferred one would be ordering by size “one” to “other”. Or web reality if we can’t do it by size. 
+WH: Yeah, for this one, I would prefer ordering them by size. Lexicographical order would be the worst. It's like trying to alphabetize numbers spelled out as words. The preferred one would be ordering by size, “one” to “other”, or web reality if we can’t do it by size.

-MLS: So it seems to me which I think you guys have somewhat made cases. There's a semantic order. That makes the most sense and that should be used.
Well, unfortunately, for this plural categories, I guess there's a web reality that may have more precedence than changing to some kind of semantic order for other list values. If there no like order of it's it's it's obvious more appropriate. Then think there should be some kind of default order. Maybe that's lexical. I do disagree with MF. That index I don't think is preservable as items are added in their order makes sense to put them earlier in a semantic list or if they're it'll default lexical. It also doesn't make sense to put things in a fixed index as it were for two reasons, one, they may not be valid for a particular return from a call and two if lexical or something like that as default, it's also hard, you know, it's impossible to maintain an index and when new values are added. 
+MLS: So it seems to me, and I think you guys have somewhat made the case, that there's a semantic order that makes the most sense, and that should be used. Unfortunately, for these plural categories, I guess there's a web reality that may take precedence over changing to some kind of semantic order. For other list values, if there is no obviously more appropriate order, then I think there should be some kind of default order; maybe that's lexical. I do disagree with MF: I don't think an index is preservable as items are added, whether their order makes sense to put them earlier in a semantic list or in the default lexical order. It also doesn't make sense to put things at a fixed index, as it were, for two reasons: one, they may not be valid for a particular return from a call, and two, if lexical or something like that is the default, it's impossible to maintain an index when new values are added.

-FYT: Yeah. So SFC say the only did three, but I just want to make sure there are actual additional one. Week info.
We can't thing in the week info since we're the approved like couple minutes ago, will be an array of integer indicating the weekday that also have a request of our order. For example, in US, Seven represents Sunday, which is the first day of the week. Should that be seven-six or six-seven to indicate the weekend? That's a question. So as so and I think that it just want to point out that we can info which link also have an array. Okay.
+FYT: Yeah. So SFC only showed three, but I just want to make sure about an actual additional one: weekInfo. The weekend entry in the weekInfo, which we approved a couple of minutes ago, will be an array of integers indicating the weekdays, and that also raises a question of order. For example, in the US, seven represents Sunday, which is the first day of the week. Should that be seven-six or six-seven to indicate the weekend? That's a question. So I just want to point out that weekInfo also has an array.

RPR: And SFC, you've got about a minute left. Do you want to wrap up? There's nothing on the queue? Yeah.

-SFC: Okay. So thanks for the feedback. So basically what I heard is that, you know, in general we like the idea of specifying the order in terms of what order to specify that this questions still seems to be, you know. Then, it doesn't necessarily seem to be consensus here. I heard, Mark, Miller say well, we should just do looks good graphics. So we can always know what the order is going to be and just move on that. That's, you know, I think I think there are sort of camps ends. That's one camp. And then the other Camp is what we should choose the semantic order when there is a semantic order. Like, in the of pluralrules. There may be a semantic order, and we should go ahead and choose that one. And that's the sort of should use and if there is no semantic order then maybe we could do lexicographic or something else and that's sort of the other Camp.
So basically, favor the human readability versus versus just favor, you know, something that's algorithmically pure. So those are if I had to categorize the two camps that those are also the two camps that came up when we discuss this in TG2. It seems like those are also the two camps here. It does seem that there is definitely consensus on the first question. that yes, we should absolutely specify the order then I'll go ahead and take this feedback back to TG2 and thank you for the discussion here. And yeah, I hope to continue this discussion further. so, No one else in the queue. Thank you. I'll turn it back over to Rob.
+SFC: Okay, so thanks for the feedback. Basically what I heard is that, in general, we like the idea of specifying the order; in terms of which order to specify, there doesn't necessarily seem to be consensus here. I heard Mark Miller say, well, we should just do lexicographic, so we can always know what the order is going to be, and just move on. That's one camp. And the other camp is that we should choose the semantic order when there is a semantic order - like, in the case of PluralRules, there may be a semantic order, and we should go ahead and choose that one - and if there is no semantic order, then maybe we could do lexicographic or something else. So basically: favor human readability, versus favor something that's algorithmically pure. If I had to categorize the two camps, those are also the two camps that came up when we discussed this in TG2, and it seems like those are also the two camps here. It does seem that there is definitely consensus on the first question,
that yes, we should absolutely specify the order then I'll go ahead and take this feedback back to TG2 and thank you for the discussion here. And yeah, I hope to continue this discussion further. so, No one else in the queue. Thank you. I'll turn it back over to Rob. ### Conclusion/Resolution -* - +- ## Tightening host restrictions to improve testing + Presenter: Jordan Harband (JHD) - [example](https://github.com/tc39/test262/pull/3054#issuecomment-882741949) -JHD: [intro lost] you know, and there's probably a lot that I'm unaware of and the, the first category new globals is fine. The second category, new like prototype things is quote, fine. In other words. It's not a real issue. Neither of these categories really is a problem for users or for us until we go to add new things. That conflicted have the same name, and that's just part of the process, right? We that's that is acceptable the category is the I'm gonna give a concrete example, there's more than one, but this is the only one I've preserved in my brain, the error caused the place on which this meeting I believe, is going for stage for the specification for it says that `cause` is an own property on Error instances. It's not supposed to be present on `Error.prototype`. This was an intentional part of the design (the reasons for that aren't worth debating here). However, test262 has a completely reasonable policy to only include tests for things that the specification describes - things that it requires, prohibits, or permits explicitly. So test262 has a test that Error instances have an own `cause` property when expected - great! It does not, however, have a test that there let’s say `Object.prototype` does not have a `cause` property or `Error.prototype` does not have a `cause` property. 
This is unfortunate in this particular engine-specific case, but in the general case, that's a reasonable position for test262 to have, which is that it cannot test for the infinity of things that aren't in the spec - that's just not sustainable. As a result, every engine that I'm aware of that has shipped `.cause`, `cause` is an own property on instances as it’s supposed to, whether because the specification says it or test262 enforces it who knows, but some combination of those has created the correct behavior. That's great! However, Chrome 93 and node 16.9 and 16.10 shipped with `cause` property on `Error.prototype` as well, which is not supposed to be there. It's technically allowed by the spec, but that was not the design of the feature or the intention and it's not what the other engines have chosen to do. And so, of course, that obvious bug was fixed in Chrome 94 and node 16.11. Test262 however, based on their reasonable policy, cannot add a regression test for this obvious, actual bug that happened. This has happened, many times in the past with similar cases around the exact placement of properties. Before I continue to mitigations, I wanted to make sure that everyone's on the same page about understanding the problem. I don't see any questions on the queue - now would be a great time to stick yourself on it, if there's anything to clarify, or if I haven't made the problem clear.
+JHD: [intro lost] you know, and there's probably a lot that I'm unaware of. The first category, new globals, is fine. The second category, new prototype things, is quote-unquote fine; in other words, it's not a real issue. Neither of these categories really is a problem for users or for us, until we go to add new things that conflict or have the same name, and that's just part of the process, right?
That is acceptable. The third category - I'm gonna give a concrete example; there's more than one, but this is the only one I've preserved in my brain - is the Error `cause` proposal, which at this meeting, I believe, is going for stage 4. The specification for it says that `cause` is an own property on Error instances. It's not supposed to be present on `Error.prototype`. This was an intentional part of the design (the reasons for that aren't worth debating here). However, test262 has a completely reasonable policy to only include tests for things that the specification describes - things that it requires, prohibits, or permits explicitly. So test262 has a test that Error instances have an own `cause` property when expected - great! It does not, however, have a test that, let’s say, `Object.prototype` does not have a `cause` property, or that `Error.prototype` does not have a `cause` property. This is unfortunate in this particular engine-specific case, but in the general case, that's a reasonable position for test262 to have, which is that it cannot test for the infinity of things that aren't in the spec - that's just not sustainable. As a result, in every engine that I'm aware of that has shipped `.cause`, `cause` is an own property on instances as it’s supposed to be - whether because the specification says it or test262 enforces it, who knows, but some combination of those has created the correct behavior. That's great! However, Chrome 93 and node 16.9 and 16.10 shipped with a `cause` property on `Error.prototype` as well, which is not supposed to be there. It's technically allowed by the spec, but that was not the design of the feature or the intention, and it's not what the other engines have chosen to do. And so, of course, that obvious bug was fixed in Chrome 94 and node 16.11. Test262, however, based on its reasonable policy, cannot add a regression test for this obvious, actual bug that happened.
This has happened many times in the past with similar cases around the exact placement of properties. Before I continue to mitigations, I wanted to make sure that everyone's on the same page about understanding the problem. I don't see any questions on the queue - now would be a great time to stick yourself on it, if there's anything to clarify, or if I haven't made the problem clear.

-LEO: JHD, just won't think to appreciate you, explain that. I don't consider that is actually a policy, but I consider that like following ecmascript rules. Like the policy is just ECMAScript norms. And where the norm say the language is extensible - not as an object, but as the language as the API and syntax, so we cannot test 262 cannot create anything that is crossing that boundary. so we try our best to avoid. This is this has been like starkly setting test 262. otherwise like, everything seems very reasonable and you be, I talked about this with Rick Waldron and as like we have been involved with Test262 for a long while we support we support, this support changing.
+LEO: JHD, just one thing: I appreciate you explaining that. I don't consider that to actually be a policy; I consider it following the ECMAScript rules. The policy is just the ECMAScript norms, and the norms say the language is extensible - not just as objects, but as a language, in API and in syntax - so test262 cannot create anything that crosses that boundary, and we try our best to avoid it; this has been set starkly in test262. Otherwise, everything seems very reasonable. I talked about this with Rick Waldron, and, as people who have been involved with test262 for a long while, we support this change.

JHD: Thank you. I haven't discussed the change just yet, but I do appreciate that. We've all spoken and test262’s maintainers are on board with it. Thank you, Leo. I see MM on the queue as well.
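The `cause` placement JHD describes can be made concrete with a small check. This is an illustrative sketch only, not an actual test262 test - it shows the intended layout from the proposal (own property on instances, absent from `Error.prototype`), which is exactly what the Chrome 93 / Node 16.9-16.10 bug violated:

```javascript
// `cause` should be an own property of an Error instance constructed with
// an options bag, and should NOT exist on Error.prototype.
const err = new Error("request failed", { cause: "ECONNRESET" });

const onInstance = Object.prototype.hasOwnProperty.call(err, "cause");
const onProto = Object.prototype.hasOwnProperty.call(Error.prototype, "cause");

console.log(onInstance); // true
console.log(onProto);    // false in engines without the bug
console.log(err.cause);  // "ECONNRESET"
```

This is precisely the kind of "property must not be here" assertion that test262's policy could not express, since the spec technically permits extra host properties.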
-MM: Just wanted to mention that this relates closely to a concern that SES has, which is, which is that additional property that are not in the spec. I think we should tighten up the spec to say that if there are any such additional properties. They must deletable, not just configurable, but if you if if you delete them that they actually get deleted. because then a white listing mechanism such as the SES initialisation when seeing a property it doesn't recognize can remove it if it SES sees a property it doesn't recognize and it cannot remove it, and the property does something dangerous that says doesn't account for then SES simply cannot enter a secure State and and that's that's true for several of the places where the spec is. Just too loose with implemented with implementation freedom.
+MM: Just wanted to mention that this relates closely to a concern that SES has, which is additional properties that are not in the spec. I think we should tighten up the spec to say that if there are any such additional properties, they must be deletable - not just configurable, but if you delete them, they actually get deleted. Because then a whitelisting mechanism, such as the SES initialisation, when seeing a property it doesn't recognize, can remove it. If SES sees a property it doesn't recognize and cannot remove it, and the property does something dangerous that SES doesn't account for, then SES simply cannot enter a secure state. And that's true for several of the places where the spec is just too loose with implementation freedom.
And I think that a way to approach this from a compatibility perspective is via the invariants that you and I have talked about: where there is implementation freedom that, in fact, zero implementations make use of, the absence of making use of it becomes an observed invariant that we can now discuss whether we want to enforce.

JHD: So I completely happen to agree with you in this case, MM, and you're right: it's the exact same category as what I'm talking about - deleteability is a host freedom that hosts don't actually need, and that we would like to explicitly call out. I do want to make sure that I don't attach the two things together, though.

-MM: Yeah, I agree. I agree. I understand this distinct. I just wanted to mention it because there's overlap.
+MM: Yeah, I agree. I understand this is distinct. I just wanted to mention it because there's overlap.

-JHD: Absolutely. Yeah, so what you're talking about: unless everyone's, like super, on board with it, it would be a different kind of topic. But that said, it has like 90 percent overlap, and I completely agree and support that as well.
+JHD: Absolutely. Yeah, so what you're talking about: unless everyone's super on board with it, it would be a different kind of topic. But that said, it has like 90 percent overlap, and I completely agree with and support that as well.

-LEO: Yeah, mark. About this. I think like if we have something that is just for as like configurability, if we have, if we have this on anything, there is that be ordinary as ordinary objects. It should be like fine as a generic approach. I am afraid if we try to address everything like deletability. We might get like two specific to in the we weeds in order like Like we have two steps here. I think like talking about extensions and everything. I like how we actually restrict that. There is a generic approach.
I think what JHD was/is proposing and then like going to this specific Parts where deletability is going to be. another case. I fully understand and comprehend that I just want to, to make sure I like this, find some light in the end. At least like, step by step.
+LEO: Yeah, MM, about this: I think if we have something that is just about configurability, applying to anything that behaves as an ordinary object, it should be fine as a generic approach. I am afraid that if we try to address everything, like deletability, we might get too specific, too far into the weeds. We have two steps here: talking about extensions and how we actually restrict them, which is the generic approach I think JHD was/is proposing, and then going to the specific parts, where deletability is going to be another case. I fully understand and comprehend that; I just want to make sure we find some light at the end, at least step by step.

JHD: Thank you, MM and LEO. Okay, so it sounds like everyone understands the problem I've laid out. So now I want to go to my suggested mitigation. There's a number of options here.
One would simply be: let's make some exceptions, you know, and have some test262 tests - but that's sort of just “allow these test262 tests, even though they're technically something that the implementations are allowed to deviate from”. Another alternative, which both permits tests and also matches web reality, and what implementations philosophically and spiritually already do - the way I phrased it is: "Any property on a given object mentioned in the specification must ONLY appear in the locations specified on that object or its prototype chain."

JHD: So `cause` - that would only be allowed on Error instances as an own property and not anywhere else on its prototype chain; but, because `cause` is not specified on anything but Errors, someone could stick a `cause` property on an Array - that's not governed by this. Similarly, the `message` property must be an own property on Error instances and also on `Error.prototype`, but nowhere else in that prototype chain - but it could still be on any other random object. Another consequence of this is that `call`, `apply`, and `bind` must only be in one location for any function: `Function.prototype`. So this is something that's trivial to test. There's an npm library that I have in mind when I'm thinking about this, but essentially you just recurse up the prototype chain and grab, for a given property name, all the locations. And then you assert what those locations are, and I believe it matches what implementations already do. If an implementation actually violates it and that violation isn't an obvious bug that they're willing to fix, then we should definitely come back and reevaluate and discuss it, but I'm relatively confident that that won't happen.
So the intention here is that it will reduce implementation deviation/differentiation, and will carve out a kind of safe place which doesn't impinge on implementations' - in particular, the web's - ability to innovate, but does prevent correctness bugs that nobody wants to happen anyway.

-MM: So I support this proposal. You just want to mention some interesting cases to be aware of as this goes forward our error stack proposal. We're proposing that stack property exists on Error.prototype that combined with this proposal, would have it be only on Error.prototype which I which is I think the correct. [function??].. that currently implementations many implementations do have stack as a known property on error instances in particular V8, so that would so this topic definitely focuses that as issue to come to agreement on moving Stacks forward. The other one is Function.prototype.(callee?) and Function.prototype.caller. I don't remember, Even if both of those are still around, but they were around 2.2. make sure that sloppy functions are poisoned and neither sloppy functions nor built-ins door. Strict function should have their own properties that and I don't know what the current implementation status is on that either. I think when I last looked some Implementations had sloppy functions with their own arguments and caller Properties.
+MM: So I support this proposal. I just want to mention some interesting cases to be aware of as this goes forward. Our error stack proposal: we're proposing that a stack property exists on Error.prototype; combined with this proposal, that would have it be only on Error.prototype, which I think is the correct [function??]. Currently many implementations do have stack as an own property on error instances, in particular V8, so this topic definitely focuses that as an issue to come to agreement on for moving stacks forward. The other one is Function.prototype.(callee?)
and Function.prototype.caller. I don't remember if both of those are still around, but they were around. [We need?] to make sure that sloppy functions are poisoned, and that neither sloppy functions nor built-ins nor strict functions have those as own properties. I don't know what the current implementation status is on that either; I think when I last looked, some implementations had sloppy functions with their own arguments and caller properties.

SYG: So I agree with the motivation that we make test 262 more useful here and I propose we do that directly with test 262, rather than changing the spec to limit hosts from adding additional properties of the same name in this way.

@@ -322,9 +329,9 @@ JHD: I mean. If there is no, if there's nothing concrete in mind… would you ha

SYG: Not in the short term. Currently. That's not a compelling argument to need to waive that right now

-JHD: I guess to me, the right that hosts have here is not a right that they should have right? Like, that then we did obviously the ability to like add new globals is important.
+JHD: I guess to me, the right that hosts have here is not a right that they should have, right? Though obviously the ability to add new globals is important.

-SYG: But like, no just add globals, but also properties to other objects. I think I agree that it's probably rarer. That should that that we want to add a property to an object. That is the same name as right also,
+SYG: Not just add globals, but also add properties to other objects. Though I agree that it's probably rarer that we'd want to add a property to an object with the same name as an existing spec property.

JHD: I suppose, but that's the subset that I'm really focused on, because if you want to add a `yogurt` property to all instances, okay, fine, go nuts, right? I personally don't think that's a good idea anyway, but I think the spec
defines “what instances are”, and it would be weird if there were instances that had more stuff than that. It's fine to allow that possibility because it's good to account for unknowns. But I just can't conceive of any case where shadowing a name used in the specification is something anybody would want at any point, so it seems strange to me for hosts to try to reserve a right that nobody can think of a reason they want to exercise.

@@ -332,27 +339,27 @@ SYG: Mark raised the the concrete case of Stack, right?

JHD: And I think that in the stack proposal case (which I wasn't thinking about for this topic) - it's already been stated by the Chrome team that the Error stacks proposal has to describe what already happens. So, any change that Chrome is unwilling to make, this stack proposal specifically has to allow for already. If that means, let’s say, that Chrome wants `stack` to be an own property on instances, then I don't see how we would be able to get the proposal to a point where it conflicts with that in the first place.

-SYG: I can see that forward where we keep writing like web browser, specific language, where it's like for web browsers. It must remain an own property for everything else. Do this other than the Prototype. I'm not usually a fan of that.
+SYG: I can see a future where we keep writing web-browser-specific language, like “for web browsers it must remain an own property; for everything else, do this other thing on the prototype”. I'm not usually a fan of that.
I understand that sometimes it's necessary, but I also don't see a need to have this carve-out here. You want to have this carve-out because it feels weird to you - that I disagree with as a compelling reason. What I agree with as a compelling reason is expanding the test coverage, and that's why I would like to solve for that problem directly, right?

-JHD: You're correct that the presented motivation is something that your alternative solves and the reason I presented that motivation is because that's the objective one, that this has caused actual bugs, and we need a way to mitigate that problem. The subjective thing that I was not headlining the topic with is “I think it's weird”, and I think it's a bad thing to create this sort of deviation and that I haven't actually seen anyone use it to good effect.
+JHD: You're correct that the presented motivation is something that your alternative solves, and the reason I presented that motivation is that it's the objective one: this has caused actual bugs, and we need a way to mitigate that problem. The subjective thing that I was not headlining the topic with is “I think it's weird”: I think it's a bad thing to create this sort of deviation, and I haven't actually seen anyone use it to good effect.
I then separately think that if an implementation has a persuasive use case, then the spec should of course explicitly allow it; and if the implementation’s use case is something that the rest of the committee does not want to allow for some reason, then that's a discussion we should be having - it shouldn't just be happening in the isolation of a single browser team’s engineering department.

MM: I have a quick clarifying question for Shu. I don't understand what it means to test for it if it's not a normative thing in the spec. If the test fails, how is the test failure indicated, given that it doesn't indicate non-conformance to the spec?

-SYG: A concrete suggestion that floated was something like assume no extensions. It would be something the test render option to assuming the host does not add any extensions at all with these tests fail. It was, is that strictly speaking still conformant, but the outcome is. I mean, the, the signal you get from tests is not just conformant non-conforming, but How likely is this to be a bug? And you could have something that is technically conformant but still points to the likelihood of a bug. Ugh, and these are test in that category.
+SYG: A concrete suggestion that was floated was something like an “assume no extensions” test runner option: assuming the host does not add any extensions at all, these tests could fail. Is that strictly speaking still conformant? Yes, but the signal you get from tests is not just conformant/non-conformant, but “how likely is this to be a bug?”. You could have something that is technically conformant but still points to the likelihood of a bug, and these are tests in that category.

MM: Are there test262 tests that have such signals right now?

-SYG: Weak ref stuff [??].
+SYG: Weak ref stuff [??].

-MM: Ah, okay. Good point.
Okay, that's all I had. -LEO: nice to have my grains of salt pork [??]. We Crepes happens there. So. II think the, the way we consider these tests to be useful. I have a okay. I don't have any. I'm not in position to give technical rejection to just create a policy test 262. I am personally against it because I really am reading favor. I've have been actual Norm. Saying that what we can extend it in making test follow what actually are written as Norms in the spec text. That helps a lot indicating like a back door and what you do. When the test fails, I think we WeakRefs are like a very specific scenario. We're like, things are optional such as Annex B is also optional but planning test with Test 262 in the middle of the sweet 16 [??]. I kind of like you can separate with refs s. you can separate any speed, you can separate until they are kind of like optional but if you if we do have these tests I see them as Blended in I think one thing to try to mitigate what Shu say. we could try to actually have some tasks, like they're a of a massive for the size of Ecmascript, but they are not hard to write it and we can do have some of these tests and try to connect to see if anything is actually extending what we are saying like well the objects in JavaScript, maybe we can try to do some effort and Get some tests ready? Just to see what happened. what would happened, but I'm still in favor of having some normative Direction, but what to do with these tests, tests, I really I would not say it's a good thing to have test Blended in test 262, the mirror of test-262 you that you cannot bring a separate maybe like by the file name that is just like following the test 262 policy, exclusive policy. And this would be like a new thing for test262. you because like, for nxp, have optional so it's still like The spec seeing this is optional for Intl same thing for each of the same things for WeakRefs. 
Where you have like plenty of implementation-dependent, but this daily tasks, you will be a policy, like, we're just going to verify something for you. This is all to Latino. Not a test. This is a verification. If something happens, it does just excuse so far. Just say just can say, They like this fails or this passed. I really want to help mitigating these concerns because I really prefer have normative text. +LEO: nice to have my grains of salt pork [??]. We Crepes happens there. So. II think the, the way we consider these tests to be useful. I have a okay. I don't have any. I'm not in position to give technical rejection to just create a policy test 262. I am personally against it because I really am reading favor. I've have been actual Norm. Saying that what we can extend it in making test follow what actually are written as Norms in the spec text. That helps a lot indicating like a back door and what you do. When the test fails, I think we WeakRefs are like a very specific scenario. We're like, things are optional such as Annex B is also optional but planning test with Test 262 in the middle of the sweet 16 [??]. I kind of like you can separate with refs s. you can separate any speed, you can separate until they are kind of like optional but if you if we do have these tests I see them as Blended in I think one thing to try to mitigate what Shu say. we could try to actually have some tasks, like they're a of a massive for the size of Ecmascript, but they are not hard to write it and we can do have some of these tests and try to connect to see if anything is actually extending what we are saying like well the objects in JavaScript, maybe we can try to do some effort and Get some tests ready? Just to see what happened. 
what would happened, but I'm still in favor of having some normative Direction, but what to do with these tests, tests, I really I would not say it's a good thing to have test Blended in test 262, the mirror of test-262 you that you cannot bring a separate maybe like by the file name that is just like following the test 262 policy, exclusive policy. And this would be like a new thing for test262. you because like, for nxp, have optional so it's still like The spec seeing this is optional for Intl same thing for each of the same things for WeakRefs. Where you have like plenty of implementation-dependent, but this daily tasks, you will be a policy, like, we're just going to verify something for you. This is all to Latino. Not a test. This is a verification. If something happens, it does just excuse so far. Just say just can say, They like this fails or this passed. I really want to help mitigating these concerns because I really prefer have normative text. LEO: Okay, so I'm trying to pursue to give a quick feedback. I think, what do you say here also represents a lot of struggle like, historically from test 262, like, definitely, you're not the first one. I also got feedback from any other delegates like went where actually restrict these. This is very often feedback like for test 262, very fine. Or objects is often seen when we also rename something or remove something like clean up some atomic weight in weight in a week. Things like that. Also like generate this kind of feedback, like people wanted to like, can we verify this thing doesn't exist in ecmascript, in the implementation anymore and we cannot because 262 allows this extension, but we can map all the objects that we have in ecmascript and we can just do some test to see. Like these are the Some things we expect to see in the subject and see what trails or not and we can have method and shoulders last year. Maybe this is just like something that we can overcome and work it out. 
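The placement check being discussed - JHD's "recurse up the prototype chain and grab, for a given property name, all the locations" - can be sketched as follows. `ownPropertyLocations` is a hypothetical helper for illustration, not an existing test262 utility or the npm library JHD had in mind:

```javascript
// Collect every object on `obj`'s prototype chain (including obj itself)
// that has `name` as an own property.
function ownPropertyLocations(obj, name) {
  const locations = [];
  for (let o = obj; o !== null; o = Object.getPrototypeOf(o)) {
    if (Object.prototype.hasOwnProperty.call(o, name)) {
      locations.push(o);
    }
  }
  return locations;
}

// Under the proposed rule, `call` may appear in exactly one place on a
// function's prototype chain: Function.prototype.
const callSpots = ownPropertyLocations(function f() {}, "call");
console.log(callSpots.length === 1 && callSpots[0] === Function.prototype); // true

// `message` is allowed both as an own property of an Error instance (when
// a message argument is given) and on Error.prototype, but nowhere else.
const messageSpots = ownPropertyLocations(new Error("boom"), "message");
console.log(messageSpots.length); // 2
```

A test built this way asserts the exact set of locations, which is what would have caught the `Error.prototype.cause` regression automatically.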
JHD: Okay, so I guess I could ask for a temperature check on the queue or something, but I think that before that: Shu, is this something that Chrome is essentially blocking consensus on for having it be a normative requirement? Or could you be swayed by the feeling of the room? -SYG: Unlikely. I think I am swayed by your utility argument very much, nicely played. by the subjective part if If you know, it's your subjective versus my subjective part. I think how I feels really strongly that we should keep with the status quo. What is afforded to hosts. +SYG: Unlikely. I think I am swayed by your utility argument very much, nicely played, but on the subjective part, if, you know, it's your subjective versus my subjective part, I feel really strongly that we should keep the status quo of what is afforded to hosts. JHD: Okay, I guess it's worth asking real quick, before we go to the next queue item: are there any other implementations that have a strong opinion here in either direction? It would be great to get that on the record, because if Chrome is the only one with this opinion and they change their mind, that would be useful to know. Feel free to stick yourself on the queue if you represent an implementation. @@ -360,7 +367,7 @@ YSV: I can just speak on Mozilla. Yes, we reviewed this and I don't see any imme JHD: So Shu, then, if we go forward with this, this is up to the test262 maintainers to some degree; I don't know who makes that decision. But let's say we add the tests in some special “allowed to fail” category, and it turns out that no implementations violate this, modulo bugs. Is that still a freedom the Chrome team is likely to want to hold on to, even though no one's using it and has no idea of why they would want to? -SYG: I don't think it's fair to say that. There's no idea. +SYG: I don't think it's fair to say that there's no idea.
JHD: So if there is some idea, is that something that you could share, if not now then at some point in the future? @@ -378,76 +385,80 @@ JHD: OK, just to make sure I understand you correctly: regardless of whether it' SYG: I don't have any concrete use cases beyond errors, and probably around stacks, like, -JHD: right when the stacks questions get resolved, right? +JHD: right when the stacks questions get resolved, right? SYG: right. I understand. But that question is resolved for the things under the purview of 262. I don't see any other objects where we may want to naturally extend for, like, product reasons. Things like errors and stack traces are what I'm mostly aware of, and I am uncomfortable agreeing to this fairly scoped prohibition, because I don't think it will solve any actual problems. And to kind of hammer home what I was saying earlier about test262 being the one that catches bugs: this is why I also have somewhat of an issue with test262 being more on the pedantic side of what a test ought to test for. At the end of the day, the target consumers of test262 are mostly implementers. We use the tests not only for interop, which is by far the greatest value, but also to suss out bugs in our own engines, beyond our own fuzzing and our own test writing, and I feel strongly that test262 should remain maximally useful in that regard for sussing out bugs. So yeah. YSV: I think there are a couple of reasons why hosts might want to preserve this, and they go back to error and some other related objects, largely to be able to compete with each other. Now we have had issues with the error objects, and now we do want to standardize it, but historically that has been an area where we've been able to differentiate from one another in our implementations and give users more developer-friendly tooling.
Through extending the built-ins in a way that makes them better for programmers, as they're trying to figure out the code. So that's been a historical reason why we had that. I don't know. -JHD: Just to clarify. I'm sorry. Do you have any examples beyond the stack property itself and its contents that like, I've where you yeah, that's what, come on. +JHD: Just to clarify, I'm sorry: do you have any examples beyond the stack property itself and its contents? -YSV: I think you also brought it up. Also the function to Source the historical thing. I don't know if it's still there, but we had a couple of things. +YSV: I think you also brought it up. Also the function `toSource`, the historical thing. I don't know if it's still there, but we had a couple of things. JHD: To be clear, the prohibition I suggested would not have forbidden `toSource` and would not in the future. The infinite set of names we haven't used would still always remain free for hosts to innovate with. YSV: Okay, consider my comment retracted. Thank you. -BSH: Okay, so what I wanted to point out, is it so far. I've seen basically one data point where we had it actual bugs happened because accidentally, they put the property the wrong place. They put this cause property on the Prototype and because of the way errors objects working that caused actual bugs in practice because we've been look like a thing that didn't have a cause did actually have a cause property because the one On the Prototype, right? That's basically what happened. This is kind of a weird situation. It's one time that this happened. In general, if you happen to have Shadow things on the Prototype, it probably wouldn't matter, because there's always going to be the one that's on the instance. That always Shadows it. So you've never even see it. So, I guess what I'm getting at is, I think in this isn't really a very likely source of bugs. We've only seen it one time.
So kind of then trying to say, oh, we just want a blanket say that you never ever allow these shadowing. Things happen to happen. And as part of the spec, it's I think just sort of an early optimization problem. I understand the motivation, but if you think about it, you're actually calling in a lot of requirements in because if that's the requirement then then you end up adding tests for this test262 for all of the properties that are defined on everything in the spec and it might be easy do that. But it also has a lot of execution time, if you're going to do it correctly. Why not just say, oh, know that for this one case for errors, this is a problem. Explicitly saying the spec you're not allowed to have cause property on the Prototype because it goes to this problem and then just leave it at that until we see that there's a general pattern of this is Constable health problems. +BSH: Okay, so what I wanted to point out is that so far I've seen basically one data point where we had actual bugs happen because, accidentally, they put the property in the wrong place. They put this cause property on the prototype, and because of the way error objects work, that caused actual bugs in practice, because a thing that didn't have a cause appeared to actually have a cause property, because of the one on the prototype, right? That's basically what happened. This is kind of a weird situation. It's one time that this happened. In general, if you happen to have shadowed things on the prototype, it probably wouldn't matter, because there's always going to be the one that's on the instance that always shadows it, so you'd never even see it. So, I guess what I'm getting at is, I think this isn't really a very likely source of bugs. We've only seen it one time. So then trying to say, oh, we just want a blanket rule that you never ever allow these shadowing things to happen,
as part of the spec, is I think just sort of a premature optimization problem. I understand the motivation, but if you think about it, you're actually pulling in a lot of requirements, because if that's the requirement then you end up adding tests to test262 for all of the properties that are defined on everything in the spec, and it might be easy to do that, but it also takes a lot of execution time if you're going to do it correctly. Why not just say, oh, for this one case, for errors, this is a problem; explicitly say in the spec that you're not allowed to have a cause property on the prototype, because it leads to this problem, and then just leave it at that until we see that there's a general pattern of this causing real problems.
-JHD: This is so before we go to the queue replies. It has happened, many more than one times, that things were put in the wrong place, whether it's caused actual bugs for day-to-day practitioners. practitioners. I agree causes Well, this specific case of it's supposed to be absent on the Prototype and it's supposed to be a known property and the bug of it. Is that it being a prototype property. Like, I don't think, I don't know if that's specific case has happened before with anything. However, when things are placed in the wrong locations that causes bugs for polyfill authors, which affects a vast number of users. Even if they don't directly know it. So these bugs like, these are active actually cause the bugs in the past. and as far as the comment about execution time and stuff, I mean the, I think the way that those tests are authored is an implementation detail of test 262. And if any tests for correctness are slow then the test uses features feature is used for implementations to like only run the the one, the subsets that they want. I'm I'm not sure if like for Shu's alternative proposal, as well, I think that the tests should be there for correctness, even if they're slow. And in either situation.
+JHD: So, before we go to the queue replies: it has happened, many more times than once, that things were put in the wrong place, whether or not it caused actual bugs for day-to-day practitioners. I agree that, in this specific case, the property is supposed to be absent on the prototype and supposed to be an own property, and the bug is that it was a prototype property; I don't know if that specific case has happened before with anything. However, when things are placed in the wrong locations, that causes bugs for polyfill authors, which affects a vast number of users, even if they don't directly know it. So bugs like these have actually been caused in the past. And as far as the comment about execution time and stuff: I think the way that those tests are authored is an implementation detail of test262, and if any tests for correctness are slow, then the `features` metadata lets implementations run only the subsets that they want. The same goes for Shu's alternative proposal as well; I think that the tests should be there for correctness, even if they're slow, in either situation. BSH: I think Gus is on the queue, but I guess he felt what he had to say got covered; he was basically saying this happened at least three times, right? The one thing I said that I don't think you completely understood was that the compute time spent executing tests that are really testing something that could never cause a real bug, which would be the vast majority of properties on objects, seems like a really bad waste of resources in the long term. That's all I'm saying. It's a sort of contributing-to-global-warming sort of thing. That's the sort of thing I'm talking about.
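For context, a sketch of the prototype-shadowing hazard BSH and JHD are discussing, using plain objects to stand in for a host that (incorrectly) put `cause` on an error prototype; per the spec, `cause` is only ever an own property of the error instance:

```js
// A stand-in prototype that wrongly defines `cause`, as in the bug described above.
const buggyProto = Object.create(Error.prototype, {
  cause: { value: undefined, writable: true, configurable: true },
});

// An "error" created without any cause of its own.
const err = Object.create(buggyProto);

// Feature detection with `in` misfires: the inherited property is visible.
console.log('cause' in err); // true, even though no cause was ever attached

// Checking for an *own* property gives the answer polyfills actually need.
console.log(Object.prototype.hasOwnProperty.call(err, 'cause')); // false
```

This is the polyfill-author failure mode JHD mentions: an `in` or truthiness check against an instance silently reports a feature the instance never had.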
JHD: I mean, the total number of properties covered by this is probably in the double digits and not triple, so I don't think we're talking about too many. And we're not talking about having these tests for everything on the web, you know, which is a much larger set of data, but I take the point. -LEO: I think there are like this many things here. is a this is probably like an umbrella of multiple Solutions. I think, the first thing that I thought was actually restricting like extension at ecmascript, build teams, but there is also, What at some point JHD or Mark, I say on the configurability of extensions to do these Beauty. I think this is a nice path to explore if there is an extension we could set for extensions. So that helps in a controlled, in some sort of control environment. This is used for the web. It's not Directly for Native browser implementation, but like for web development. This is very useful like knowing that all the builtins seems we'll have like some contract on any extensions or that shape. Natively. We beasts to have like some control to be effective. To to pull over these objects. Okay, and this is, this is one of the interesting facts to explore. I think this would be useful and can be accessed. Yeah, deserve more investigation. The other parts is just like I think we are talking mostly about builtin APIs is here, but there is also a lot of historical feedback in test 262 about syntax extensions. And when I talk about syntax extensions, I also talked about like restriction that the syntax like people try to create tests for Test262.
This is totally Like the feeling is if I have a new syntax feature to the language, I try to test it out and I create variations of it, There is just not conform to the new spec and try to release something like, like, let's see if this Code works or not, if it's like total garbage, of like letters garbage that you write down and this is like, yeah, we cannot say that actually is invalid test many so often because each can be at some point like some syntactic extension. think this is also a like another field that we can explore what we with syntatic. Restrictions, I don't think there is a, for any of these like even for the builtins, there is there isn't any easy solution. But for all of these like, there is a historical feedback from people who contribute to test 262, expressing the desire to create these restrictions. I wanted to express that part too because I think we're just too focused in builtins so far. Maybe that's the intention JHD and I'm just assuming You also saying any extension, +LEO: I think there are many things here; this is probably an umbrella of multiple solutions. The first thing that I thought of was actually restricting extension of the ECMAScript builtins, but there is also what at some point JHD or Mark said on the configurability of extensions to the builtins. I think this is a nice path to explore: we could set some contract for extensions, so that it helps in some sort of controlled environment. This is useful for the web; not directly for native browser implementations, but for web development. It is very useful knowing that all the builtins will have some contract on any extensions of that shape, natively, so we have some control over these objects. This is one of the interesting facets to explore; I think it would be useful and deserves more investigation. The other part is: I think we are talking mostly about builtin APIs here, but there is also a lot of historical feedback in test262 about syntax extensions. And when I talk about syntax extensions, I'm also talking about restrictions on syntax that people try to create tests for in test262. The feeling is: if I have a new syntax feature in the language, I try to test it out and I create variations of it that just don't conform to the new spec, like, let's see if this code works or not, even if it's total garbage that you write down. And, yeah, we cannot say that it is actually an invalid test, because each variation could at some point become some syntactic extension. I think this is also another field that we can explore with syntactic restrictions. I don't think there is an easy solution for any of these, even for the builtins. But for all of these, there is historical feedback from people who contribute to test262 expressing the desire to create these restrictions. I wanted to express that part too, because I think we're just too focused on builtins so far. Maybe that's the intention, JHD, and I'm just assuming you are also talking about any extension. JHD: Yeah, I mean, so the intention here is not to prohibit any extensions, except when they conflict with things we've already specified. My personal philosophy on extensions is sort of a separate item/topic. MM: Just a quick note, since people were looking for other examples: there is a bug that we're aware of but have not bothered to report, that would be detected by JHD's suggested test. It is that on V8, each of the subclasses of Error overrides the Error.prototype toString() method with an own toString method, and if you remove the override, it makes no difference.
So the override is completely purposeless. Probably an accident. -GCL: I want to say, I actually removed that in 2019. So and no one noticed. So I guess that goes to show how useless it was. +GCL: I want to say, I actually removed that in 2019, and no one noticed. So I guess that goes to show how useless it was. MM: Okay, we noticed when it was there. I'm glad; I did not notice that it was removed. Okay, good. Thank you. JHD: Okay, so there's no one else; I looked at the queue. It sounds like from Shu and the Chrome team, and partially from Yulia and Firefox, that we will not be able to add a normative prohibition - certainly at this time, potentially ever. It seems like the direction desired here is for test262 - just like it has for WeakRef - to have some allowed-to-fail tests to cover these specific regressions. -YSV: Just want to jump in and say Firefox doesn't have a position on this. +YSV: Just want to jump in and say Firefox doesn't have a position on this. -JHD: Thank you. That's good to know. Okay, so, I guess then there's really nothing to do here since we're not making a normative change will be, you know, I'll discuss with the test uses to maintainers separately and try to have path forward here. If can, you know, if in the future we have evidence that either the violations of my suggested prohibition are more severe than believed or evidence that nobody's actually violating them. +JHD: Thank you. That's good to know. Okay, so, I guess then there's really nothing to do here, since we're not making a normative change. I'll discuss with the test262 maintainers separately and try to have a path forward here. If, in the future, we have evidence that either the violations of my suggested prohibition are more severe than believed, or evidence that nobody's actually violating them.
Then I may come back and present that evidence. -SYG: Can I respond to that real quick. Our objection is not. Based on the extent of the current violations. It doesn't it's about future Direction and would lacking a concrete. Use case. Now is that it is true that there is no concrete case now, but also uncomfortable times, closing that door. All right. Yeah, I mean I think you keep because because I think what would end up happening is if we close that door say that it is an allowed thing. Normatively, it's not going to change product decision. Should we like should something happen with error? And then we're like, okay. We actually want a per instance, dot stack, or stack trace or something to make the errors in V8 better in this way. Like, it's not going to change that product decision. So what do I end up happening is there's violations. For a norm that we have had and for a right that we have had that would not have been a normative vibration or willful violation. Otherwise, +SYG: Can I respond to that real quick? Our objection is not based on the extent of the current violations; it's about future direction, while lacking a concrete use case now. It is true that there is no concrete use case now, but I am also uncomfortable closing that door. I think what would end up happening, if we close that door and say normatively that it is not an allowed thing, is that it's not going to change product decisions. Should something happen with Error, and we decide, okay, we actually want a per-instance .stack, or stack trace, or something to make the errors in V8 better in this way, it's not going to change that product decision. So what would end up happening is there are violations of a norm that we have had, for a right that we have had, that would not have been a normative violation or willful violation otherwise. JHD: right.
I mean, `prepareStackTrace` - like, the existence of it would not be in violation of my proposed prohibition either. SYG: But like, what if we want to do something with it in TC39, and then the devtools product team disagrees? Like, it's about preserving the future, right? It's not about just freezing what it is today, right? -JHD: It is theoretically not possible for TC39 to do something that you all disagree with - that's the proposal process. So, that's sort of why I'm still confused about this position because if the current set of delegates is fine with that prohibition, then for any future thing to come up, it would have to be like either an unanticipated use case for an existing property or it would have to be by a new proposal where Chrome, like everyone else, has the ability to participate. And that includes the stack proposal, which has significant hurdles in front of it before advancing. +JHD: It is theoretically not possible for TC39 to do something that you all disagree with - that's the proposal process. So, that's sort of why I'm still confused about this position because if the current set of delegates is fine with that prohibition, then for any future thing to come up, it would have to be like either an unanticipated use case for an existing property or it would have to be by a new proposal where Chrome, like everyone else, has the ability to participate. And that includes the stack proposal, which has significant hurdles in front of it before advancing. SYG: That's a much longer discussion, I think, that I'm not comfortable going into right now. It goes into whether we want the failure mode, if we disagree with something, to be that we keep debating in TC39 until we drop it or nothing happens, or whether we want the failure mode to be: here is something the status quo allows hosts to diverge on; just move on and have that happen.
That avenue would be closed now by this prohibition, supposing the rare case that we want to actually shadow something. So, from a procedural point of view, I'm thinking of failure modes, should discussion in TC39 drag on. For example, I see no reason to close this avenue of permitted host divergence. JHD: Okay. All right. Well, I guess we'll wrap it up after Leo’s comment. -LEO: Yeah, like, I don't want to block anything. I'm trying to be positive of any directions. People want to take here. I'm also not trying to take any past tense of as a maintainer of tests to secure more like starkly, I use the project a lot contributing to contributing to to it. I think it's nice to just set up like some, maybe a one-time call with people who has an interest in test262 people who are maintaining and people are going to be attached be attached to it. See what what? What what can be done. There is much. I know. There are so many plans so many things that like I could think and it could say like how to in how we improve, test262 and I think we can probably find a way to tackle this up. +LEO: Yeah, like, I don't want to block anything. I'm trying to be positive about any directions people want to take here. I'm also not trying to take any particular stance as a maintainer of test262; more, frankly, I use the project a lot, contributing to it. I think it's nice to just set up maybe a one-time call with people who have an interest in test262, people who are maintaining it, and people who are going to be attached to it, and see what can be done. There are so many plans, so many things that I could think of and say about how we improve test262, and I think we can probably find a way to tackle this. + +JHD: Thank you, everybody. -JHD: Thank you, everybody.
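As an aside, the kind of "allowed to fail" check discussed in this topic might look like the following sketch. ECMA-262 defines exactly three own properties on each NativeError prototype (`constructor`, `message`, `name`), so anything extra is a host extension shadowing inherited behavior, such as the own `toString` MM mentioned (removed from V8 in 2019). This is illustrative only, not an actual test262 test.

```js
// Own properties ECMA-262 defines on each NativeError prototype object.
const specified = new Set(['constructor', 'message', 'name']);

// Collect any extra own string-keyed properties, per constructor.
const report = {};
for (const NativeError of [TypeError, RangeError, SyntaxError, EvalError]) {
  report[NativeError.name] = Object.getOwnPropertyNames(NativeError.prototype)
    .filter((name) => !specified.has(name));
}

console.log(report); // expected: empty arrays in current major engines
```

Because the prohibition did not reach consensus, a test like this would live in an allowed-to-fail category rather than the normal conformance suite.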
### Conclusion/Resolution -* + +- + ## Extending Null + Presenter: Gus Caplan (GCL) - [proposal](https://github.com/tc39/ecma262/pull/1321) - [slides](https://docs.google.com/presentation/d/1WPB6bPIoCYnD1YPlhcvcuxiGev8aMLCq-bLN2qWadFk/edit?usp=sharing) -GCL: Okay. So basically this is an old topic about basically extending null in classes. This was a behavior that was supposed to be introduced in es2015, to be a class that behaves somewhat normally, except that it does not inherit from object dot prototype or function dot prototype or you know, all the various prototypes. And so it's kind clean in that regard. And basically, the way that it was implemented in ES2015 was not correct. And so when you try to construct one of these classes, they just throw an error. And so there's been you know, a lot of discussion on this topic over the years and there have been a few attempted fixes. That didn't really work. but overall, the, the decision of the committee in es2015, that extends null should be a thing that behaves correctly is still unchanged. And since then, we've started to see new things in classes that are just, you know, further sort of drifting out of alignment here, for example, class fields, which are not Instantiated correctly. When you use extends null because of how it's currently specified. And so, I think moving forward from here. There is interest in this, this feature behaving correctly. Because you know, it you make a null prototype class, which is you know, useful for sort of locking things down and making sure you're not exposed to prototype pollution whatnot. But on the other hand, you know, it hasn't worked for a very long time and it's pretty Niche So I could see, you know, a possible Avenue of discussion here being You know, maybe we should just leave this broken, personally. I think we should fix it. 
And so I have proposed this change, which is pretty simple, But basically, it just, you know, it allows you to construct these classes and it does this by Basically, allowing for.. it basically changes the class into it base class instead of a derived class. And this has the side effect of not requiring, a super call. And there some, well, there was some contention around the semantics of how super should behave in these classes and previous opinions I've seen that sort of Acted were like, super should always be a valid thing in a class that has Heritage which means it has like the extends clause and then another one was super should throw in classes when they extend null specifically. so I don't know of anybody today who says the it should throw thing, which is why I have brought this presentation here because I believe I have something that is somewhat passable. At this point, but basically, yeah, it's just once the, the class gets past the super call, it works properly. So that's basically what I've done is just patched that to work and you can read the pull request to see how that works. I have some sort of examples here of what the The various weird things you might to mutate a class Look like, which we discuss in detail if anybody wants to, but Yeah, that's basically it. So, I can go to the queue now. +GCL: Okay. So basically this is an old topic about basically extending null in classes. This was a behavior that was supposed to be introduced in es2015, to be a class that behaves somewhat normally, except that it does not inherit from object dot prototype or function dot prototype or you know, all the various prototypes. And so it's kind clean in that regard. And basically, the way that it was implemented in ES2015 was not correct. And so when you try to construct one of these classes, they just throw an error. And so there's been you know, a lot of discussion on this topic over the years and there have been a few attempted fixes. That didn't really work. 
but overall the decision of the committee in ES2015, that extends null should be a thing that behaves correctly, is still unchanged. And since then, we've started to see new things in classes that are just, you know, further drifting out of alignment here, for example class fields, which are not instantiated correctly when you use extends null, because of how it's currently specified. And so, I think moving forward from here, there is interest in this feature behaving correctly, because, you know, if you make a null-prototype class, it is useful for sort of locking things down and making sure you're not exposed to prototype pollution and whatnot. But on the other hand, you know, it hasn't worked for a very long time and it's pretty niche. So I could see a possible avenue of discussion here being, you know, maybe we should just leave this broken; personally, I think we should fix it. And so I have proposed this change, which is pretty simple. Basically, it allows you to construct these classes, and it does this by changing the class into a base class instead of a derived class. And this has the side effect of not requiring a super call. And there was some contention around the semantics of how super should behave in these classes; previous opinions I've seen expressed were, one, that super should always be a valid thing in a class that has heritage, meaning it has the extends clause, and another was that super should throw in classes when they extend null specifically. I don't know of anybody today who says it should throw, which is why I have brought this presentation here, because I believe I have something that is somewhat passable at this point. But basically, yeah, it's just: once the class gets past the super call, it works properly.
So that's basically what I've done is just patched that to work and you can read the pull request to see how that works. I have some sort of examples here of what the The various weird things you might to mutate a class Look like, which we discuss in detail if anybody wants to, but Yeah, that's basically it. So, I can go to the queue now. -YSV: We took a look at this proposal as a team and one of our Engineers implemented it. One of our worries Was that the way that super works is we need to do the lookup and the determination that it’s still special casing that behavior. Etc. dynamically. And from our experimentation. This turns out to be true, which means we're going to have to admit two completely separate sets of byte code to implement. Correctly, and it has cascading effects on the entire engine. We're not super happy about this, but we see why we want something like object extends null. So the use case is definitely there. We talked about a couple of different potential Solutions like instead of null to extend void, which also people weren't super happy about because you know void 0 evaluates to undefined so that Feels a little bit strange, but with something like void we can do the special casing at parse time rather than and like in at the correct bytecode immediately rather than having to it at runtime. I think that the super case hasn't been fully finished yet. And I don't know if others are going to agree with me here. +YSV: We took a look at this proposal as a team and one of our Engineers implemented it. One of our worries Was that the way that super works is we need to do the lookup and the determination that it’s still special casing that behavior. Etc. dynamically. And from our experimentation. This turns out to be true, which means we're going to have to admit two completely separate sets of byte code to implement. Correctly, and it has cascading effects on the entire engine. 
We're not super happy about this, but we see why we want something like class extends null. So the use case is definitely there. We talked about a couple of different potential solutions, like extending void instead of null — which people also weren't super happy about, because void 0 evaluates to undefined, so that feels a little bit strange — but with something like void we can do the special casing at parse time and emit the correct bytecode immediately, rather than having to do it at runtime. I think that the super case hasn't been fully settled yet, and I don't know if others are going to agree with me here.

-GCL: yeah, that's a very reasonable. Constraint, I think personally, I don't really have an opinion on like, what the, those kind of the requirements on a super call here should be, I just kind of want to make the overall Feature work. So like I don't know if you know, Firefox wants this to grow and other people are not against that, you know, maybe we can move forward with that.
+GCL: Yeah, that's a very reasonable constraint. I think, personally, I don't really have an opinion on what the requirements on a super call here should be; I just kind of want to make the overall feature work. So, like, if Firefox wants this to grow and other people are not against that, you know, maybe we can move forward with that.

-YSV: I'm not 100% sure if throwing will fully address our case. I think so, the thing that would address our issue is, if we could do this at parse time rather than dynamically like it goes statically analyzable, but you could assign a variable to null and then do class extends. And then that is null. So we would still need to do a dynamically Also, if classes had been specified differently. This would be easier in this would be a no-brainer. So we're kind of stuck with the class implementation that we have. Yeah.
I don't know if there's a special syntax that we would be comfortable with introducing here, like, rather than class having something like base class and then it doesn't have the possibility to extend. There was the void suggestion, but this is sort of the constraint that we're working with.
+YSV: I'm not 100% sure if throwing will fully address our case. The thing that would address our issue is if we could do this at parse time rather than dynamically — if it were statically analyzable — but you could assign a variable to null and then do class extends that variable, and it is null, so we would still need to do it dynamically. Also, if classes had been specified differently, this would be easier; this would be a no-brainer. So we're kind of stuck with the class implementation that we have. Yeah. I don't know if there's a special syntax that we would be comfortable with introducing here — like, rather than class, having something like base class, which then doesn't have the possibility to extend. There was the void suggestion, but this is sort of the constraint that we're working with.

-MAH: I basically wanted to echo Julia’s point. I think, from what I understand, the intent of extend null is to be able to create a class that doesn't inherit from object prototype. In this case, it really means creating a base class (should read root class) that you don't want to inherit from anything. When you write extend with an identifier, and that evaluates to null, a base (root) class is probably not what you meant, which is why I would support extends void instead because that is pure syntax. It's not an evaluation. void 0 does evaluate to undefined but that is no longer the syntax form expressing a base (root) class that doesn't inherit from object. And as you mention, we're starting from a weird situation because the implicitness of no extend clause means that we actually inherit from object prototype.
If all classes never did we wouldn't reveal the situation and we would have had to require extends object for everything. I think we're supportive of extend void to support this use case.
+MAH: I basically wanted to echo Yulia’s point. I think, from what I understand, the intent of extends null is to be able to create a class that doesn't inherit from Object.prototype. In this case, it really means creating a base class (should read root class) that you don't want to inherit from anything. When you write extends with an identifier, and that evaluates to null, a base (root) class is probably not what you meant, which is why I would support extends void instead, because that is pure syntax — it's not an evaluation. void 0 does evaluate to undefined, but that is not the syntax form expressing a base (root) class that doesn't inherit from Object. And as you mention, we're starting from a weird situation, because the implicitness of no extends clause means that we actually inherit from Object.prototype. If no class ever did, we wouldn't be in this situation; we would have had to require extends Object for everything. I think we're supportive of extends void to support this use case.

JHD: I just wanted to mention my mental model here. This topic has had a lot of things thrown around. The current rule is that in any class that has extends, in order to use `this` in the constructor, I have to call super first; and in any class without extends, I can't call super. A class that has extends but can't call super is really weird. And the thing I didn't keep squishing into the queue topic is that I think it's also weird if there's a special syntactic form for null: if I can do class extends X for any expression, and null was a valid value, it seemed really weird to me to not allow null in there. And so, like, there's some comments in Matrix about null being special.
It's always going to be a special case, so why can't `super` be special-cased in?

@@ -455,37 +466,37 @@ YSV: Because of the impact on the run time, this is a serious concern. Like, thi

JHD: So, like, optional calls — those don't have this concern, because that's not every class or something. But this would?

-YSV: This would impact I think that this impacts private fields and we would need to introduce a new Field on all objects, which is like we're already running it like low on memory for how we describe objects. So, I mean like things are solvable. solvable. Yes, I think that there's a better way to solve this. Than forcing this to be dynamically determined.
+YSV: I think that this impacts private fields, and we would need to introduce a new field on all objects, and we're already running low on memory for how we describe objects. So, I mean, things are solvable, yes, but I think that there's a better way to solve this than forcing it to be dynamically determined.

JHD: Yeah, I can't speak to the implementation difficulties, but it'll be weird, I think, if we do something static.

-JHX: Yeah, I really hope we can fix that about. I remember this a service that as some were asking me about the disk extending now Promenade. Inside time. I tell them, I think the committee. What? Well, fix doctor in, maybe two or three years, but now the serious, have our pasta and with we're still in the same place, place, so I really hope can find a way to somehow because programs need that. They need a way to create an object or world without the link. Objects. and I think, I think this is just a super problem. Maybe we could relax the rule to allow silver and in this case and and actually I think for the most programmers that they don't care the this super problem that because you allows you for the two super it just in these cases if they just do nothing. and so I allows super for that doesn't cause any problem.
I mean in the, in the, in the map review, I anyway, I really hope we can we can fix that.
+JHX: Yeah, I really hope we can fix that. I remember some people were asking me about this extends null problem; at the time I told them I think the committee will fix it in maybe two or three years, but now several years have passed and we're still in the same place, so I really hope we can find a way somehow, because programmers need that. They need a way to create an object without the link to Object.prototype. And I think this is just a super problem. Maybe we could relax the rule to allow super in this case — and actually, I think most programmers don't care about this super problem, because if you allow super in these cases it just does nothing, and so allowing super there doesn't cause any problem. Anyway, I really hope we can fix that.

BSH: So, it just occurred to me that, first of all, if you want to just extend the value null, then you probably need to go with always requiring a super call, because if you write a class A extends some expression that might result in null, then, when you write the constructor body, you can't easily sometimes call super and sometimes not. So I would think that if you're going to do it that way, you probably have to have a call to super — but maybe that's just the wrong way of going about it. This is a weird and unusual situation. Do you really want to trigger this different behavior because a dynamic expression happened to evaluate to null? Maybe you really should have to have a syntax difference if you're going to get this significant difference in behavior.
So that would make me lean more toward class A extends void, which is an actual different syntax in order to specify this differing behavior. That way you could tell when reading it that you're getting this different behavior, and know that, oh, well, you shouldn't have a call to super because you're not extending anything.

-MAH: Yeah, in my opinion extends, extends, null something that every is to know is just wrong and you know, they like it doesn't really mean anything if the intent was to create a base class that didn't inherit from object using something that evaluate to null at runtime is the cause of the the problems. So, I agree. I'll move to my next one. The so I also want to say that extends void can actually be polyfilled. So the transpiler can emit a nil class and the only observable difference would be if you look at the Prototype chain. Of your generated class Constructor, but besides that, it would behave for all intended purposesas extending from null. we're not, adding the object prototype in the book that changed. So, there is a way to actually implement this in transpilers as well.
+MAH: Yeah, in my opinion, extends null evaluating to something at runtime is just wrong — it doesn't really mean anything. If the intent was to create a base class that didn't inherit from Object, using something that evaluates to null at runtime is the cause of the problems. So, I agree. I'll move to my next one: I also want to say that extends void can actually be polyfilled. The transpiler can emit a null-prototype class, and the only observable difference would be if you look at the prototype chain of your generated class constructor; but besides that, it would behave for all intended purposes as extending from null — we're not adding Object.prototype into the chain. So there is a way to actually implement this in transpilers as well.

GCL: Yes, I was just reading something in the chat.
Yulia, I don't remember if this was intentional or not, but extending Function.prototype when you say extends null may be something that's not supposed to happen.

YSV: So, you mean modify the spec in order to not need to do this — is that what you're suggesting? Because it's important, like, what the spec requires here.

-GCL: Yeah, I don't remember remember the original. Like, I've basically rebased this PR like 300 times over the last two years and I feel like it's not supposed to extend function.prototype on the Constructor, but I don't remember specifically.
+GCL: Yeah, I don't remember the original. Like, I've basically rebased this PR like 300 times over the last two years, and I feel like it's not supposed to extend Function.prototype on the constructor, but I don't remember specifically.

??: There are some test cases like can go over what you about my it you think it would be okay to have liked it to have two Constructor be an Call or something.

-GCL: We call levels have to extend function, like you can set the Prototype of function object to null. You just wouldn't be able to use like.com and stuff on it. All right.
+GCL: We don't have to extend Function — like, you can set the prototype of a function object to null. You just wouldn't be able to use like .call and stuff on it. All right.

SYG: Okay, so just clarifying that that is in your proposal.

GCL: Yes, that's an available option, I believe, but I'm not a hundred percent certain.

-YSV: I do have one item on the queue that I just want to really quickly get to. That's all right. So I think that the proposal like the intention of The Proposal, what it tries to enable, which is the object outside prototype of null pattern, that we in non class object creation, or class creation, make that something that's accessible within the class. Syntax would be really great. That's fantastic. And making that something that is clearly communicated to the user would be even better.
So, I think that the, the idea of the This great, very much on point. We should fix this. We just need to figure out how to communicate this in the clearest way to the user. And I think MAH gave a really great sum up that basically, you know, this setPrototypeOf null, people do that because they know to do it because they've been working with JavaScript for a long time, but maybe we can make the class syntax somehow better. Just just as a thought like, you know, we talked about extends void, we talked about extends null and the expectation of how super works when there was extends, maybe like in C++. We have virtual classes, which must be extended in order to be used. You can't use them there and this feels like something similar we want to do here. I don't think virtual is a good name, but I would be open to seeing new syntax for this feature because I do think it's useful.
+YSV: I do have one item on the queue that I just want to really quickly get to, if that's all right. So I think that the intention of the proposal — what it tries to enable, which is the Object.setPrototypeOf(null) pattern that we have in non-class object creation — making that something that's accessible within the class syntax would be really great. That's fantastic. And making that something that is clearly communicated to the user would be even better. So I think that the idea here is great, very much on point. We should fix this; we just need to figure out how to communicate it in the clearest way to the user. And I think MAH gave a really great summing up: basically, you know, this setPrototypeOf null — people do that because they know to do it, because they've been working with JavaScript for a long time, but maybe we can make the class syntax somehow better. Just as a thought: we talked about extends void, we talked about extends null and the expectation of how super works when there was extends, maybe like in C++.
We have virtual classes, which must be extended in order to be used — you can't use them directly — and this feels like something similar to what we want to do here. I don't think virtual is a good name, but I would be open to seeing new syntax for this feature, because I do think it's useful.

-GCL: Okay, so I guess at this point, I'd just be curious, like, especially like I said in the first slide, slide, do we want basically like I can probably go and look into the function.prototype thing for next time or we could just say let's come up with a new thing, and I'll come back with an actual like proposal for a new syntax or something.
+GCL: Okay, so I guess at this point I'd just be curious — especially like I said in the first slide — do we want this? Basically, I can probably go and look into the Function.prototype thing for next time, or we could just say let's come up with a new thing, and I'll come back with an actual proposal for a new syntax or something.

-RBN: I brought this up in the chat as in Matrix, and I was wondering if there's a possibility and what Yulia and other implementers might think of changing the specification to introduce a built-in Constructor function, that specifically handles the mill case and have these spec change the default prototype for extends, null to use this built-in Constructor, it have, that be the differentiator between whether it's a class extends. Or a class with no extends, Claus versus a class that extends null with this
+RBN: I brought this up in the chat in Matrix, and I was wondering if there's a possibility — and what Yulia and other implementers might think — of changing the specification to introduce a built-in constructor function that specifically handles the null case, and have the spec change the default prototype for extends null to use this built-in constructor, and have that be the differentiator between whether it's a class extends.
Or a class with no extends clause, versus a class that extends null.

-YSV: Would this be used for all class declarations.
+YSV: Would this be used for all class declarations?

RBN: Just class extends null. So basically, it would insert something in the prototype chain between the constructor and Function.prototype that essentially marks the class as having extended null — which, I think, in the comments you mentioned, the concern was that using the prototype of the constructor to differentiate between Function.prototype and whether the class extended null required more information. So possibly inserting something that sits in between, which is essentially only used as a marker — but it could also theoretically be a function that does the thing that an extends null class might do, producing the correct type from super that you'd expect from extends null, using new.target, et cetera.

@@ -495,22 +506,26 @@ MM: Yeah, two things. First of all, does the need for this actually come up in p

GCL: I have seen use cases demonstrated, but I think the concept is useful. I mean, I can think of code where I would use it. I've seen other delegates present code bases where they do similar patterns.

-MM: so, in general, the, the standard of syntax, new syntax requires a very high bar. You have extended --
+MM: So, in general, the standard for new syntax requires a very high bar. You have extended --

-YSV: Yeah, okay. So the from my view, the being able to have something that's a non directly invocable base class. Is that this allows you to, to describe an interface or certain set of behaviors that you want to have disparate classes inherit from, this is how I've seen it used. I think there are probably other ways to use it, but as a concept this thing, which you can't use directly But it's actually super useful anyway for describing shared Behavior help organize code bases in a really nice way. That's what I see as.
And also the fact that it doesn't have any relationship to any other prototype. That is really lightweight. We actually use such classes within spider monkey for some of the lightweight constructs that we have. So that's on our JavaScript side of the little spider monkey code base. So, I think there really are a couple of niche use cases, but right now the Syntax for doing that you can't can't use class. You to use objects.set prototype of, and it's a little clunky. It is sort of an expert feature. You have to know what you're doing, but I think that the code pattern is something that can be beginner friendly, and can also really benefit programmers in organizing their thoughts. Okay. Well if that's worth it, then I think extends void does read well. It is the reading that has extends. Nothing is cos is consistent with taking that class to be a base class?
+YSV: Yeah, okay. So, from my view, being able to have something that's a non-directly-invocable base class allows you to describe an interface or a certain set of behaviors that you want disparate classes to inherit from; this is how I've seen it used. I think there are probably other ways to use it, but as a concept, this thing which you can't use directly is actually super useful anyway for describing shared behavior, and it helps organize code bases in a really nice way. That's what I see. And also the fact that it doesn't have any relationship to any other prototype — that is really lightweight. We actually use such classes within SpiderMonkey for some of the lightweight constructs that we have, on the JavaScript side of the SpiderMonkey code base. So I think there really are a couple of niche use cases, but right now, for the syntax to do that, you can't use class; you have to use Object.setPrototypeOf, and it's a little clunky. It is sort of an expert feature.
You have to know what you're doing, but I think that the code pattern is something that can be beginner friendly, and can also really benefit programmers in organizing their thoughts. Okay — well, if that's worth it, then I think extends void does read well. The reading that it extends nothing is consistent with taking that class to be a base class.

SYG: So, Yulia, your thought about how we communicate this got me thinking about structs. Structs try to solve several different problems, but their theme is restriction — taking things away from general class declarations to make them more restrictive, so that they're better for some use cases, like concurrency or sealed objects or better memory layout or something like that. This extends null — having a root-class thing that doesn't derive from anything — could be looked at as such a restriction. Would it make sense to explore this with structs? And that would be a long way off, so this is really just more of a thought. And if that were possible via structs, would there be much demand for it via regular class syntax? I guess the actual question here is: do you see the kind of restriction where we don't want to inherit from anything as desirable in and of itself, or would it be fine as a package deal with other restrictions, like a sealed instance? That was more of a question about Yulia's use cases.

YSV: What was the question about using structs?

-SYG: So structs have additional restrictions like sealed instances, and I was wondering if the no Base Class restriction with the extents, null is a desirable in and of itself or does it also make sense in conjunction with other restrictions. Like could we This use case purely - drums.
+SYG: So structs have additional restrictions, like sealed instances, and I was wondering if the no-base-class restriction with extends null is desirable in and of itself, or does it also make sense in conjunction with other restrictions — like, could we serve this use case purely via structs?

YSV: That's a good question. I was thinking that it's useful independently. Like, you may just want to describe an interface and have it be very similar to the class that inherits from it. I'm not sure how exploring this purely for structs would play with classes — I'm blanking right now on the structs proposal, so I can't answer this question very well. Yeah, we can follow up, because I think that might also be an avenue of exploration, but to be completely honest, I think it may have the same communication problems for users if we do this with structs, because people will expect this to be related to classes, as they have for some time. So I think there might be some issues with that, but maybe we can discuss it more in committee and flesh this out a bit more.

GCL: I think this is a good, productive discussion. Moving forward, I'm going to pursue more details about these runtime requirements and maybe how other syntaxes or proposals might cover this, and maybe come back with something else in the future — or maybe not, depending on where that ends up. Thank you, everyone.
+
### Conclusion/Resolution
-* GCL to explore more details of runtime requirements and explore alternatives
+
+- GCL to explore more details of runtime requirements and explore alternatives
+
## Error Cause for Stage 4
+
Presenter: Chengzhong Wu (CZW)

- [proposal](https://github.com/tc39/proposal-error-cause)

@@ -531,23 +546,25 @@ CZW: Thank you.

BT: All right, so I guess Hax is also a plus one. Yeah, there doesn't appear to be any discussion. So unless there are any objections right now, I think we're at stage 4. Congratulations.
### Conclusion/Resolution
-* stage 4
+
+- stage 4

## Array.fromAsync update
+
Presenter: J. S. Choi (JSC)

- [proposal](https://github.com/js-choi/proposal-array-from-async)
- [slides](https://docs.google.com/presentation/d/1OHfB6rMrv27A2SGOZ-hw0U0t6f5OP1beLVsKKp48-Dw/edit?usp=sharing)

-JSC: This is a lightning update. 10 minutes long, very fast. Not much time for plenary questions. Please create our comments in an issue on the repository.
+JSC: This is a lightning update. 10 minutes long, very fast. Not much time for plenary questions. Please leave your comments in an issue on the repository.

-JSC: A rapid review. This is currently a Stage-1 proposal presented in August: Array.fromAsync. It's like Array.from, which creates a new array from an iterable and dumps the iterable into an array. So array.fromAsync is just like that except it also works on async iterables, and it returns a promise. So, lots of people, including myself, do this manually right now with for await. We want to dump streams or async iterables into some synchronous data structure so we can inspect it, examine it, or print it or whatever. Unit tests, command-line interfaces.
+JSC: A rapid review. This is currently a Stage-1 proposal presented in August: Array.fromAsync. It's like Array.from, which creates a new array from an iterable and dumps the iterable into an array. So Array.fromAsync is just like that, except it also works on async iterables, and it returns a promise. So, lots of people, including myself, do this manually right now with `for await`. We want to dump streams or async iterables into some synchronous data structure so we can inspect it, examine it, or print it or whatever. Unit tests, command-line interfaces.

-JSC: Just a couple of quick updates. Because `for await` supports sync iterables with promises, Array.fromAsync has been changed to support that. So any sync iterables yielding promises can be flattened into an array.
+JSC: Just a couple of quick updates.
Because `for await` supports sync iterables with promises, Array.fromAsync has been changed to support that. So any sync iterables yielding promises can be flattened into an array.

JSC: So there are just a couple, three controversies to touch on real quick. One: there was a question of whether this was redundant with iterator helpers. I think this is resolved; we talked it out with at least one co-champion of iterator helpers. This is about a method in iterator helpers called toArray. I think we should have both, or at the very least we should have Array.fromAsync; at least one co-champion of iterator helpers agrees. I don't think there's much contention around here anymore.

-JSC: There are two other areas of more contention. Maybe not super strong, but it's something that I wanted to float and to invite people to leave comments on their issues. #7: Non-iterable array-like inputs. Array.from supports non-iterable array-like objects as inputs. These are objects that do not use the iterable interface. They instead have a length property and they also support index properties. So one representative said yes, fromAsync should accept them. Another representative is iffy about it, because the use cases are murky or less clear. Basically, though, the question is: Should the inputs of the Array.fromAsync be a superset of Array.from? Because if Array.fromAsync doesn’t accept non-iterable array-like inputs, then it would start being a Venn diagram with an intersection between, rather than being a superset of what Array.from accepts. My current inclination is yes: that it may be surprising when someone switches from Array.from to Array.fromAsync, and they're relying on this behavior. The spec currently reflects that, but since there's a little bit of contention on this, please feel free to leave comments and hash it out in the issue before I present this in the next plenary meeting or whatever.
+JSC: There are two other areas of more contention. Maybe not super strong, but it's something that I wanted to float and to invite people to leave comments on in the issues. #7: Non-iterable array-like inputs. Array.from supports non-iterable array-like objects as inputs. These are objects that do not use the iterable interface. They instead have a length property and they also support index properties. So one representative said yes, fromAsync should accept them. Another representative is iffy about it, because the use cases are murky or less clear. Basically, though, the question is: should the inputs of Array.fromAsync be a superset of Array.from's? Because if Array.fromAsync doesn’t accept non-iterable array-like inputs, then it would start being a Venn diagram with an intersection, rather than a superset of what Array.from accepts. My current inclination is yes: it may be surprising when someone switches from Array.from to Array.fromAsync and they're relying on this behavior. The spec currently reflects that, but since there's a little bit of contention on this, please feel free to leave comments and hash it out in the issue before I present this at the next plenary meeting or whatever.

JSC: There's also a debate about whether TypedArray.fromAsync should exist too. It's very similar. I don't really have an opinion on this: whether to punt it or whether it's within scope of this proposal. Please feel free to leave comments on issue #8.

@@ -555,9 +572,9 @@ JSC: I'm not asking for Stage advancement or anything, but I would love to hear

SYG: So, if fromAsync were to take array-likes, are you thinking it would have a microtask tick? Like, it would `await` the thing, or would it check if it is a sync iterable and only then do the await?

-JSC: So I think this the answer comes with how `for await` works with arrays of promises.
The answer I think is, yes, I think it would act like `for await`, whether on regular sync iterables with promises, or non-promises, or whatever, it would `await` on each item from the array-like, like a sync iterable. Does that answer your question?
+JSC: So I think the answer comes with how `for await` works with arrays of promises. The answer, I think, is yes: it would act like `for await` — whether on regular sync iterables with promises, or non-promises, or whatever, it would `await` each item from the array-like, like a sync iterable. Does that answer your question?

-SYG: Yeah, it was an opinion thing. I was just wondering, are you again expecting that to make sense?
+SYG: Yeah, it was an opinion thing. I was just wondering, are you expecting that to make sense?

JSC: Thank you. Yeah, I believe that's how the spec is right now; if it doesn't, that would be a bug. And, as for your TypedArray thoughts —

@@ -565,7 +582,7 @@ SYG: I'm wondering what the use cases are for TypedArray. The use cases abound [

JWK: Maybe converting a Stream into TypedArray.

-SYG: But per-element async for the TypedArray, like you're doing here? I don't know.
+SYG: But per-element async for the TypedArray, like you're doing here? I don't know.

JWK: Yeah, that's strange.

SYG: Yeah, we're streaming. You probably want to chunk then.

JSC: Yeah. So, I don't know — I haven't decided whether we should keep it separate or within scope of this proposal. I don't have an opinion. Please feel free to leave a comment on it. My inclination is why I put yes here, but it's very weak; I could easily swap to no for the next plenary, to keep it small. Since I'm getting at least weak negative signals, I'm happy to switch it to no.

-SYG: Yeah, I think my I'm weak no, because I don't know. how it would be used, and Since the end, there's no reason to add it. Now. We can always add it later.
At least, the comment from the core-JS. Maintainer is like, they're going to add it to core-JS. Don't add it to the know.
+SYG: Yeah, I think I'm a weak no, because I don't know how it would be used, and in the end, there's no reason to add it now. We can always add it later. At least, the comment from the core-js maintainer is like: they're going to add it to core-js; don't add it to the language.

JSC: Okay. I will switch it to "no" for next plenary. That's about it. So right now, again, for everyone: my inclination is to accept non-iterable array-like inputs; each item would be awaited, just like with an async iterable or a non-async-iterable (a sync iterable), and that's what I plan to present and explain. Please feel free to leave comments on the issues if you have any opinions. Thank you very much again. That's it.

### Conclusion/Resolution

-* Update given; no TypedArray.fromAsync in this proposal
+
+- Update given; no TypedArray.fromAsync in this proposal
+

## BigInt Math update
+
Presenter: J. S. Choi (JSC)

- [proposal](https://github.com/tc39/proposal-bigint-math)

@@ -589,17 +609,17 @@ JSC: This is the same thing. It's a very fast lightning update on BigInt Math. This one is a wee bit more complicated, but it's the same. Not much time for plenary questions, but please feel free to leave comments on the stuff.

-JSC: Rapid review: BigInt Math seeks to extend a couple of Math functions to accept and return BigInts. I work with BigInt sometimes. I'm glad they're in the language. I think it's weird that Math stuff doesn't work on them while operators do. You know? Basic stuff. abs, sign, pow, max.
+JSC: Rapid review: BigInt Math seeks to extend a couple of Math functions to accept and return BigInts. I work with BigInt sometimes. I'm glad they're in the language. I think it's weird that Math stuff doesn't work on them while operators do. You know? Basic stuff: abs, sign, pow, max.

-JSC: Not proposing any new functions yet.
Instead, laying the groundwork for new functions, like bit length, pop count, that would be polymorphic. Just like how this proposal would make them polymorphic.
+JSC: Not proposing any new functions yet. Instead, laying the groundwork for new functions, like bit length and popcount, that would be polymorphic, just like how this proposal would make these polymorphic.

JSC: The philosophy is trying to stay consistent with precedent already set by the language. Although what precedent means can be a little murky. I know from my perspective I think of “operations” as including both operators and functions. And operators are polymorphic, and so we want to match that precedent.

JSC: The big change is that most proposed functions were removed. It's because people thought there was either too much computational complexity or just no use case for them. Now, only these five functions would remain. And it reached Stage 1, just to make sure everyone remembers: Stage 1, “worth investigating”, back in August. So I'm planning to present this again sometime, but there are a couple of controversies to work out.

-JSC: Number one: Whether to make a couple of Math functions polymorphic – versus making a new global like BigMath or whatever while keeping stuff monomorphic. I think I have a weakly strong opinion that I would rather keep stuff polymorphic. I think it would be weird to add a new namespace object. I think that we already have polymorphic operators, and I don't really see much of a distinction between syntax operators and Math functions. But a representative has floated this and so please feel free to leave comments on Issue #14 about this.
+JSC: Number one: Whether to make a couple of Math functions polymorphic – versus making a new global like BigMath or whatever while keeping stuff monomorphic. I have a weakly held opinion that I would rather keep stuff polymorphic. I think it would be weird to add a new namespace object.
I think that we already have polymorphic operators, and I don't really see much of a distinction between syntax operators and Math functions. But a representative has floated this, so please feel free to leave comments on Issue #14 about this.

-JSC: I think the trade-offs are similar for user burden, mental burden perspective. Either the user has to remember which Math functions are polymorphic, or they need to remember which Math functions are also in BigMath. I think that's about even. SHO has a good point that they would also have to be a Decimal or DecMath too. We would have to add BigMath and DecMath versus making a couple of Math functions polymorphic. Either way, there's going to be a table, and it's just whether the columns are going to be global objects versus a polymorphic impotence.
+JSC: I think the trade-offs are similar from a user-burden, mental-burden perspective. Either the user has to remember which Math functions are polymorphic, or they need to remember which Math functions are also in BigMath. I think that's about even. SHO has a good point that there would also have to be a DecMath for Decimal too: we would have to add BigMath and DecMath, versus making a couple of Math functions polymorphic. Either way, there's going to be a table, and it's just whether the columns are going to be global objects versus polymorphic inputs.

JSC: Another controversy, about sqrt and cbrt. Two representatives have battled in an issue. I spun this out into a new issue. One representative’s perspective is that sqrt and cbrt are useful, and that they are difficult to get right in userspace, and it is weird for them to not be valid while we have exponentiation. They would truncate towards zero, and that representative considers that unsurprising, since it matches BigInt division. Another representative has pushed back on this. And so, they've gone back and forth about userland versus not. I don't have a strong opinion about this. I see both sides.
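For context on the userland-difficulty argument, an integer square root that truncates toward zero can be written with Newton's method, though the termination condition takes some care. This is an illustrative userland sketch with a hypothetical name, not the proposal's spec text:

```javascript
// Illustrative userland sketch: integer square root for BigInt,
// truncating toward zero, via Newton's method.
function bigintSqrt(n) {
  if (n < 0n) throw new RangeError("negative BigInt has no real square root");
  if (n < 2n) return n;
  let x = n;
  let y = (x + 1n) / 2n;
  while (y < x) {
    x = y;
    y = (x + n / x) / 2n; // BigInt division already truncates toward zero
  }
  return x;
}
```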
Please feel free to hash it out in the comments and we'll see if there's also time in the plenary.

@@ -611,7 +631,7 @@ JSC: All right. Thanks SHO.

SYG: I don't quite get the Decimal argument. Is it that using a BigInt or Decimal namespace object is bad for discovery – or bad because you really want polymorphic things?

-SHO: Yes, I think it is better and I don't have a more refined way to say this at the moment. I would have to think about it, but I think it is better for users for the Math functions to be polymorphic rather than looking up different objects. As JSC points out, chances are, at some point, there's going to be a table that tells you which ones are polymorphic and which ones aren't. I would prefer they actually [all] be polymorphic and return no-ops. If it doesn't make sense, rather than throwing. But that's a whole different philosophy discussion there. In this case, I think it is better to have a table where you're looking at which Math functions are polymorphic versus having different global objects for each numeric type.
+SHO: Yes, I think it is better, and I don't have a more refined way to say this at the moment. I would have to think about it, but I think it is better for users for the Math functions to be polymorphic rather than looking up different objects. As JSC points out, chances are, at some point, there's going to be a table that tells you which ones are polymorphic and which ones aren't. I would prefer they actually [all] be polymorphic and return no-ops if it doesn't make sense, rather than throwing. But that's a whole different philosophy discussion there. In this case, I think it is better to have a table where you're looking at which Math functions are polymorphic versus having different global objects for each numeric type.

JSC: The point is that: Either way we're going to have a table. It's just whether the columns are going to be “in this global object or not” versus “accepted by this polymorphic function or not”.
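The polymorphic shape under discussion can be sketched in userland. The names below are stand-ins for illustration only; the proposal would instead extend the existing Math functions themselves:

```javascript
// Illustrative sketch of polymorphic abs/sign accepting both Number
// and BigInt, returning a value of the same numeric type.
function polyAbs(x) {
  if (typeof x === "bigint") return x < 0n ? -x : x;
  return Math.abs(x);
}
function polySign(x) {
  if (typeof x === "bigint") return x < 0n ? -1n : x > 0n ? 1n : 0n;
  return Math.sign(x);
}
```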
Either way, we're going to have a table. @@ -641,7 +661,8 @@ JSC: So the pushback that I've gotten from representatives has been usually from WH: If you get pushback, we'll deal with it then. I don’t want to omit it just due to fear of potential pushback. -JSC: All right. Thanks WH. All right, I think I'm about out of time. Does anyone else want to say anything before the hour? If not, then thank you very much. +JSC: All right. Thanks WH. All right, I think I'm about out of time. Does anyone else want to say anything before the hour? If not, then thank you very much. ### Conclusion/Resolution -* Update given + +- Update given diff --git a/meetings/2021-10/oct-27.md b/meetings/2021-10/oct-27.md index b22adb1f..7bfa6808 100644 --- a/meetings/2021-10/oct-27.md +++ b/meetings/2021-10/oct-27.md @@ -1,7 +1,8 @@ # 27 October, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | | | | @@ -19,8 +20,8 @@ | Jordan Harband | JHD | Coinbase | | J. S. Choi | JSC | Indiana University | - ## Destructuring Private Fields + Presenter: Justin Ridgewell (JRL) - [proposal](https://github.com/jridgewell/proposal-destructuring-private) @@ -50,7 +51,7 @@ AKI: All right. MM to clarify things, I’d like to rephrase to make sure I unde JRL: I'm happy to take it to Stage 2 and then come back two more times. That seems fine with me. -YSV: Yeah, private fields are a bit tricky. We have a couple of open issues on them. So I do support this going through the staging process just so that we can review and clarify any issues that might come up with the fact that we're touching a pretty complex implementation piece. I don't think that there's going to be any issues, but I don't mind taking our time here. +YSV: Yeah, private fields are a bit tricky. We have a couple of open issues on them. 
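For reference, the destructuring-private proposal discussed here would allow private names in destructuring patterns. The proposed syntax is shown only in comments, since it is not implemented anywhere; the runnable part below is today's equivalent:

```javascript
// Proposed (illustrative, per the destructuring-private proposal):
//   class Counter {
//     #count = 1;
//     read() {
//       const { #count: count } = this; // destructure a private field
//       return count;
//     }
//   }
// Today's equivalent reads the private field directly:
class Counter {
  #count = 1;
  read() {
    const count = this.#count;
    return count;
  }
}
```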
So I do support this going through the staging process just so that we can review and clarify any issues that might come up, given that we're touching a pretty complex implementation piece. I don't think that there's going to be any issues, but I don't mind taking our time here.

JRL: Okay

@@ -60,7 +61,7 @@ AKI: I believe that is accurate. I think you can ask for Stage 2 right now.

JRL: Yeah, so Stage 2?

-MM: I'm happy with Stage 2.
+MM: I'm happy with Stage 2.

YSV: Yeah, that works.

@@ -68,28 +69,32 @@ WH: Me too.

??: Need reviewers

-JRL: if there are any new beginners or any newbies to the committee, this is a super small proposal. It might be a good one for someone who's not comfortable yet.
+JRL: If there are any new beginners or newbies to the committee, this is a super small proposal. It might be a good one for someone who's not comfortable yet.

RRD: Yeah, I would be interested in reviewing this one. This is Robin Ricard from Bloomberg.

SH: And this is Sarah. I would also review it. I would get some help from Igalia people to help me on it, but I'd be glad to review.
+

### Conclusion/Resolution

-* Stage 2
-* reviewers:
- * KG
- * RRD
- * WH
- * SHO
- * JHD
- * SRV
+
+- Stage 2
+- reviewers:
+ - KG
+ - RRD
+ - WH
+ - SHO
+ - JHD
+ - SRV
+

## Explicit Resource Management Update
+
Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-explicit-resource-management)
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-Tkfl3NHqg7QcpUoJcnQ?e=E2FsjF )

-RBN: I'm going to briefly go over some of where we are with the explicit Resource Management proposal. That's been a little bit of time since the last time I've had a chance to present this. And I'm going to take some time to review our, what the motivations for the proposal are and what the current status of the Proposal is. So the motivations for the resource management proposal were primarily based around a number of common, but diverse patterns within in the ecosystem around managing resources, that have either native handles or need some type of semantics around closing, or ending a resource. And those include things such as iterators, which already have a return, but it's another similar case, whatwg string readers, node js file handles.
+RBN: I'm going to briefly go over some of where we are with the explicit Resource Management proposal. It's been a little bit of time since the last time I've had a chance to present this, and I'm going to take some time to review what the motivations for the proposal are and what the current status of the proposal is.
So the motivations for the resource management proposal were primarily based around a number of common but diverse patterns in the ecosystem around managing resources that have either native handles or need some type of semantics around closing or ending a resource. And those include things such as iterators, which already have a return, but it's another similar case; WHATWG stream readers; and Node.js file handles.

RBN: So again, there are a number of different existing APIs that all have different approaches, and they have a number of footguns that are commonly associated with them. It's too easy to have a resource that's not closed and might leak, especially if it accesses some type of native handle, if you're working with something like Node bindings. There are also issues around ensuring that resources are disposed or closed in the correct order, and the current approach for managing these types of resources, if you do have multiple, is extremely complex and verbose. The example I have here shows some of the boilerplate you might have to use if you want to make sure that these resources are closed properly.
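The kind of nested try/finally boilerplate being referred to looks roughly like this, with the proposed `using const` form shown only in a comment since it is not implemented anywhere. `getResource` is a hypothetical stand-in:

```javascript
// Today: ensuring two resources are closed, in reverse order, takes
// nested try/finally blocks. getResource is a hypothetical stand-in.
function getResource(name) {
  return { name, close() { /* release a native handle here */ } };
}
function work() {
  const a = getResource("a");
  try {
    const b = getResource("b");
    try {
      return [a.name, b.name]; // ... use a and b ...
    } finally {
      b.close(); // b released first (reverse order)
    }
  } finally {
    a.close();
  }
}
// With the proposal (illustrative):
//   using const a = getResource("a");
//   using const b = getResource("b");
//   // ... both disposed automatically, in reverse order, at block exit
```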
@@ -107,7 +112,7 @@ RBN: One of the things I wanted to point out again, was the change between using

RBN: So, I'll talk a little bit here about the dispose semantics. Using const declarations are block scoped. This means that the value doesn't escape the block, and because the value doesn't escape the block, it gives us a very clear location as to where disposal happens: at the end of the block, when the value leaves the scope. One of the things that we've discussed several times before, as well, is that the expression can be null or undefined and no error is thrown. This is because writing branching logic where you only use the declaration if an expression exists is problematic: you essentially have to have an if statement around the entire chunk of code that might conditionally use the expression. By allowing null or undefined to pass through without error, you can have conditional operations without more complex branching strategies. If the result is not null or undefined, we attempt to read the dispose member of the result; if that value doesn't exist or is not callable, we will throw a TypeError. So the value must have a dispose method if it is not null or undefined. And we record the reference for this dispose method in the current lexical environment, in a stack, so that we can release these resources in reverse order.

-RBN: One of the things that we've also been looking into and investigating is how to deal with exceptions and aggregating exceptions from dispose and how to handle suppressing exceptions from dispose so that we're showing the expression for the exception rather raised by a user, and I say suppressed, but we're not actually suppressing them and I'll go into a little bit more detail about what that means.
So in this example, we have a using const declaration that takes an expression at (a) [on the slide]. when we evaluate this. We record the reference to the dispose method within the lexical environment somewhere within the user code an exception is thrown this could be a user throw, an exception or something else, but it's around within the body of the code before any dispose is evaluated, then as we exit the block, we still always evaluate the dispose methods that are provided. So, at (c) we will attempt to dispose the resources recorded at (a). So, all calls to dispose in this example are going to complete without error. And if all the dispose complete without error, then since the completion for this block is a throw completion and no errors occurred. We just propagate that throw so the error that's thrown is the error that the user threw.
+RBN: One of the things that we've also been looking into and investigating is how to deal with exceptions from dispose: aggregating exceptions from dispose, and handling "suppressed" exceptions from dispose so that we surface the exception raised by the user. And I say suppressed, but we're not actually suppressing them; I'll go into a little bit more detail about what that means. So in this example, we have a using const declaration that takes an expression at (a) [on the slide]. When we evaluate this, we record the reference to the dispose method within the lexical environment. Somewhere within the user code, an exception is thrown; this could be a user-thrown exception or something else, but it's thrown within the body of the code before any dispose is evaluated. Then, as we exit the block, we still always evaluate the dispose methods that are provided. So, at (c) we will attempt to dispose the resources recorded at (a). So, all calls to dispose in this example are going to complete without error.
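The bookkeeping described here can be approximated in userland. This sketch is a hypothetical helper, not the proposal's machinery: dispose callbacks are recorded in a stack and run in reverse order at exit, with errors from dispose gathered into an AggregateError:

```javascript
// Userland approximation of the described semantics: resources are
// registered on a stack; on block exit, dispose runs in reverse
// (LIFO) order, and errors thrown by dispose are collected into an
// AggregateError. (The real proposal's error-merging rules differ.)
function withDisposables(body) {
  const stack = [];
  const use = (resource) => {
    if (resource === null || resource === undefined) return resource; // pass through
    if (typeof resource.dispose !== "function") throw new TypeError("not disposable");
    stack.push(resource);
    return resource;
  };
  try {
    return body(use);
  } finally {
    const errors = [];
    while (stack.length > 0) {
      try {
        stack.pop().dispose(); // reverse order
      } catch (e) {
        errors.push(e);
      }
    }
    if (errors.length > 0) throw new AggregateError(errors, "dispose failed");
  }
}
```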
And if all the disposes complete without error, then, since the completion for this block is a throw completion and no errors occurred, we just propagate that throw, so the error that's thrown is the error that the user threw.

RBN: Now, a different example shows cases where we've got an expression that will throw in the dispose. And in this case, the user code will complete without error. And as we exit the block, since we're exiting with a normal completion, or some other completion that is not a throw completion, we will evaluate the disposed resources. And in this example, one of them will throw. Any exceptions thrown during dispose are stored as the errors of an aggregate exception and then thrown. So this allows you to investigate the exception that is thrown, and know that, if an aggregate exception is thrown, there were errors thrown from dispose.

@@ -125,7 +130,7 @@ RBN: When we had the discussion at the Feb 2020 meeting and we were discussing t

MAH: I think I just want to emphasize one thing. Currently all interleaving points are tied to either control flow statements or explicit await and with this proposal it would no longer be the case. And, in my opinion, comments are not code. We don't need to go much further right now. I just want to point out that this is novel and it does concern me.

-RBN: There's also a reply in the queue from Mark Miller. But or to go into clarifying question from Yulia first, I think.
+RBN: There's also a reply in the queue from Mark Miller. But I'll go to the clarifying question from Yulia first, I think.

YSV: Yeah, this is a quick one just to be completely clear. We are already in an async context, either a module or an async function, right?

@@ -135,7 +140,7 @@ YSV: OK, so there's an implicit, async interleaving point of (b), which is we ex

MM: Yeah, so I wanted to clarify the history here on Ron's point and also in search of new information from recent conversations with MAH.
So I had been attempting to ensure that all interleaving points in control flow are marked with either await or yield, and we were almost successful as a committee in ensuring that we caught many things that would have introduced hidden interleaving points and fixed them. And the one that Ron had raised, that he also just mentioned, which convinced me that I had missed one, and therefore that we were no longer in a state where we had an absolute invariant in the syntax, was the return, the early exit, from an async loop, a for-await loop: specifically, calling return on the iterator, and then you have this implicit await on the promise returned from that call while the code completes. And Ron is correct, that one surprised me. But in talking this over with MAH, MAH raised a very important point, which is: when you're reading a loop, you're very aware that you're in a looping context, and you're very aware that at the end of the body of the loop, you go back through the head. So there's this understanding that there is a back-through-the-head nature to reading code in a loop and understanding what happens when you fall off the bottom. I think that means that it's much easier to stay aware of the await in the for-await loop and not forget it and not miss it. The using await has two problems compared to that. One is that it's in the context of control flow that might be nothing more than simply an open curly and a close curly, for which there's nothing else about it that draws your eye back up to what happened earlier or what happened on introduction. And the other thing was the issue that MAH raised, which is that the earlier form of this proposal did have a syntactic marker at the beginning of the block, which is much easier to think to look for when you're trying to understand the meaning of the end of a block.
So for all of these reasons, I am no longer of the position that the case that we missed is an adequate precedent to justify the hidden interleaving point in the current state of this proposal.

-RBN: one thing that I have considered and I am willing to consider again is introducing a Keyword syntax, that would have to be on the same line at the end of the block to indicate that there is an interleaving point. Something like `await using` for example, there are certain problems. We have to make sure there's no line terminator in between so that we don't end up parsing this as a trying to parse something as multiple statements like a block followed by a new line, using await new line or the new line or await new line using could be three different statements. So if it's something that's necessary we could consider adding it again. I have been trying to avoid it by erring on the side of reducing some syntax burden as opposed to being more explicit about those interleaving points. Again. I'm not seeking stage advancement today. This is primarily an update so we can again address this on the issue tracker. And if we determine that if the committee generally determines that we do need to have this, or if it's they're strong enough opinion within those in the committee that this needs to be present for it to consider before it can be advanced to Stage 3.
+RBN: One thing that I have considered, and am willing to consider again, is introducing a keyword syntax that would have to be on the same line at the end of the block to indicate that there is an interleaving point, something like `await using`, for example. There are certain problems: we have to make sure there's no line terminator in between, so that we don't end up trying to parse something as multiple statements; a block followed by a new line, then `using`, a new line, then `await`, or `await`, a new line, then `using`, could be three different statements.
So if it's something that's necessary, we could consider adding it again. I have been trying to avoid it by erring on the side of reducing syntax burden, as opposed to being more explicit about those interleaving points. Again, I'm not seeking stage advancement today; this is primarily an update, so we can address this on the issue tracker. And if the committee generally determines that we do need to have this, or if there's a strong enough opinion within the committee, then this needs to be present before it can be advanced to Stage 3.

MM: Okay. Thank you.

@@ -147,7 +152,7 @@ RBN: So I'll continue on with my slides. Let's see. So one of the things I want

RBN: So, that kind of wraps up the syntax side of things, and I want to discuss the API designs for what we're looking to introduce as part of this proposal. The first one is a disposable container class. The purpose of the `Disposable` container class is to provide a very simple API for wrapping multiple disposable objects, and for adapting existing code that does not use these semantics to be usable with a using const declaration. This is essentially a very simple object. It has a static from method that takes an iterable of disposables, similar to Array.from; it returns a disposable container that ensures that the dispose method is called properly on each of these disposables when the block ends, and it throws an error during from if any of the values aren't disposable, or are null or undefined. The constructor version takes a callback that essentially becomes the method that's evaluated when you call Symbol.dispose, and it can be used to provide a building block for cleanup of resources, from third-party libraries or npm packages, etc., that don't match the dispose semantics. And the result object has just a single dispose method.
It is similar to the dispose method, and performs the necessary operations for disposal. As for prior art, there are a couple of different cases: a similar container with a different name exists in the .NET framework, and the VS Code editor uses a similar model for its disposal implementation, which is used both within the VS Code code base and in any extensions that anybody writes. So there's a fair amount of existing code that is very similar to this approach. An additional proposed API is the async Disposable; this is very similar to the Disposable class on the previous slide, but is designed around working with async disposables, and an async iterator in this case, and performs the same functions, but with asynchronous code.

-RBN:I have a couple examples here. I brought this up so that as we have more discussion, I can kind of point out some ways that certain some of these things are used. One example that I've referenced that is also present on the explainer is for example, transactional consistency. If you're working with distributed transaction across multiple sources or a single SQL database, or any other source that you could theoretically have a transaction across multiple services. Perform an operation asynchronously in this case, and I'm using the await const with the expectation that whatever you're going to do will most likely rely on an service operation as the transaction is distributed amongst all of the peers and then committed. And then if the entire block of code succeeds, you'll hit the last statements of the block which allows you to mark the transaction is successful. And then when it's disposed, if it was successful, it will commit. And if it was never it, succeeded equals true, then it will roll back the transaction. Another example, that is a motivating reason why I'm trying to get this proposal up to date and ready to eventually reach Stage 3. Was our respects three have been discussions around shared data, the shared structs.
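The container class described above might be sketched roughly like this. This is an illustrative shape only, using a plain `dispose` method instead of the proposal's symbol-keyed one, and the class name is a stand-in:

```javascript
// Illustrative sketch of the described Disposable container: a static
// from() that wraps an iterable of disposables, and a constructor
// that adapts an arbitrary cleanup callback. Not the proposal's API.
class DisposableSketch {
  #onDispose;
  constructor(onDispose) {
    if (typeof onDispose !== "function") throw new TypeError("callback required");
    this.#onDispose = onDispose;
  }
  static from(disposables) {
    const resources = Array.from(disposables, (d) => {
      if (d === null || d === undefined || typeof d.dispose !== "function") {
        throw new TypeError("value is not disposable");
      }
      return d;
    });
    // Dispose each wrapped resource, in reverse order, when disposed.
    return new DisposableSketch(() => {
      for (let i = resources.length - 1; i >= 0; i--) resources[i].dispose();
    });
  }
  dispose() {
    this.#onDispose();
  }
}
```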
little in that we will eventually need to have a way of managing synchronous access to shared data and there are different forms of could take. But one of the ways that I've been considering this is something similar to the locks and mutexes that are available in something like C++ using the RAII style. And in this case, we could take a lock. We don't necessarily need to use the variable for the lock. There's not much that we would use it for in this specific or simpler case. There might be more complex cases where you might want to access the lock to manually release States etcetera. But in case, we could use it for locking and allow shared State mutations, and then release the lock at the end of the block. And this very much is designed to match how the same operations could have would occur in something like C++. Another use case that is on the explainer is something like logging and tracing where you want to log the entry and exit of a method or for performance counters, their various other mechanisms that the disposed mechanism has been used in other languages as well. So this is a case of we might start an activity that logs the entry of the method and then as soon as you function, as soon as you exit, function would log the exit of the function. So this is another possible use case for this.
+RBN: I have a couple of examples here. I brought this up so that, as we have more discussion, I can point out some of the ways that some of these things are used. One example that I've referenced, which is also present in the explainer, is transactional consistency: working with a distributed transaction across multiple sources, or a single SQL database, or any other source where you could theoretically have a transaction across multiple services.
You perform an operation asynchronously in this case, and I'm using the await form with the expectation that whatever you're going to do will most likely rely on a service operation, as the transaction is distributed amongst all of the peers and then committed. And then if the entire block of code succeeds, you'll hit the last statement of the block, which allows you to mark the transaction as successful. And then when it's disposed, if it was successful, it will commit; and if it never hit `succeeded = true`, then it will roll back the transaction. Another example, which is a motivating reason why I'm trying to get this proposal up to date and ready to eventually reach Stage 3: with regard to Stage 3, there have been discussions around shared data, the shared structs, in that we will eventually need to have a way of managing synchronous access to shared data, and there are different forms that could take. But one of the ways that I've been considering this is something similar to the locks and mutexes that are available in something like C++, using the RAII style. And in this case, we could take a lock. We don't necessarily need to use the variable for the lock; there's not much that we would use it for in this specific or simpler case. There might be more complex cases where you might want to access the lock to manually release state, etc. But in this case, we could use it for locking, to allow shared state mutations and then release the lock at the end of the block. And this very much is designed to match how the same operations would occur in something like C++. Another use case that is in the explainer is something like logging and tracing, where you want to log the entry and exit of a method, or performance counters; various other mechanisms like the dispose mechanism have been used in other languages as well.
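The entry/exit logging pattern mentioned here can be sketched with the same disposable shape. Names below are hypothetical stand-ins, and the `using const` form appears only as a comment since it is unimplemented:

```javascript
// Illustrative sketch: a disposable "activity" that logs method entry
// when created and exit when disposed. Names are hypothetical.
const trace = [];
function startActivity(name) {
  trace.push(`enter ${name}`);
  return { dispose: () => trace.push(`exit ${name}`) };
}
function doWork() {
  const activity = startActivity("doWork");
  try {
    trace.push("working");
  } finally {
    activity.dispose(); // with the proposal, `using const` would do this
  }
}
```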
So this is a case where we might start an activity that logs the entry of the method, and then, as soon as you exit the function, it would log the exit of the function. So this is another possible use case for this.

RBN: And then finally, the status of the current proposal. I'm continuing as champion on this proposal. We currently have two Stage 2 reviewers, WH and YK. I haven't heard from YK in a while on this, so I'm not sure if I need to look for an additional or replacement reviewer. There is an explainer that explains the current state of the proposal and has examples you should review for the proposal, etc., and full specification text, which is under review. So I'll open this up to comments; then, at the end of the comments, I can bring up again that I'm looking for reviewers. So we can go back to the queue.

@@ -280,7 +285,7 @@ RBN: Is there a link to MAH's libraries? I'm going to put into Matrix because I'

MAH: I created [an issue](https://github.com/tc39/proposal-explicit-resource-management/issues/76#issue-1034723632) that has a link.

-RBN: Okay? Yeah. I mean, I've looked at this as well and consider the the for agai my concern has been that, it's essentially abusing iterators to manage in many cases a single resource, and it also doesn't give you the kind of flexibility you get with being able to chain multiple resources and deal with their Cleanup in the correct order. Instead. You have to create multiple blocks.
+RBN: Okay? Yeah. I mean, I've looked at this as well, and considered the for-await approach. My concern has been that it's essentially abusing iterators to manage, in many cases, a single resource, and it also doesn't give you the kind of flexibility you get from being able to chain multiple resources and deal with their cleanup in the correct order. Instead, you have to create multiple blocks.

MM: MAH's library addresses that specifically.

@@ -297,11 +302,13 @@ AKI: Thank you, Ron.
Thank you everyone And this was just an update, right? RBN: Yes. This is just an update. ### Conclusion/Resolution + - just an update - WH to continue to review - SYG new reviewer ## Change Array by Copy + Presenter: Ashley Claymore (ACE) - [proposal](https://github.com/tc39/proposal-change-array-by-copy) @@ -319,7 +326,7 @@ ACE: So repeating this slide again from last time. So the thing we're focusing o ACE: Instead, what we're doing to kind of reduce having such a large API addition is to just introduce 4 new methods, the rationale here being that a lot these things you can already do if you know the API well enough. Things like pop, you can do slice 0 to minus one, .push is `.concat` with the value in an array to avoid is concat spreadable. Shift is slice after the first index. Though we do want to have a non-mutating version of splice because there's no real equivalent way of doing this directly without creating a copy of the array and then calling splice on the copy, which you can't do for something like a tuple because it can't create a mutating form to temporarily do a splice unless you go into an array, then back into the Tuple. Unshift, there isn't a method form of doing unshift, like you can with the other kind of queue/stack things, but if you have the non mutating version of splice, then you can do an unshift using that we say, I'm from zero, from the beginning of the array, I'm not going to delete anything but I will add something in. -ACE: for copyWithin and fill, we see that these methods have really, really low usage. So we didn't feel like there would be much value in having non mutating forms of these. Fill you can almost get with map if you just ignore holes and copyWithin it seems to be all the documentation around copyWithin suggests use cases for like high-performance copying things around. It seems counter to, if you're creating copies of the entire array every time. So it felt okay to not have a non-mutating form of that. Reverse and sort. 
Sort gets a lot more use than reverse. ANd these are already linear operations. That's very common to want to sort an array without mutating the original. So we feel like things like sort, we would really miss not being able to sort a tuple. There is .set for typed arrays. This sets multiple things at once. We don't have a version of that. We do have this extra kind of odd, one out ‘withAt’, which isn't actually a kind of non-mutating form for method. I’ll explain that easier on the next slide. The withAt is that kind of the non-mutating form of direct index assignment. So if the array here was immutable, you can't just assign to an index. +ACE: for copyWithin and fill, we see that these methods have really, really low usage. So we didn't feel like there would be much value in having non mutating forms of these. Fill you can almost get with map if you just ignore holes and copyWithin it seems to be all the documentation around copyWithin suggests use cases for like high-performance copying things around. It seems counter to, if you're creating copies of the entire array every time. So it felt okay to not have a non-mutating form of that. Reverse and sort. Sort gets a lot more use than reverse. ANd these are already linear operations. That's very common to want to sort an array without mutating the original. So we feel like things like sort, we would really miss not being able to sort a tuple. There is .set for typed arrays. This sets multiple things at once. We don't have a version of that. We do have this extra kind of odd, one out ‘withAt’, which isn't actually a kind of non-mutating form for method. I’ll explain that easier on the next slide. The withAt is that kind of the non-mutating form of direct index assignment. So if the array here was immutable, you can't just assign to an index. ACE: So the big piece of work, we still think there is to do on this proposal is coming up with names. 
There is [an issue](https://github.com/tc39/proposal-change-array-by-copy/issues/10) to talk about these names and there's three suggestions that seem to be kind of talked about the most ["with", "copy", and "to"]. Though it's not like we're saying it's limited to these three. These just seem to be the kind of the frontrunners and `with` definitely being the least popular. ‘with’ is what the current proposal has, but we're not saying it's what we want. We're just waiting. We're not going to update the spec and update everything until we've made a decision, the fact that the current proposal uses ‘with’ is just a placeholder. That's not signaling a desired preferred naming. So, if anyone has strong opinions on the naming please do get involved. I'll post a link to the issue. @@ -330,16 +337,19 @@ ACE: We've got the spec text. There's a polyfill. They're the kind of help us te ACE: Lastly, when we got to Stage 2 last time, we didn't have time to ask for reviewers. I know JHD has already reached out and said he's interested, but it'd be great to get some more reviewers. Thank you. AKI: Do we have reviewers? The queue is empty. Could I hear some volunteers? SRV would like to be a reviewer. Do we have another reviewer? - + ACE: And similar to what Michael said earlier, it's got, this is fairly small because it doesn't add any new syntax. Just a few new methods that have, it's just the four methods. Eight, I guess you've got. You have to write them slightly differently for whether it's on array or on typedArray. JRL: Okay, I can review. ACE: Great. Thanks. Yeah, Thanks Justin. 
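The four additions ACE walked through can be approximated in userland today (a hedged sketch: the helper names below are placeholders, since the naming discussion is still open):

```javascript
// Non-mutating counterparts built on a copy. The real proposal would put
// these on Array.prototype (and TypedArray.prototype) under names that
// are still being decided.
const toReversed = (arr) => arr.slice().reverse();
const toSorted = (arr, compareFn) => arr.slice().sort(compareFn);
const toSpliced = (arr, start, deleteCount, ...items) => {
  const copy = arr.slice();
  copy.splice(start, deleteCount, ...items);
  return copy;
};
// "withAt": the non-mutating form of indexed assignment (arr[i] = v).
const withAt = (arr, index, value) => {
  const copy = arr.slice();
  copy[index] = value;
  return copy;
};

const original = [3, 1, 2];
console.log(toSorted(original)); // [ 1, 2, 3 ]
console.log(original);           // [ 3, 1, 2 ], untouched
```

Each helper copies first and mutates the copy, which is exactly the pattern the proposal wants to make unnecessary for tuples, where no mutable intermediate exists.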
+ ### Conclusion/Resolution + - JHD, SRV and JRL to review ## RegExp modifiers for Stage 1 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-regexp-modifiers) @@ -347,7 +357,7 @@ Presenter: Ron Buckton (RBN) RBN: Alright, so at the last meeting, I presented this overarching concept around some additional regex features that I was interested in investigating for adoption into the language. At the time it was kind of presented as feature parity to reach the number of features in other languages and my intent is rather that There's a lot of useful features in other languages that are fairly common in regular Expressions that have a lot of value that I'd like to see if we could eventually consider adopting. And I was asked by the committee to break this down into a number of different proposals, more focused on specific features rather than trying to have a design goal of feature parity since that's a little bit too broad of a goal. So, I've taken this and have broken down most of these into individual proposals although, there are some cross-cutting concerns between the proposals, which I'll discuss as they come up. -RBN: The first one I wanted to talk about is regular expression modifiers. For anyone not familiar with this. Okay. Let's get this. This was a slide from the previous slide deck, talking about some of the features we have added some of the new features that we've considered and some Places that we're behind on, at least as far as what features we support and I put together a site that has kind of a list of comparisons of various features and various engines. And what the level of support is this specifically modifiers, there motivated by trying to improve support within regular Expressions to to support scenarios that are used by web-based editors textmate grammars, or Fairly common or used in the u.s. Code, They're used. Even in Visual Studio. In cases. They're used in Eclipse. They're used in textmate. 
They're used in Adam. They're used on various websites, but there are limitations to our regular expression, grammar. That doesn't allow us to use these or parse these parse syntax that uses these types of grammars with in the JavaScript, regular Expressions and said we're often these environments will have to Shell out to a or will have to use native bindings for something like Oniguruma. And one of the most frequently used features within these texts, make grammars are modifiers and modifiers are very valuable in that they allow you to enable or disable a subset of the regular expression Flags within the pattern itself, things that you can't necessarily put into a regex string in a grammar file because it's a string, not a regular expression, which has context of the regex it self. Another motivating example is JSON configuration files that use regular Expressions stored as strings. This happens for things like electron Forge, webpack. configuration, often has fixed regular expression, stored as strings within a JSON file and no capability for augmenting that outside in the street itself. +RBN: The first one I wanted to talk about is regular expression modifiers. For anyone not familiar with this. Okay. Let's get this. This was a slide from the previous slide deck, talking about some of the features we have added some of the new features that we've considered and some Places that we're behind on, at least as far as what features we support and I put together a site that has kind of a list of comparisons of various features and various engines. And what the level of support is this specifically modifiers, there motivated by trying to improve support within regular Expressions to to support scenarios that are used by web-based editors textmate grammars, or Fairly common or used in the u.s. Code, They're used. Even in Visual Studio. In cases. They're used in Eclipse. They're used in textmate. They're used in Adam. 
They're used on various websites, but there are limitations to our regular expression, grammar. That doesn't allow us to use these or parse these parse syntax that uses these types of grammars with in the JavaScript, regular Expressions and said we're often these environments will have to Shell out to a or will have to use native bindings for something like Oniguruma. And one of the most frequently used features within these texts, make grammars are modifiers and modifiers are very valuable in that they allow you to enable or disable a subset of the regular expression Flags within the pattern itself, things that you can't necessarily put into a regex string in a grammar file because it's a string, not a regular expression, which has context of the regex it self. Another motivating example is JSON configuration files that use regular Expressions stored as strings. This happens for things like electron Forge, webpack. configuration, often has fixed regular expression, stored as strings within a JSON file and no capability for augmenting that outside in the street itself. RBN: What a modifier is. It's a special pattern within a regular expression, that enables or disables flags for either the entire expression, as in the first case, or within a sub expression, essentially inserting the modifiers to add or the modifiers to remove between the question and colon in a non capturing group. The syntax here shows kind of all the flags, but you might say `(?i)` to indicate that you're using ignore case throughout the rest of the pattern up until the end of the current alternative or the end of the pattern itself, or you might want to enable Unicode or disable Unicode mode matching within a sub Expressions, Etc. One of the values from modifiers. We could implement it today and not change any of our it wouldn't break any existing syntax. 
There are no collisions with existing syntax with this because both the syntax options described are currently illegal within a ecmascript regular expression. Some Flags can't be modified such so you couldn't change flags that affect specific matching Behavior such as the global sticky and has indices Flags. Cuz those are primarily designed around controlling where indexes occur. and how advancing works. And that's outside the scope of what you might use for handling things like multi-line single-spaced ignore case Unicode or X mode, which is another proposal. I'll discuss more later. And this is one of the most frequent most supported features that are outside of what's currently supported by ecmascript, Perl pcre. Boost reg ex, dotnet, Oniguruma, hyper scan, the ICU regular Expressions, which is what we're trying to emulate in other proposals around, unicode set notation Etc. So, there's a number of prior art references for this with, in multiple languages and all pretty much do the same thing. The main difference is What the flags themselves mean. @@ -367,9 +377,9 @@ WH: I suppose this for Stage 1. MF: I guess I would like to ask that we explore options for just having this capability with a subset of the flags. Waldemar expressed reservations around the Unicode flag in particular. I would like to see a path forward where we address each flag individually and justify them each individually. Instead of saying we're starting with all flags and reducing ones that we don't think we can make make sense. -RBN: I think that's a perfectly viable option. My intent here was showing the global sticky has-indices, flags are ones. I've already looked at and considered there's something that - controlling them won't be very useful. A, it doesn't affect the actual, how we would parse and evaluate a match. And the global and sticky flags at least are primarily based around how advancing within a regular expression works. 
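To make the modifier syntax concrete, here is a hedged sketch contrasting today's whole-pattern flags with the proposed scoped form; the `(?i:...)` pattern appears only in a comment, since it is not part of standard ECMAScript at the time of this discussion:

```javascript
// Today: the `i` flag is all-or-nothing for the whole pattern.
const wholePattern = /hello world/i;
console.log(wholePattern.test("HELLO WORLD")); // true, flag applies everywhere

// Proposed (not standard here): /(?i:hello) world/ would match
// "HELLO world" but not "hello WORLD", scoping ignore-case to one word.
// Today's workaround is to spell out the case variants manually:
const scoped = /[Hh][Ee][Ll][Ll][Oo] world/;
console.log(scoped.test("HELLO world")); // true
console.log(scoped.test("hello WORLD")); // false
```

The manual character-class workaround is exactly what grammar files stored as strings end up doing today, which is part of the motivation RBN describes.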
The Unicode one is one that I would like to be able to support because if we can't support it. There's certain cases where we'd like to be able to see it, but I do agree that we need to look at these on a case-by-case basis. +RBN: I think that's a perfectly viable option. My intent here was showing the global sticky has-indices, flags are ones. I've already looked at and considered there's something that - controlling them won't be very useful. A, it doesn't affect the actual, how we would parse and evaluate a match. And the global and sticky flags at least are primarily based around how advancing within a regular expression works. The Unicode one is one that I would like to be able to support because if we can't support it. There's certain cases where we'd like to be able to see it, but I do agree that we need to look at these on a case-by-case basis. -MF: Yeah, so in particular my specific request, is that when you do come back that we have motivating examples for each individual flag we would want to support with a feature like this. +MF: Yeah, so in particular my specific request, is that when you do come back that we have motivating examples for each individual flag we would want to support with a feature like this. MS: I'm not on the queue, but I agree with MF. Unicode I think is problematic because of the way that the grammar is written in the standard, that the unicode flag is basically passed down to the full syntax when we do the parsing. Multi-line also seems to be little bit dubious to me since it seems funny that you're going to have a string that let's say you start with not multi-line, but even put some multi-line stuff in it, and then you go back to multi-line. I'm thinking of beginning of line and end of line assertions become kind of weird in there. I mean, it's doable, but it seems like it's kind of a foot gun. @@ -380,9 +390,11 @@ RBN: it sounded like we have support for Stage 1. 
AKI: I would agree that it sounded like we have support for Stage 1. The queue is empty and you are just shy of time. ### Conclusion/Resolution -* Stage 1 + +- Stage 1 ## RegExp Conditionals for Stage 1 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-regexp-conditionals) @@ -392,15 +404,15 @@ RBN: All right, so conditionals are another feature in many regular expression e RBN: The basic idea between a conditional expression. is that you have a group that has a condition head. So, the question open paren, and then some condition close paren if the condition evaluates to true, and I can all go a little bit more into what that means in a moment, then it will match then it will attempt to match the yes pattern if it evaluates to false, then it will match the note pattern. You can't have more than one alternative in a condition. So you had a yes pattern or do pattern or something else, that would be a syntax error Alternatives. Would instead need to be grouped within a capturing or non capturing group inside that pattern. You can elide the no pattern if you don't want to mention anything for false and it's essentially as if you had an empty that matches nothing. This again, doesn't conflict with any existing syntax. This can be added to the regular expression syntax regardless of whether you're in Unicode mode or the new proposed mode for it's being discussed around the rig set, notation proposal. Because again, this is illegal syntax within any regular expression that conscripted a So a condition is one of the specific set of patterns so you can have a look ahead condition. So it's essentially true. If the positive look at matches, a look behind condition that is true. 
If a positive look behind matches it has the same restrictions for for look behind the here within the pattern that you would in a normal look behind in that, it has to be a fixed-length pattern, it can't have quantifiers like Star Plus quantifiers where you have could have an end Look Backwards, Backwards, look Then we have the a negative, look ahead pattern and a negative look behind for testing. You also have a way to test whether or not a specific back reference was matched. So you can check if you previously matched something you can check whether that match was successful or not, and you can also do back reference conditions by name. Many of the other engines, we've looked at have some additional features. Is that we might be considering for future proposals that I discussed in the last meeting. They're currently out of scope for this proposal because we're restricting the syntax of what a condition is. We have some room to move ahead in the future. So for the example of like a condition that uses a capturing group name because we're explicitly requiring you have the less than and greater than on the either side to match the syntax. We use for capturing groups, then that allows us to not conflict with possibly having defined for subroutines or are for recursion Etc. Etc. But again, these are currently out of scope for what I'm trying to propose in this, in this case. -RBN: Here some examples. Here's a case, where I might want to perform a conditional match that matches the first alternative, but only if it starts with two digits and a `-` and then matches the second alternative. Otherwise where to do this without this feature requires. Repeating this condition in both branches, one being positive, one being negative. This would be a lot of repetition without this. So again, this is intended to be a fairly small proposal, but it adds a lot of additional and very powerful capabilities to a regular expression. 
There's the explainer as listed and then again seeking Stage 1. So with that we can go to the queue and see if there's any feedback. +RBN: Here some examples. Here's a case, where I might want to perform a conditional match that matches the first alternative, but only if it starts with two digits and a `-` and then matches the second alternative. Otherwise where to do this without this feature requires. Repeating this condition in both branches, one being positive, one being negative. This would be a lot of repetition without this. So again, this is intended to be a fairly small proposal, but it adds a lot of additional and very powerful capabilities to a regular expression. There's the explainer as listed and then again seeking Stage 1. So with that we can go to the queue and see if there's any feedback. AKI: So the queue is not empty. Kevin? KG: Yeah, so this is one of those funny proposals where I would definitely use it if it were in the language, but I'm not a hundred percent convinced it makes sense to put it in the language. Regexes are already extremely complicated and I have found that breaking up my regexs so that I express more of the logic with the normal code, like if statements or whatever, makes my code more readable. I am aware that there is prior art in other languages for this sort of thing, but my experience of those other languages is that people really really struggle to read them and like I am one of the people who's best at regexes in like a lot of the code bases I review and I am still struggling to read them. So, I don't know. I don't object to Stage 1. I am hesitant about this proposal though because it makes it possible to write regexes which are much, much more complicated. And while that is sometimes useful, I'm not sure it's something that we actually want. -RBN: So I wanted I did mention before that. 
It's possible to emulate part of what I'm trying to provide here with conditional, with a conditional expression today using two Alternatives one with a say A positive look ahead the other with a negative look ahead and specifically the the look ahead case. Yeah, but like so it's possible possible to write that but then, that regular expression becomes significantly harder to read because you then having to know that you're looking at two Alternatives that are that are distinct from each other. That one is not possible versus the other Expressions, which again is common. In many other languages, makes it much easier to read the to read the expression. Know what's going on. +RBN: So I wanted I did mention before that. It's possible to emulate part of what I'm trying to provide here with conditional, with a conditional expression today using two Alternatives one with a say A positive look ahead the other with a negative look ahead and specifically the the look ahead case. Yeah, but like so it's possible possible to write that but then, that regular expression becomes significantly harder to read because you then having to know that you're looking at two Alternatives that are that are distinct from each other. That one is not possible versus the other Expressions, which again is common. In many other languages, makes it much easier to read the to read the expression. Know what's going on. -KG: I agree that a conditional expression is easier to read than having both the positive and negative look ahead. The question is the extent to which people would be using that trick versus expressing the thing they want to express in some other clearer way. In my experience almost no one uses the trick of having both the positive and negative look ahead. And instead they have an if statement and I can read the if statement and if there was this then maybe they would be using this and then people would not be able to read it. So yes, I agree it's possible to emulate this. 
I don't think that means that we should add it. +KG: I agree that a conditional expression is easier to read than having both the positive and negative look ahead. The question is the extent to which people would be using that trick versus expressing the thing they want to express in some other clearer way. In my experience almost no one uses the trick of having both the positive and negative look ahead. And instead they have an if statement and I can read the if statement and if there was this then maybe they would be using this and then people would not be able to read it. So yes, I agree it's possible to emulate this. I don't think that means that we should add it. RBN: my Counterpoint to that though, is that one of the motivating use cases for this are cases where we're bringing in a regular expression from a JSON configuration file or something. That doesn't allow you to represent a full regular expression, only a string that are very commonly used within the ecosystem today. textmate grammars. jest, configurations, Etc, where I might need a more complex pattern than what is currently viable. And I can't write an if statement because all I have is Strings numbers and brackets. @@ -410,7 +422,7 @@ RBN: My interest, or the reason I bring up that motivation, motivation is it's a KG: My response to that is that if you try to write your 8601 parsing as a single regular expression, I think your code is going to be a lot less readable than if you don't do that. And so I would like to encourage people to not do that. -AKI: I just look at the time and realize we have five minutes to get through there the whole queue. Is this tension resolved? +AKI: I just look at the time and realize we have five minutes to get through there the whole queue. Is this tension resolved? RBN: Is this a reason not to advance to Stage 1? @@ -432,7 +444,7 @@ RBN: Well, my intent is to match though. 
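The emulation RBN and KG are discussing, a conditional spelled as two alternatives guarded by paired positive and negative lookaheads, can be written today (a hedged sketch using a made-up pattern: accept a 2-digit/7-digit code when the input starts with two digits and a dash, otherwise a plain word):

```javascript
// Proposed conditional (not valid today): (?(?=\d\d-)\d\d-\d{7}|\w+)
// Today's emulation: repeat the condition, once positive, once negative.
const re = /^(?:(?=\d\d-)\d\d-\d{7}|(?!\d\d-)\w+)$/;
console.log(re.test("12-3456789")); // true: condition held, "yes" branch
console.log(re.test("widget"));     // true: condition failed, "no" branch
console.log(re.test("12-345"));     // false: condition held, body failed
```

The duplicated `\d\d-` is the readability cost KG and RBN are weighing: the condition must be stated twice, once as `(?=...)` and once as `(?!...)`.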
either way the idea of the goal would b WH: Yeah, the ones you have on this slide that are syntactic sugar for lookaheads or lookbehinds seem unproblematic. I’m less excited about those that reference numbers or names of capture groups. I’m really unexcited about subroutines and recursive regexes. -RBN: The capture group is one that we can't actually emulate using look behind or look ahead and the alternative because it would require basically looking behind the entire match before you to find some point where that was matched. +RBN: The capture group is one that we can't actually emulate using look behind or look ahead and the alternative because it would require basically looking behind the entire match before you to find some point where that was matched. WH: Not being able to emulate it is why I'm not excited about it. @@ -440,9 +452,9 @@ MF: So I feel like you've explained the feature, but Stage 1 is about understand RBN: I'm not clear on whether that's something that should be a reason to block Stage 1 stage. is that we want to consider it. If you're saying that we might want to consider in the future. That doesn't seem like a reason to block and I'm more than happy to look for additional real world examples of this. I know I've used it significantly in the past in dot.net and other languages, which is again, one of the reasons why I want to bring it to ecmascript is, it's a, it's a, it's been a thorn in many cases of regular sessions that I've had to write to write in JavaScript. -MF: I'm not saying that this feature is not motivated. I'm just saying that from what I've seen of it, I've not been convinced yet. Like I haven't been able to convince myself that it is well motivated. In particular, I would like to see comparisons to how we would do something today as an alternative, or if it's not possible today, it would be even stronger motivation. +MF: I'm not saying that this feature is not motivated. 
I'm just saying that from what I've seen of it, I've not been convinced yet. Like I haven't been able to convince myself that it is well motivated. In particular, I would like to see comparisons to how we would do something today as an alternative, or if it's not possible today, it would be even stronger motivation. -RBN: The cases that I had on the list of conditions such as only advancing. If a specific capture group has matched is something you can't do in a regular expression today. It's just not available at all. within the regular expression itself, +RBN: The cases that I had on the list of conditions such as only advancing. If a specific capture group has matched is something you can't do in a regular expression today. It's just not available at all. within the regular expression itself, MF: The entire language is available to us. Any code that would be possible as an alternative would be a good example. @@ -450,9 +462,9 @@ AKI: We are at time. I would say and I would really like to get MLS to be able t MLS: Yeah, so I'm also looking for motivating examples. The example you use here, if you got rid of the condition you just make this an alternation. It's actually more efficient. What you have here. We'll look at the first characters. And then the dash twice whereas if you eliminate that the alternation is actually quicker way of doing this. this. So, this is not a motivating example, in case, in my eyes. We don't need to make regular expressions turing-complete in my mind. I'm not going to stop Stage 1, but I have concerns about complexity. -RBN: I've made this comment before. Is that in other Repose and proposals. Contrived examples where I'm trying to showcase a feature often don't go well, to providing complete examples of motivation. So when you're talking about like if I remove the condition, this case, yes, the Alternatives would work but the first alternative would be first. I have to pair, whether it has two digits, a dash and then seven digits. 
So I'm having to go some length. So a much more if a regular expression was more complex than this. I'm having to go some level of depth in of scanning where I could have bailed after the first three characters, for example, +RBN: I've made this comment before in other repos and proposals: contrived examples where I'm trying to showcase a feature often don't lend themselves to providing complete examples of motivation. So when you're talking about removing the condition: in this case, yes, the alternatives would work, but the first alternative would run first. I have to parse whether it has two digits, a dash, and then seven digits, so I'm having to go to some length. If the regular expression were much more complex than this, I'd be going to some depth of scanning where I could have bailed after the first three characters, for example.

-MLS: you'll fail at the dash - if it's a second, 

+MLS: You'll fail at the dash, if it's the second character.

RBN: Yes, for this case, but for other cases that might not be true. It might be that I have 15 characters I have to match before I see a difference, and then I need to backtrack and try the alternative, which matches the same 15 characters but has a different branch, where I could have made a lookahead condition earlier on that said, you know, the first two characters are something slightly different; or I could have a lookahead that scans ahead to find something I need to make the decision, whether through certain types of matching and capture groups. So again, this is a somewhat contrived example, meant to show how the feature works much more than to show a degenerate case that would require it. But I can think of plenty of cases where this would be more efficient than doing just the initial scan.
so, I'm happy to as has been mentioned before, look for additional motivating examples to add to the explainer to bring back should we consider to advancing this to Stage 2? @@ -466,7 +478,7 @@ RBN: I do want to make one additional point that we've made comments about how i RBN: So, one thing I did want to go back to is and based on the comments that were just made was looking at the Stage 1 acceptance. What acceptance signifies is that we've expecting to vote time and examine the problem space and it sounds like what you're wanting me to do is not Advanced to Stage 1 so that I can devote time to examine the problem space, but that so I'm wondering if this still should be Stage 1, because it all it means is that we want to continue looking at it. I'm not sure blocking Stage 1 advancement so that I can go and do the things we're going to do in Stage 1 1 makes sense. -MF: Yeah. Yeah, I think that's fair. The way you describe it. I typically consider Stage 1 to be where we agreed that there exists a problem. Yeah, if that's the exact wording of the process document. +MF: Yeah. Yeah, I think that's fair. The way you describe it. I typically consider Stage 1 to be where we agreed that there exists a problem. Yeah, if that's the exact wording of the process document. RBN: Yeah, there'll be a dress, that I would say the only time we wouldn't Advance something stage. One is where we agree as a committee. It's something we definitely don't consider. @@ -480,7 +492,7 @@ RBN: my point. Well, there was that your your intention for blocking Stage 1 adv AKI: Does that not mean that we are that we are agreed that this is not going to be blocked from Stage 1. Am I understanding it correctly? -MF: I'll leave it to the chairs to figure that out. I've expressed my opinion. +MF: I'll leave it to the chairs to figure that out. I've expressed my opinion. RBN: Sorry Aki. @@ -496,21 +508,20 @@ RBN: That's fine. 
We can hold off on the stage advancement discussion till after

AKI: All right, sounds good. Thank you for your flexibility, Ron. You have several more proposals at this meeting still, so we will certainly have time with you to talk it through.

-
### Conclusion/Resolution
-not advancing right now, will plan to revisit later this meeting
-
+not advancing right now, will plan to revisit later this meeting

## String.cooked
+
Presenter: Hemanth HM (HHM)

- [proposal](https://github.com/bathos/proposal-string-cooked)
- [slides](https://docs.google.com/presentation/d/1Au8FP1xTuXb52d6kG1fxX5Cxl3J-02h3FAaq8tMEtn8/edit?usp=sharing)

-HHM: Here's an example. we're, we're passing the and a string, which has escaped any escape sequence. And we are just converting that to upper case. It would be could probably see such implementation in view of the open source code and probably folks would start switch to string dot raw. We implement same. And that would result in something like this. Probably they didn't didn't pay attention that this had the escape sequence. To fix that. We probably do string.raw and then have this object with attribute raw whose values are strings, and then do an upper case. And that I think, most of the developers would probably never think of having two raws to cook the string. So why just have string.cooked, but you could pass in those strings which might also have this escape sequence and it would work perfectly. And that's the proposal.
+HHM: Here's an example: we're passing a string which has an escape sequence, and we are just converting it to upper case. You could probably see such an implementation in open source code, and folks might switch to String.raw and implement the same; that would result in something like this, probably because they didn't pay attention that this had an escape sequence. To fix that,
we'd probably do String.raw and then have this object with attribute raw whose values are the strings, and then do the upper case. And I think most developers would probably never think of having two raws to cook the string. So why not just have String.cooked? You could pass in those strings, which might also have this escape sequence, and it would work perfectly. And that's the proposal.

-HHM: And it could be a default Behavior. It could be just a tag. You could have it here and it will work just like an untagged or it would also it would also give the same results if it were to be hosting. Not cope with escape sequence. The string like just like the untag templates.
+HHM: And it could be the default behavior, or it could be just a tag: you could have it here and it will work just like an untagged template. It would also give the same results, not coping with escape sequences in the string, just like untagged templates.

HHM: And it makes sense to have the same signature as raw for cooked. So, it could be named String.cooked, String.identity, String.interpolate, String.interleave, or maybe even zip.

@@ -520,13 +531,13 @@ HHM: In and what should, what should we do in case of invalid escapes should be

HHM: Maybe we could use iterator tools and then interleave values.

-HHM: And then this is the the energy of spec draft, what we have, which is nearly identical to String.raw, and it can be refactored to use shared steps. and so we have agreed on the motivation, explain are examples and we are exploring a few of the issues in the repo.
+HHM: And then this is the spec draft we have, which is nearly identical to String.raw, and it can be refactored to use shared steps. And so we have agreed on the motivation and explainer examples, and we are exploring a few of the issues in the repo.
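As a rough userland sketch of what HHM is describing (hypothetical: the built-in does not exist yet, and invalid-escape handling is still an open issue in the proposal), the "two raws to cook the string" trick can be wrapped up as a reusable tag:

```javascript
// Hypothetical userland sketch of the proposed String.cooked semantics.
// It reuses String.raw, but feeds it the cooked (escape-processed) strings
// as the `raw` property: the "two raws to cook the string" trick.
function cooked(strings, ...subs) {
  return String.raw({ raw: strings }, ...subs);
}

const viaCooked = cooked`1 + 1 = ${1 + 1}\n`;  // '\n' is a real newline here
const viaRaw = String.raw`1 + 1 = ${1 + 1}\n`; // '\n' stays as backslash-n
```

Like the proposed built-in, such a tag has the same signature as `String.raw`, so it can be used directly on a template literal.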
HHM: So, asking for Stage 1 for String.cooked, and if there are no questions or concerns, and if we could identify reviewers, it could also be Stage 2.

BT: All right. Got a couple items on the queue. Mark, you want to go first?

-MM: Yep. So first of all, I support this. First of all I want to point out that the language that template literals come from, the e language, where they are called quasi literals, did have an explicit template tag called quasi parser that exactly corresponded to the default Behavior. What you got without it. So I think that having it have the same signature as raw is good because it lets you use it directly as a template literal tag, and I think calling it cooked as good because of the contrast with raw. Maybe we can regret having called it raw but having called it raw cooked is the natural dual.
+MM: Yep. So first of all, I support this. I want to point out that the language that template literals come from, the E language, where they are called quasi literals, did have an explicit template tag called a quasi parser that corresponded exactly to the default behavior, what you got without a tag. So I think that having it have the same signature as raw is good because it lets you use it directly as a template literal tag, and I think calling it cooked is good because of the contrast with raw. Maybe we can regret having called it raw, but having called it raw, cooked is the natural dual.

SHO: Hi everybody. We just wanted to mention that on the Igalia side there was some concern about cooked as a name, since it falls basically into the case of being an English pun, which is, you know, great when you speak English, but maybe there would be a name that is not as dependent on that. So we just wanted to raise that. And then also Philip Chimento from Igalia would like to be a reviewer, and he asked me to mention that. So, I'll do both of them at once.

@@ -551,10 +562,12 @@ BT: All right. Thank you with that.
The queue is empty. HHM: So, we have Stage 1. ### Conclusion/Resolution + - Stage 1 - bikeshedding for the name to continue ## Bind-this operator for Stage 1 + Presenter: J. S. Choi (JSC) - [proposal](https://github.com/js-choi/proposal-bind-this) @@ -562,19 +575,19 @@ Presenter: J. S. Choi (JSC) JSC: I'm JSC, Indiana University. Thanks for listening. This is the bind-this operator. I'm going to go through this fairly quickly. Most of the stuff that's in the slides is also on the explainer. There's also a spec available. I am presenting this for Stage 1: whether it's worth investigating. This proposal is a simplified resurrection of an old Stage-0 bind operator. It's also an alternative to a Stage-1 proposal called extensions that was presented in November last year. So this is a rival to that and it's a resurrection of the old bind operator. For what it's worth at least one champion of the old bind operator proposal said just make a new proposal because of the baggage of the old proposal. -JSC: My point is twofold. Bind and call and especially call are common and clunky. So there are two slides here, common and clunky. Call is actually really common and I'm excluding transpilation. The dynamic `this` binding is a fundamental part of JavaScript design, its practice, and its semantics. It's really fundamental to the language and that means programmers need to change the `this` binding all the time, the receiver of a function. And they do this for a variety of reasons. We did a fairly thorough review of this, especially with `.call`. We manually went through a lot of results from the Gzemnid dataset, and found that people use `.call` for a lot of reasons and they use it a lot. They sometimes use the receiver as a context object, sometimes they want to protect methods from prototype pollution, sometimes they want to swap between two methods depending on some conditional. People use `.call` a lot. 
Actually, it surprised me: It's maybe one of the most commonly used methods in the entire language. I think that, of course, we all use console.log, maybe more often in one off-code. It's not going to show in Git-committed code as much. But like, seriously, it occurs more often than that. And this is excluding transpiled code. We did a pretty thorough review, at least for `.call`. We did a pretty thorough job. And by that I mean there was a volunteer who went through the first 10,000 lines of results from the Gzemnid dataset. (Thanks go to Scotty Jamison.) And where you can see [our methodology](https://github.com/tc39/proposal-bind-this#bind-and-call-are-very-common). You can reproduce it. We are excluding transpiled code emitted by Babel, Webpack, CoffeeScript, etc. Even with all that, people still use `.call` a lot.

+JSC: My point is twofold: bind and call, and especially call, are common and clunky. So there are two slides here, common and clunky. Call is actually really common, and I'm excluding transpilation. The dynamic `this` binding is a fundamental part of JavaScript design, its practice, and its semantics. It's really fundamental to the language, and that means programmers need to change the `this` binding, the receiver of a function, all the time. And they do this for a variety of reasons. We did a fairly thorough review of this, especially with `.call`. We manually went through a lot of results from the Gzemnid dataset, and found that people use `.call` for a lot of reasons and they use it a lot. They sometimes use the receiver as a context object, sometimes they want to protect methods from prototype pollution, sometimes they want to swap between two methods depending on some conditional. People use `.call` a lot. Actually, it surprised me: it's maybe one of the most commonly used methods in the entire language. I think that, of course, we all use console.log, maybe more often in one-off code. It's not going to show in Git-committed code as much.
But like, seriously, it occurs more often than that. And this is excluding transpiled code. We did a pretty thorough review, at least for `.call`. We did a pretty thorough job, and by that I mean there was a volunteer who went through the first 10,000 lines of results from the Gzemnid dataset. (Thanks go to Scotty Jamison.) You can see [our methodology](https://github.com/tc39/proposal-bind-this#bind-and-call-are-very-common); you can reproduce it. We are excluding transpiled code emitted by Babel, Webpack, CoffeeScript, etc. Even with all that, people still use `.call` a lot.

JSC: So we think this might be worth lubricating, because the second half is: it's clunky. These are really frequent, and they're also pretty clunky. As you know, we're used to writing methods in noun–verb–noun word order; English is a subject–verb–object language. We're used to writing `receiver.verb(arg)`. `.bind` and `.call` flip this natural word order around, and that makes it pretty clunky. Now you'd have `verb.call(receiver, arg)` instead of `receiver.verb(arg)`. So a bind-this operator would restore the word order back to the noun–verb–noun order. We've got a couple examples here. These are all real-world examples. The word order is just so much less clunky when you put the receiver first. So, again, we've done a pretty thorough job of checking for real-world cases. You can reproduce all of this. You can look at the dataset yourself. You can look at the [issue that Scotty Jamison made](https://github.com/js-choi/proposal-bind-this/issues/12); he put a lot of the manual review results there. I think this is all pretty robust.

-JSC: This is a really simple proposal. It's simpler than the older proposals and I'll get into more detailed comparisons in a bit. I call it the this-bind operator because by itself it binds and then if you put parentheses after it, like any function call, it turns into a call because it's indistinguishable from using `.call`.
The left-side precedence can be bikeshedded, and then the right side matches decorator syntax. We can bikeshed that too, but right now we're using decorator-like syntax. So it's an identifier or chain identifiers or a parenthesized expression and, hopefully, they'll make sense to people who also use decorators. Anyways, the big point is that when you put parentheses after it, it's indistinguishable from calling directly on the receiver. You don't have to allocate a bound function. It's literally indistinguishable. Also you can't mix it with `new`, at least without parentheses. You've got to be explicit about that. Also there's also a little thing where you can't mix on the right-hand side of optional chaining because it'd be weird if it switches grouping [when you change a `.` to ` ?.`]. +JSC: This is a really simple proposal. It's simpler than the older proposals and I'll get into more detailed comparisons in a bit. I call it the this-bind operator because by itself it binds and then if you put parentheses after it, like any function call, it turns into a call because it's indistinguishable from using `.call`. The left-side precedence can be bikeshedded, and then the right side matches decorator syntax. We can bikeshed that too, but right now we're using decorator-like syntax. So it's an identifier or chain identifiers or a parenthesized expression and, hopefully, they'll make sense to people who also use decorators. Anyways, the big point is that when you put parentheses after it, it's indistinguishable from calling directly on the receiver. You don't have to allocate a bound function. It's literally indistinguishable. Also you can't mix it with `new`, at least without parentheses. You've got to be explicit about that. Also there's also a little thing where you can't mix on the right-hand side of optional chaining because it'd be weird if it switches grouping [when you change a `.` to `?.`]. JSC: Let's see, just comparing and contrasting. 
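A rough sketch of how the proposed operator would relate to today's idioms, using `::` purely as a placeholder spelling (the proposed forms appear only in comments, since no engine implements them and the token is still being bikeshedded):

```javascript
// Approximate desugaring of the proposed operator (placeholder `::` spelling):
//
//   receiver::fn        ~  fn.bind(receiver)
//   receiver::fn(arg)   ~  fn.call(receiver, arg)   // no bound function allocated
//
// Today's equivalents, including the prototype-pollution-robust `.call`
// pattern that showed up heavily in the corpus review:
const { hasOwnProperty } = Object.prototype;
const { slice } = Array.prototype;

const obj = { a: 1 };
const hasA = hasOwnProperty.call(obj, 'a');   // proposed: obj::hasOwnProperty('a')

const arrayLike = { 0: 'x', 1: 'y', length: 2 };
const copy = slice.call(arrayLike);           // proposed: arrayLike::slice()
const sliceLater = slice.bind(arrayLike);     // proposed: arrayLike::slice
```

Either way, the receiver comes first, restoring the noun–verb word order discussed above.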
Like I said, I'm going to go through this fairly quickly. It's basically the same as the old bind operator proposal, except there's no unary form. There's no prefix form. We're not trying to solve method extraction here. We're assuming that any method you're using is already extracted somehow or you could just put it on the right hand side. Yes, that means that you would have to repeat the receiver if that receiver already contains that function, which is already what you have to do with `.bind`. We could always add more syntax later to solve that. RBN's partial function application proposal would also do that, but we'll get into any overlap with that in a bit, but it's basically the same. Otherwise we're trying to keep this really simple. I personally don't consider the repetition to be that big of a deal. JSC: The extensions proposal deserves a little more because extensions is a fairly ambitious proposal. So it's got a lot of stuff going on. I'll go through this quickly. Extensions use a special variable namespace; bind-this uses the ordinary namespace. The concern that extensions are trying to address is name collisions, but the tack that bind-this takes is that we're already solving this using ordinary naming conventions. Extensions has special semantics for accessors, that is, for property descriptors that have get and set methods. Bind-this does not try to solve that. Its point of view is that this isn't that very common and even when you do it's clearer to extract to functions rather than using property descriptors directly. Extensions have a polymorphic extraction syntax. Its application syntax is also polymorphic. It depends on whether the thing you're extracting from is a constructor or not. So it actually resembles the `import` syntax a bit. And if it's a constructor it extracts from the prototype; if it's not, then it's a static method and so it actually also applies the left-hand side as the first argument rather than `this` receiver. 
Bind-this doesn't have any special extraction syntax. It tells you to just use ordinary destructuring as usual, trying to be explicit about that. There's a corresponding polymorphic ternary syntax which also may support metaprogramming with a new symbol. Bind-this doesn't do that either. It would just have you insert into the right-hand side or just use a variable or whatever. And it's not polymorphic, again.

-JSC: Bind-this isn't redundant with the pipe operator, or it’s redundant with it insofar as member access is redundant with the pipe operator. The pipe operator is for generic un-nesting and linearization, and that includes function application as well as lots of other stuff. The scope of bind-this is small. The fact that bind-this and member access happen to linearize expressions is a side effect – a happy side effect – but their purposes are tightly coupled to the concepts of object membership and `this` binding and the concept of function receivers. And there is code that would arguably be more readable if you use both pipe and bind-this, like this example, from Chalk.
+JSC: Bind-this isn't redundant with the pipe operator, or it’s redundant with it only insofar as member access is redundant with the pipe operator. The pipe operator is for generic un-nesting and linearization, and that includes function application as well as lots of other stuff. The scope of bind-this is small. The fact that bind-this and member access happen to linearize expressions is a side effect – a happy side effect – but their purposes are tightly coupled to the concepts of object membership, `this` binding, and function receivers. And there is code that would arguably be more readable if you use both pipe and bind-this, like this example from Chalk.

-JSC: I would also argue that it's not redundant with partial function application either, that's RBN’s proposal. They are complementary; they are orthogonal and handle different use cases.
They overlap in one small way, and that's the case that I mentioned earlier: when you're trying to bind a function that is specifically owned by the same object that's going to be the receiver. And in that case, you can with bind-this, you would have to repeat the receiver like with `.bind`, and you would have to repeat the receiver – versus how with partial function application you do not have to. That is I think the only overlap. In contrast, partial function application does not address changing the receiver of an unbound function. That's bind-this's purpose. Partial function application and bind-this can also work together in the case when people are using `.bind` and more than one argument there. They're partially applying other arguments in case. They could theoretically be mixed, also.
+JSC: I would also argue that it's not redundant with partial function application either; that's RBN’s proposal. They are complementary; they are orthogonal and handle different use cases. They overlap in one small way, and that's the case that I mentioned earlier: when you're trying to bind a function that is specifically owned by the same object that's going to be the receiver. In that case, with bind-this, you would have to repeat the receiver, like with `.bind`, versus how with partial function application you do not have to. That is, I think, the only overlap. In contrast, partial function application does not address changing the receiver of an unbound function; that's bind-this's purpose. Partial function application and bind-this can also work together in the case when people are using `.bind` with more than one argument, partially applying the other arguments in that case. They could theoretically be mixed, also.

JSC: If the committee does not block Stage 1, then we can bikeshed a lot with regard to what we want the operator to look like. There's at least one person who doesn't like the `->` option.
There's lots of options. We can even use bracket notation now, though I think JHD doesn't like that, and probably others too. We can bikeshed it later; that's for Stage 1 and even Stage 2. That is it.

@@ -594,7 +607,7 @@ JHD: That is true, and as KG has pointed out in Matrix, there are probably a sin

JSC: What I want to add onto that is just returning to the point that whether or not we're talking about prototype pollution or whatever, I take your point that word order is solved by pipeline, but if you actually write a pipeline version, it's actually a lot longer than either old version.

-JSC: And I also would return to the fact that `.call` is quite common, and it is not just for a prototype pollution. Now, there is actually a lot of prototype-pollution protection happening. If you run the dataset, search through it, and look at the results, you would see that people are doing this a lot actually: people are calling intrinsic prototype methods on their own, using `.call` to do that a lot. There's also what I mentioned: swapping conditionally between methods. There's using the receiver as a context object. All of that is more common than that `.slice`, it’s more common than even `.push` occurrences. And this is excluding transpiled code. I didn't bother excluding transpiled code for `.slice` or `.push` or whatever.
+JSC: And I also would return to the fact that `.call` is quite common, and it is not just for prototype pollution. Now, there is actually a lot of prototype-pollution protection happening. If you run the dataset, search through it, and look at the results, you would see that people are doing this a lot actually: people are calling intrinsic prototype methods on their own, using `.call` to do that a lot. There's also what I mentioned: swapping conditionally between methods. There's using the receiver as a context object. All of that is more common than `.slice`; it’s more common than even `.push` occurrences.
And this is excluding transpiled code. I didn't bother excluding transpiled code for `.slice` or `.push` or whatever.

JSC: So the pipe operator would worsen the brevity a lot. It would worsen it from baseline. Word order is important. And the biggest thing of all is the sheer frequency of this. I think it's worth looking at whether this is worth optimizing, in terms of just how often people are using `.call`. And, again, you can reproduce these findings yourself. MM, do you want to talk more about that topic, or can we move on to the next on the queue?

@@ -608,7 +621,7 @@ JHX: I'm JHX and I'm the champion of the extensions proposal and I want to make

JHX: Could you go back to the data page? Yeah, I really appreciate that, because we can see the data. Actually, as I said, `.call` is the part that extensions and bind-this share. But the difference here is between bind and call: in this data, we can notice that `.bind` also has relatively large usage, but it may have some problems here because it [repeats the receiver]. Actually, this should be most of the use cases, and this is the part which, in the old bind operator, the prefix operator, has been removed here. So for the data here, I think there are subtle problems. The proposal is about the bind operator, but the actual use case is for call; for bind I can't say that there are no use cases, but it is very rare. Could you open the readme? They have some more examples about bind.

-JSC: I have only five minutes left so I want to move on to the next thing. What I would say would just be: First, I think that repeating the receiver is not that big of a deal. And in that sense that I would say, this operator does address it. It just addresses it a little more wordily, but still less wordily and with a better word order.
And then the current dot binding situation and also, as I think you might be hinting, the explainer does have some examples where people are dot binding, something to a function that is not already contained in the receipt here. I would like to move on to the next one, since we've got four minutes left. WH, go ahead.
+JSC: I have only five minutes left, so I want to move on to the next thing. What I would say would just be: first, I think that repeating the receiver is not that big of a deal, and in that sense I would say this operator does address it. It just addresses it a little more wordily than the current dot-binding situation, but still less wordily and with a better word order. And also, as I think you might be hinting, the explainer does have some examples where people are dot-binding something to a function that is not already contained in the receiver. I would like to move on to the next one, since we've got four minutes left. WH, go ahead.

WH: I couldn't figure out the precedence of the proposal either from the description or the spec. The example I gave is `a?.b::c.d`. That's ambiguous in the spec.

@@ -616,25 +629,25 @@ JSC: What I would expect is that the `a?.b` would be the left hand side, but we

JSC: JHX [posted another precedence question], same thing. If I could just gloss over that: I think that precedence can be a bikeshedding thing. If nobody blocks Stage 1, we could just move straight on to SYG.

-SYG: I do have a use case for `.call`. Not so much for `.bind`. I kind of agree with what has been said in the chat about the robust-code motivation being a pretty niche use case for bind. But for `.call`, I have a speculative future use case for code sharing, if we were to share code across threads. The problem is hard to solve because of closures, but top-level functions that don't close over anything are very nice. Seems a much more reasonable opportunity to share. What that means is you can't really ergonomically call them, syntactically with a custom receiver with this, it becomes very easy.
What that means is you can't really ergonomically call them, syntactically with a custom receiver with this, it becomes very easy. +SYG: I do have a use case for `.call`. Not so much for `.bind`. I kind of agree with what has been said in the chat about the robust-code motivation being a pretty niche use case for bind. But for `.call`, I have a speculative future use case for code sharing, if we were to share code across threads. The problem is hard to solve because of closures, but top-level functions that don't close over anything are very nice. Seems a much more reasonable opportunity to share. What that means is you can't really ergonomically call them, syntactically with a custom receiver with this, it becomes very easy. SYG: That said, this is a speculative future use case. Just wanted to note that I guess I don't think we should figure very heavily into whether we accept this now, but I also certainly have no concern with Stage 1. JSC: All right. Thank you very much. Next up is YSV. -YSV: I'll try to make this quick. So I want to push back a bit on folks who have been saying that word order is not important because we have an entire proposal dedicated to this: pipeline operator. In addition, there are a few folks here who are, perhaps, not following the research in this space, but there is research that word order programming assists learners and adopting a language quickly and also helps reduce misconceptions about what a piece of code does. +YSV: I'll try to make this quick. So I want to push back a bit on folks who have been saying that word order is not important because we have an entire proposal dedicated to this: pipeline operator. In addition, there are a few folks here who are, perhaps, not following the research in this space, but there is research that word order programming assists learners and adopting a language quickly and also helps reduce misconceptions about what a piece of code does. 
YSV: I also wanted to say that I very much appreciate the corpus analysis that you've done here. That's really, really great; I was impressed by it. And additionally, I want to echo a little bit of what MM said. I know that you've done a lot of work here to try to not step on any of the other proposals, but I do feel like we have four proposals tending in the same direction without a concrete problem statement. However, I think the problem statement in this proposal comes very close to something that might be productive. All of them are quite disparate, but the things that they're trying to do are similar: we're talking about pipeline, the extensions proposal, this proposal, and partial function application. So I'm curious about what will happen next. I do think that this deserves Stage 1. That's all.

JSC: Thank you, YSV. We've got one minute left. CZW, if you can be quick, and then I will ask for consensus. And I think RGN is echoing the last part of what YSV said.

-CZW: So I'm just wondering. How do we handle them? Two proposals, have similar motivations.
+CZW: So I'm just wondering: how do we handle two proposals that have similar motivations?

JSC: So this is a good question. It's kind of procedural. To be frank, I'm not sure. I think that if either rival proposal goes to Stage 2, then the other one can remain at Stage 1, but it is saying that the committee is preparing for Stage 2. We already had a situation kind of like this with the pipe proposal, with a bunch of competing rival proposals with different styles. Yeah, so anyways, JHD, if you could be really quick, and then I will ask for consensus.

JHD: We're not bound by any precedent, but we typically have competing Stage-1 proposals a lot, and usually the time that something moves to Stage 2 is when we withdraw all the things that proposal obviates.

-JSC: I think it would be cool if the process document made that formal. But yeah, I think that stuff like this has already happened.
+JSC: I think it would be cool if the process document made that formal. But yeah, I think that stuff like this has already happened.

JSC: I'm going to ask if anyone's blocking Stage 1. That's the time. All right, is that long enough? Because I'm out of time. If this proposal is not blocking the extensions one, I guess it's okay to move to Stage 1. Yeah, Stage 2 would block the other proposals moving to Stage 2, but at Stage 1 neither one is blocking the other. So is anyone blocking Stage 1? I'll give it twenty more seconds. Yeah, JHD is on the queue with explicit support. Any other explicit support, feel free to put on the queue or bring up as well.

@@ -643,10 +656,13 @@ RGN: I'm not blocking Stage 1, but I did want to echo MM and WH and YSV and spec

JSC: Please open an issue. I would say that the scope of this is fairly narrow. I mean, it's very frequent, but it's also narrow. It can cooperate at least with the pipe operator. With other stuff, maybe it's more arguable. But I do appreciate your point. Do we have any blockers for Stage 1? If not, I guess it's Stage 1.

BT: Yeah. I'm not hearing any blockers.
+
### Conclusion/Resolution
-* Stage 1
+
+- Stage 1

## Array Grouping for Stage 2
+
Presenter: Justin Ridgewell (JRL)

- [proposal](https://github.com/tc39/proposal-array-grouping)

JRL: I am bringing back array grouping. Just a motivating use case: I actually had to do this in Babel because old V8 didn't support stable sorting. It essentially allows you to bucket everything into little groups and then process the groups. The exact code here doesn't really matter; it's actual code that I wrote. With the discussion from last time, we have settled on the default method returning a prototype-less object. This is to avoid collisions with things that exist on the `Object.prototype`: if for some reason you named your group `hasOwnProperty`, then you won't have any issue here. It'll just be a blank object for you.
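A userland sketch of the default behavior described above (the name `groupBy` and the details come from the in-flight proposal and may still change):

```javascript
// Hypothetical userland sketch of the proposed grouping behavior.
function groupBy(items, callback) {
  // Prototype-less result object, so group names like 'hasOwnProperty'
  // cannot collide with anything on Object.prototype.
  const groups = Object.create(null);
  items.forEach((item, index) => {
    const key = `${callback(item, index)}`;
    (groups[key] ??= []).push(item);
  });
  return groups;
}

const byParity = groupBy([1, 2, 3, 4], (n) => (n % 2 ? 'odd' : 'even'));
// byParity has an 'odd' group [1, 3] and an 'even' group [2, 4]
```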
And there'd be a second method called `groupByMap` that would return a Map to you, so that you could use complex objects as your keys for whatever reason you have.

-JRL: One of the topics of discussion I want to bring up is whether this is actually useful for typed arrays? Every time I tried to use a typed array personally, I'm really only caring about uint8 and the bytes that it packs into if I were to write it to a file system or something like that. So I don't have a use case at the moment for trying to group things in a typed array. There might be one, but I don't have it. Whereas, when I'm using an array, I'm generally dealing with higher level concepts, I'm dealing with complex objects that could have properties that need to be sorted, etc. All kinds of weird things. I think there's a huge motivation for this to have to exist on array, but I'm not certain about type arrays yet.
+JRL: One of the topics of discussion I want to bring up is whether this is actually useful for typed arrays. Every time I've tried to use a typed array personally, I'm really only caring about uint8 and the bytes that it packs into if I were to write it to a file system or something like that. So I don't have a use case at the moment for trying to group things in a typed array. There might be one, but I don't have it. Whereas when I'm using an array, I'm generally dealing with higher-level concepts; I'm dealing with complex objects that could have properties that need to be sorted, etc. All kinds of weird things. I think there's a huge motivation for this to exist on Array, but I'm not certain about typed arrays yet.

JRL: The second point of discussion is how we treat holes in the arrays. Currently it treats holes as the undefined value. So you're going to get out an undefined, and it'll call the callback with undefined.
We could change it to option b, where it checks presence before trying to get the value, but I think we just need some kind of justification for why it should change. And I think the other ES6+ methods are slowly moving towards treating holes as just undefined generally, so I think the current way is the correct way.

@@ -678,22 +694,26 @@ MM: I support undefined.

JRL: Okay. So, unless there's more topic items in there, I think we can ask for Stage 2 for groupBy.

-MM: I support. 
+MM: I support.

YSV: I also support.

JRL: Okay. I was actually expecting a lot more debate. So this is really nice.

BT: OK, that sounds like we have consensus for Stage 2. Thank you.
+
### Conclusion/Resolution
-* Stage 2
-* will have groupBy and groupByMap
-* will not use species
-* will not be present on TAs
-* will do an unconditional `get` for holes (= undefined, unless someone has put a numeric index on the prototype)
+
+- Stage 2
+- will have groupBy and groupByMap
+- will not use species
+- will not be present on TAs
+- will do an unconditional `get` for holes (= undefined, unless someone has put a numeric index on the prototype)
+
- Reviewers: MF JHD SRV

## Normative PRs for Temporal
+
Presenters: Justin Grant (JGT) and Philip Chimento (PFC)

- [proposal](https://github.com/tc39/proposal-temporal)

@@ -705,19 +725,19 @@ PFC: Alright, to start off with the adjustments. The first one up is one that I

PFC: Another change that we'd like to make concerns strings. This one, exceptionally, originated from within the champions group, when we found out that there are databases that sometimes attach local timezone semantics to ISO strings. And so, if you are deserializing a plain type from a string with a Z UTC designator in it, and you're coming from one of these databases, you may find yourself with a nasty off-by-one-day bug. So what we're proposing is that strings with a Z in them only specify exact time and not wall time as expressed by the Plain types.
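The off-by-one-day hazard PFC mentions is reproducible today with plain `Date` (an illustrative sketch; no Temporal API is assumed to be available):

```javascript
// An ISO string with a "Z" designator names an exact instant, not a
// wall-clock date. Reading it back in a zone behind UTC shifts the
// calendar day -- the off-by-one-day bug described above.
const instant = new Date("2021-04-01T00:00Z"); // midnight UTC, April 1

// The same instant viewed as a wall-clock date in New York (UTC-4 then):
const wallDate = instant.toLocaleDateString("en-CA", {
  timeZone: "America/New_York",
});
console.log(wallDate); // "2021-03-31" -- the previous calendar day
```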
You can still do what you could previously do, just with a little bit more explicit workaround.

-PFC: We'd like to bring the constructor of Temporal.Duration in line with the other ways to create a duration. Because it's possible to create it from a string with say, 1.1 hours. We already disallowed this when creating it from a property bag because non-integer numbers are potentially not exact. Before, the Duration constructor would silently drop the fractional part of numbers that were not integers. We'd like to change that to be in line with the other ways to create a Duration using Temporal.Duration.from(). So, passing a non-integer number to the constructor should throw. 
+PFC: We'd like to bring the constructor of Temporal.Duration in line with the other ways to create a duration. Because it's possible to create it from a string with say, 1.1 hours. We already disallowed this when creating it from a property bag because non-integer numbers are potentially not exact. Before, the Duration constructor would silently drop the fractional part of numbers that were not integers. We'd like to change that to be in line with the other ways to create a Duration using Temporal.Duration.from(). So, passing a non-integer number to the constructor should throw.

PFC: This next one was a request from FYT, who is implementing the proposal in V8, to simplify things slightly. Several operations have a relativeTo option which previously could be a Temporal.PlainDateTime or Temporal.ZonedDateTime. But if it was a PlainDateTime, the time component was never used, and this led to the potential observable creation of extra objects, because a Temporal.PlainDate was needed to pass to some calendar operations. So this is a small optimization. Here's an example of what might change observably.

-PFC: We have another fix for an inconsistency in the order of observable operations, when calling Temporal.ZonedDateTime.prototype.with().
This is another small optimization that helps implementers. Here's a code example of what would observably change. It gets rid of some observable method calls on calendar methods. +PFC: We have another fix for an inconsistency in the order of observable operations, when calling Temporal.ZonedDateTime.prototype.with(). This is another small optimization that helps implementers. Here's a code example of what would observably change. It gets rid of some observable method calls on calendar methods. -PFC: Those were the adjustments where we actually change the way that the proposal works in some way. Now, there are a couple of bugs to run through. So, here was a mistake where we had unintentionally written the spec text such that property bags used to represent a Temporal.PlainTime unintentionally had to have all six properties: hour, minute, second, millisecond, microsecond, and nanosecond. So that code like the above actually wouldn't work if it was implemented exactly according to the spec text. Obviously we'd like to fix that. (next slide) We had an off-by-one string indexing error in the spec text with parsing Duration strings with fractional parts in them. (next slide) We made another off-by-one string indexing error when parsing time zone UTC offsets with fractional parts in them. (next slide) Another thing while we're on the subject of time zone offsets is this: yeah, we forgot an `abs()`. So things got an extra minus sign in the string when they weren't supposed to. (next slide) We have another bug in Duration string serialization, where we didn't account for zero decimal digits being requested. So this changes the algorithm to output what was intended here. (next slide) We had an accidental repeated line in one of the spec text algorithms that sadly had a large effect. 
(next slide) At one point in the history of the proposal, we had some fallbacks like this where you would fall back to a built-in method if you set a property shadowing a prototype method, that was undefined. We removed those a long time ago, but this remained unintentionally. It's kind of harmless, but we got the feedback that it wastes memory in implementations, so we'd like to remove it. (next slide) There is a mistake in the formal grammar for ISO strings: an ambiguity, which we're correcting. (next slide) There is another typo that turned out to be normative if you implement it the way it was typo'd. +PFC: Those were the adjustments where we actually change the way that the proposal works in some way. Now, there are a couple of bugs to run through. So, here was a mistake where we had unintentionally written the spec text such that property bags used to represent a Temporal.PlainTime unintentionally had to have all six properties: hour, minute, second, millisecond, microsecond, and nanosecond. So that code like the above actually wouldn't work if it was implemented exactly according to the spec text. Obviously we'd like to fix that. (next slide) We had an off-by-one string indexing error in the spec text with parsing Duration strings with fractional parts in them. (next slide) We made another off-by-one string indexing error when parsing time zone UTC offsets with fractional parts in them. (next slide) Another thing while we're on the subject of time zone offsets is this: yeah, we forgot an `abs()`. So things got an extra minus sign in the string when they weren't supposed to. (next slide) We have another bug in Duration string serialization, where we didn't account for zero decimal digits being requested. So this changes the algorithm to output what was intended here. (next slide) We had an accidental repeated line in one of the spec text algorithms that sadly had a large effect. 
(next slide) At one point in the history of the proposal, we had some fallbacks like this where you would fall back to a built-in method if you set a property shadowing a prototype method, that was undefined. We removed those a long time ago, but this remained unintentionally. It's kind of harmless, but we got the feedback that it wastes memory in implementations, so we'd like to remove it. (next slide) There is a mistake in the formal grammar for ISO strings: an ambiguity, which we're correcting. (next slide) There is another typo that turned out to be normative if you implement it the way it was typo'd. PFC: And then, we have a couple of bugs we discovered in the run-up to the plenary after the advancement deadline. I know this is not stage advancement, but as a courtesy we'd like to note that the following three slides are four things that were added to this presentation after the advancement deadline. So if you feel like you need more time to study it because it wasn't available 10 days in advance, then no hard feelings. That said, this one was caused by two arguments reversed in the order in the spec text, and so the since() method of Temporal.PlainDateTime and Temporal.PlainTime would produce the wrong answer if you had a certain setting of largestUnit. This one was not intended, because we wrote the polyfill and the docs and the tests to assume the other way, but these arguments were flipped in the spec text. (next slide) Another bug that I think came out of that, somebody spotted it while looking at the previous bug, was that you'll get the wrong sign in some cases out of PlainTime.prototype.since() and until(), because we need to multiply something by the sign. (next slide) And then from last time you might remember, there was an item where we changed things to use the remainder operation instead of modulo. There's confusion because modulo in the spec text is not defined the same as the `%` operator in the language. 
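For readers unfamiliar with the distinction PFC refers to: the spec's *modulo* yields a result whose sign follows the divisor, while the language's `%` remainder follows the dividend. A quick illustration (not the actual spec algorithm):

```javascript
// Spec-style modulo: result has the same sign as the divisor y.
const modulo = (x, y) => ((x % y) + y) % y;

console.log(-5 % 3);              // -2 (JS remainder follows the dividend)
console.log(modulo(-5, 3));       //  1 (spec modulo is non-negative for y > 0)
console.log(5 % 3, modulo(5, 3)); // 2 2 (they agree when x is non-negative)
```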
Anyway, those were mostly fixed but there is one straggler which we discovered at the last minute. So, I'd like to ask for consensus on those normative pull requests. Hopefully, with the three that we added later, but if necessary without. If people have questions about any of these items, I'm happy to answer those first. I can't see the queue while I'm sharing my slides, though.

BT: The queue is presently empty, but we'll give it a few seconds here for folks to put in their input. The benefit of speaking so late is that everybody's tired.

-YSV: I'm a little late on this because I was thinking about something else, but I believe you discussed options bags, the work that you were doing around options bags? 
+YSV: I'm a little late on this because I was thinking about something else, but I believe you discussed options bags, the work that you were doing around options bags?

PFC: Yeah, that'll be after this. That's going to be the discussion item that JGT will present.

@@ -731,17 +751,17 @@ JGT: That sounds great. Thanks PFC. And thanks everybody for hanging in after a

JGT: So we're going to talk about options bags right now. And the context here is we thought we had a PR that was ready to go. And as is sometimes the case, it turns out it wasn't ready to go. So first, before I get into the PR itself, let me provide some context. The use case that we're concerned about here is when you have an interconnected set of methods in a larger API set like Temporal. It's highly likely that you're going to have options that are optional in some contexts but required in others. This is not unique to Temporal, right? Imagine a hypothetical file API, where you can create the file with no metadata required. But if you have a setMetadata method, then of course you're going to require some metadata. Because what's the point of calling a setMetadata method without metadata?
And so that's another kind of example of this problem of where you have the same shape of arguments of positional arguments, which in ECMAScript is done via options bag where one is required in one place and optional in another. So in the Temporal case, here's one example, where you can call until() and since() to measure the duration between one PlainDateTime or whatnot and another, and you can call it with no arguments, right? Which essentially means don't round the result at all, or you can call it with some arguments that do change the units that come back in the results. Let's say the unrounded version gives me the PlainDateTime, the commute, the duration, and the duration would come back would be (because it's October 27th) it would be 10 months and 27 days and 6 hours and 13 minutes all the way down to nanoseconds. Or you could decide, you know what, I don't really care about that. Nanoseconds, or seconds, or hours. So I'm going to call this with the `smallestUnit: "day"` option or I could add other options on that, right? I can control the largest unit. So let's say, I want to know the number of hours since New Year's I can call that as well. And so in almost all cases in these, these options are used in a variety of places around Temporal. And in almost all of those, the options, as you'd expect from the name, are optional, right? But there are a few cases, like specifically the round() method, where that's not the case. And so the reasoning for that is sort of obvious, that if you're calling PlainDateTime.since() the default is clear, which is don't do any rounding. But if you're calling a round() method, then it doesn't make sense to call the round() method with no parameters, because it doesn't do anything, right? You need to tell it what to round to. And so therefore the parameter is required in that case. And so the same shape of these objects is shared across these methods. 
The only difference between them is which ones are required and which ones aren't. -JGT: That's the context. And we've been going back and forth on this with JHD for a few months now to try to figure out, is there something that (??)? Because JHD's concern is fairly reasonable, which optional arguments should be optional and required options should have a different shape to distinguish them from optional ones. And so there's an inherent trade-off here between that that line of thinking and trying to be consistent across different different methods of the same set. Through that back and forth and, you know, thanks again to JHD for persisting with us as we try to work out a compromise, that would work for what works well, and I think we've actually found that compromise and we thought it would be good enough. It wasn't quite there yet, but we did want to share what we've come up with that I think meets everybody's needs as far as from the Temporal champions' standpoint it maintains the ability to use options in different places. But from JHD's standpoint, it makes it less. It does vary the shape between the required and the optional case to clarify that. So we could go to the next slide. I'll show you what we came up with. This pattern should be familiar to folks if you're if you used Webpack config or jQuery or any any one of many userland libraries, that tend to have strings stand in for more complicated objects. And so that's what we're doing here. The idea is we would extend the API here in the required argument case to allow a literal string to stand in for a property bag. And so, in this case, I could call duration.round() and I could call it with a literal string of `"seconds"`, which means round this duration to the second, or I could call `.round({ smallestUnit: "seconds" })`, which does the same thing. Or I could also reuse the same options bag that I would have used in until() and since() and use that in duration.round(). 
And so this pattern, I think kind of addresses those needs as I mentioned before, what's nice is it's a non-breaking change, so it doesn't interfere with how tests or other folks are going along. And it allows this and it actually makes the API more ergonomic which is why when I showed this to Temporal champions, they were like, yes. Yes, let's do this. This looks better. So we're pretty happy this at this point. There is one open issue that I wanted to discuss that we haven't yet reached a consensus opinion, but, you know, so why don't we go to the next actually? +JGT: That's the context. And we've been going back and forth on this with JHD for a few months now to try to figure out, is there something that (??)? Because JHD's concern is fairly reasonable, which optional arguments should be optional and required options should have a different shape to distinguish them from optional ones. And so there's an inherent trade-off here between that that line of thinking and trying to be consistent across different different methods of the same set. Through that back and forth and, you know, thanks again to JHD for persisting with us as we try to work out a compromise, that would work for what works well, and I think we've actually found that compromise and we thought it would be good enough. It wasn't quite there yet, but we did want to share what we've come up with that I think meets everybody's needs as far as from the Temporal champions' standpoint it maintains the ability to use options in different places. But from JHD's standpoint, it makes it less. It does vary the shape between the required and the optional case to clarify that. So we could go to the next slide. I'll show you what we came up with. This pattern should be familiar to folks if you're if you used Webpack config or jQuery or any any one of many userland libraries, that tend to have strings stand in for more complicated objects. And so that's what we're doing here. 
The idea is we would extend the API here in the required argument case to allow a literal string to stand in for a property bag. And so, in this case, I could call duration.round() and I could call it with a literal string of `"seconds"`, which means round this duration to the second, or I could call `.round({ smallestUnit: "seconds" })`, which does the same thing. Or I could also reuse the same options bag that I would have used in until() and since() and use that in duration.round(). And so this pattern, I think, kind of addresses those needs. As I mentioned before, what's nice is it's a non-breaking change, so it doesn't interfere with how tests or other folks are going along. And it actually makes the API more ergonomic, which is why when I showed this to the Temporal champions, they were like, yes. Yes, let's do this. This looks better. So we're pretty happy with this at this point. There is one open issue that I wanted to discuss where we haven't yet reached a consensus opinion, so why don't we go to the next slide, actually?

JGT: So the open issue is: of all the methods where this pattern would be applied in Temporal, there are just six or seven of them, and there's one method that works differently that has this issue. So the context around the method is when you have a duration. Durations do rounding in a slightly different way from the way the other round() methods work, and the reasoning for that is, let's say you have a duration that is three years, two hours and 26 seconds. And so what you want to do is first of all, get rid of anything smaller than days, so you can say `duration.round({ smallestUnit: "days" })` and it takes off the time. But you can also do it on the top end. You could say, well, I have a duration of two years and I really want months in my results.
So you can say `duration.round({ largestUnit: "months" })` and you end up with a 24-month duration as the output. So those two use cases exist that don't exist in a PlainDateTime, because there's no such thing as a PlainDateTime with no years in it; it's not a PlainDateTime anymore. And so we chose to make the shape such that you can provide a smallestUnit or a largestUnit or both, but you can't provide neither of them, because that's a no-op and it doesn't make sense. It's clearly a programmer bug, so we chose to throw in that case. So the question is, if we move to add the literal string as a synonym for the smallestUnit property, is the smallestUnit property always required when we provide the object form for this method? There are two ways of looking at it. The first is: yes, it always must be required, because if it's a required property, it should be required everywhere. The other point of view is, well, there are going to be some cases where that might not be the best decision. So in the Temporal case, you could do one or the other or both, but not zero. You can have an example of mutually exclusive properties, where you can have one or the other but not both. You can have a case where a primitive aggregates multiple properties. Like in internationalisation cases, you could have a locale string that contains the calendar, so the string (??) corresponds to two properties when you move that out into the object form (??). So the way I look at this is there are two mental models you could use to think about what the literal string means, right? One mental model is that the literal string is a one-to-one projection of the string into a property. And of course, because it's one-to-one, if you have the literal and you have the property, they both need to have the same semantics as far as whether they are required.
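The argument shape JGT describes for `duration.round()` (a bare string as shorthand for `smallestUnit`, and at least one of the two units required in the object form) might be normalized roughly like this. `normalizeRoundOptions` is an invented helper name for illustration, not actual spec text:

```javascript
// Hypothetical normalization of round()'s argument, per the compromise:
// a bare string is shorthand for { smallestUnit }, and the object form
// must name at least one of the two units (neither would be a no-op,
// which the discussion treats as a programmer bug).
function normalizeRoundOptions(arg) {
  if (typeof arg === "string") return { smallestUnit: arg };
  const { smallestUnit, largestUnit } = arg ?? {};
  if (smallestUnit === undefined && largestUnit === undefined) {
    throw new RangeError("round() needs smallestUnit and/or largestUnit");
  }
  return { smallestUnit, largestUnit };
}

console.log(normalizeRoundOptions("seconds").smallestUnit); // "seconds"
console.log(normalizeRoundOptions({ largestUnit: "months" }).largestUnit); // "months"
// normalizeRoundOptions({}) would throw a RangeError
```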
The other mental model, is that, well, the literal string is an ergonomic shortcut for a prefilled checked (??). And that ergonomic shortcut could correspond to one property, could correspond to multiple properties or there could be other valid shapes of that object. That wouldn't be what the literal string projects at all. And so I think it's safe to say that JHD's point of view is that must is the right answer here. The Temporal champions are favorable to the should option because we see the flexibility being helpful in in some cases, but but honestly, this is one method and kind of a corner of the API. I don't think anybody would say don't ship the whole Temporal proposal because we can't, you know, can't have a perfect version of this method. I think that the Temporal champions are pretty flexible, that our goal is, we want to ship this thing. And so we've been working on it for a long time and we really would love to get some feedback from the committee. Is there a consensus opinion? Both about this, this open issue, as well as the the pattern overall. That way, when we come back in December, we're more confident that there's not going to be objections we haven't heard already. So with that thanks again for your attention. I know it's a long day and I'll open it up. JHD: I just wanted to clarify my point of view, which is same as my topic: when there's method where it makes sense that there is one required thing, I actually really, really like this compromise. It is very common in the ecosystem. eslint rules often will take a string as config or you can replace that with an object that the string is default for. Node's new “exports” feature in package.json, you specify a string in any place. You can put a string, you can put a more complex object if you like, the string is always an equivalent for a default more-complex object. So I think that makes perfect sense. My point of view is not that all of the Temporal methods here should have one required thing. 
For this `duration.round()` method in particular, I accept the logic of why it makes sense that one of the things is required at a minimum, and I don't think that we should force one of them to always be required and reduce the intuitiveness of the use cases of the method just for some principle. But given that one or the other is required, it seems really weird to me to have an options bag where, you know, one or the other is required. That's a complex mental model to reason about. And it also seems weird to me to just arbitrarily pick one of them as the string value, and I don't have a solution there.

-JGT: And just to quickly respond, the main reason why we chose that one to be the default is because the other round() methods and Temporal use that as the default. And so we felt and also it's from what we understand it's most likely to be the most common use of this. So the largestUnit is kind of an uncommon case that we weren't as concerned about. 
+JGT: And just to quickly respond, the main reason why we chose that one to be the default is because the other round() methods in Temporal use that as the default. And also, from what we understand, it's likely to be the most common use of this. So largestUnit is kind of an uncommon case that we weren't as concerned about.

JHD: That reasoning also makes sense. It still results in something that I think is weird.

-WH: I am very uncomfortable with the `round` example. If you don't specify a unit of rounding, then the clear thing to do is to not alter the value. It's like the notion of the number zero. If you sum up the elements of an array of integers — it's not a bug to pass an empty array to a function which gives you a total of the elements of the array. It just gives you the additive identity element. So for an option bag which doesn't have a smallest unit, you should just get the identity for rounding, which is to return the value unchanged.
That will solve a lot of problems here. I'm fine with retaining the string API variant for convenience. That should allow option bags, which do not have a `smallestUnit` to be passed to `duration.round`, and `duration.round` can still do interesting things when it gets a `largestUnit`.
+WH: I am very uncomfortable with the `round` example. If you don't specify a unit of rounding, then the clear thing to do is to not alter the value. It's like the notion of the number zero. If you sum up the elements of an array of integers — it's not a bug to pass an empty array to a function which gives you a total of the elements of the array. It just gives you the additive identity element. So for an option bag which doesn't have a smallest unit, you should just get the identity for rounding, which is to return the value unchanged. That will solve a lot of problems here. I'm fine with retaining the string API variant for convenience. That should allow option bags which do not have a `smallestUnit` to be passed to `duration.round`, and `duration.round` can still do interesting things when it gets a `largestUnit`.

JGT: And so, just a quick response on why we chose to make this required. There are some platforms that have similar round methods where the unit is assumed, and oftentimes it's assumed to be a second. So we were concerned that people coming from those platforms to ECMAScript would call the round method expecting it to round to the nearest second, and then be dismayed, or in some cases not even discover, that that wasn't what it did. So that was our logic. Your point is certainly valid, I think, and it sounds like CM's point next in the queue is similar to that. So we're not vehemently opposed to allowing it to be a no-op, but that other-platform behavior was why we made the decision.

@@ -763,7 +783,7 @@ JHD: Neither is JavaScript.

MAH: Well, you know what I mean.

-JHD: package.json has a schema and “expected types”.
So it is typed in the same way that most JavaScript is typed: there's expected things, and even in that case, the overload is often chosen because it is ergonomic. I'm sure it's not universal. And sure, there are some people that, you know, some sections of the ecosystem have moved away from it, but like I don't think that it's accurate to say that the community has moved away from it. It's just not something I've seen. +JHD: package.json has a schema and “expected types”. So it is typed in the same way that most JavaScript is typed: there's expected things, and even in that case, the overload is often chosen because it is ergonomic. I'm sure it's not universal. And sure, there are some people that, you know, some sections of the ecosystem have moved away from it, but like I don't think that it's accurate to say that the community has moved away from it. It's just not something I've seen. SYG: Could you recap for me why we arrived here again? Like what was the actual issue with requiring the object bags? @@ -791,7 +811,7 @@ PFC: I'm not in the queue because I'm showing the slides, but just some context CM: I think there were some objections to strings. Just a general overloading, a parameter to sometimes be a string and sometimes be an object seems sketchy. I'm not opposed to them because as I said, there are cases where it happens. I think if we go that way, it should be used sparingly. And here with what I've seen, I am not sure it really cuts for the use cases, but, I think others expressed the same thing that for round(), for example, always requiring a bag. -JGT: Maybe I'd ask the question, would anybody, you know — because I have heard a lot of — it seems that there's a fairly enthusiastic thumbs up and fairly less enthusiastic, maybe even mild thumbs down on that one. It does seem to be kind of a personal preference thing depending on maybe what people are used to and what they're used to seeing, but is this something where there are objections? 
If we decide to put in the string alternative would there be a blocking objection to do so? Because we do feel like it actually improves the ergonomics, the API we played around with. +JGT: Maybe I'd ask the question, would anybody, you know — because I have heard a lot of — it seems that there's a fairly enthusiastic thumbs up and fairly less enthusiastic, maybe even mild thumbs down on that one. It does seem to be kind of a personal preference thing depending on maybe what people are used to and what they're used to seeing, but is this something where there are objections? If we decide to put in the string alternative would there be a blocking objection to do so? Because we do feel like it actually improves the ergonomics, the API we played around with. SYG: For which? I already know. I'm around her for the other. (??) The other ones seem problematic @@ -811,7 +831,7 @@ JGT: So round() and total() are the two methods we're looking at, and then of al SYG(??): So, let me ask another question before we go, I guess given that the Temporal champions are excited about the ergonomics improvement here, regardless of how you arrived because of the disagreement about API shape of options bags, regardless of that. Would you think that the ergonomics independently motivate you adding string shortcuts anyway? -JGT: We think so. I mean we think it helps the other issue and it's just good on its own merits. Okay. All right, we're running low on time. What's the normal process around knowing whether we made a decision? +JGT: We think so. I mean we think it helps the other issue and it's just good on its own merits. Okay. All right, we're running low on time. What's the normal process around knowing whether we made a decision? WH: You could ask for consensus about having a string shortcuts and having it always means the smallest element. @@ -822,11 +842,14 @@ JGT: So then there are two other issues here. 
One is whether we should support n JHD: I think it's always fine to say let's not set a precedent. Let's just argue about it again if it comes up, like if SYG is not comfortable setting a precedent it should not be taken as one. BT: Okay, I think we'll probably want to return to this item and overflow time if we can get time since the time box elapsed. That would be great. Well, I moved this to potential overflow and we can hopefully come back to it at the end of tomorrow. + ### Conclusion/Resolution -* Consensus for adding a string overload to the `round` and `total` methods, which will mean the smallest unit -* not setting a precedent about whether options bags are optional + +- Consensus for adding a string overload to the `round` and `total` methods, which will mean the smallest unit +- not setting a precedent about whether options bags are optional ## RegExp extended mode and comments + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-regexp-x-mode) @@ -859,11 +882,11 @@ a+b/x NRO: Okay. Thank you. -RBN: Yeah, so this is why -- and this was something that was brought up by Waldemar at the last meeting as well regarding this -- which is why I added the restriction between the last meeting and now to block forward slash characters without escaping. +RBN: Yeah, so this is why -- and this was something that was brought up by Waldemar at the last meeting as well regarding this -- which is why I added the restriction between the last meeting and now to block forward slash characters without escaping. -WH: I wanted to echo the point that was just raised about various characters inside comments causing trouble: square brackets, slashes, backslashes, for example `/(?#[/)/`. It would be undesirable for a variety of reasons, which MM can explain some of, to have to modify the lexer grammar — the permissive grammar which finds the end of a regular expression. 
Also, having the lexer’s permissive grammar diverge from the grammar used to parse the contents of regular expressions will also cause serious problems. +WH: I wanted to echo the point that was just raised about various characters inside comments causing trouble: square brackets, slashes, backslashes, for example `/(?#[/)/`. It would be undesirable for a variety of reasons, which MM can explain some of, to have to modify the lexer grammar — the permissive grammar which finds the end of a regular expression. Also, having the lexer’s permissive grammar diverge from the grammar used to parse the contents of regular expressions will also cause serious problems. -RBN: So, Yeah, yeah, as I was saying if it becomes necessary, I think it's completely reasonable to require escaping anything that we would find to be invalid inside comments. +RBN: So, Yeah, yeah, as I was saying if it becomes necessary, I think it's completely reasonable to require escaping anything that we would find to be invalid inside comments. WH: Inside comments too? Okay. @@ -874,6 +897,7 @@ BT: Thank you, Philip. With that, the queue is empty. RBN: In that case, I am seeking Stage 1. Are there any objections? BT: Put any objections that you have into the queue. Give it another minute. Few seconds anyway. [no objections] Right. I think that's Stage 1. Thank you very much. And JHD just put an explicit +1 on the queue. 
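For context on WH's lexer concern above: today's regex-literal lexer already has one place where its permissive grammar must know about regex structure, namely that a `/` inside a character class does not terminate the literal. A `(?#...)` comment containing an unescaped `/` would add a second such place, which is what the escaping restriction avoids. A runnable illustration of the existing case:

```javascript
// The lexer's permissive grammar treats '/' inside a character class as
// part of the regex literal, so this lexes as one literal, not division:
const re = /[/]/;
console.log(re.test("a/b")); // true: the class matches '/'

// A hypothetical (?#...) comment containing a bare '/' would pose the
// same "where does the literal end?" question, e.g. /(?#[/)/ from the
// discussion above, hence the requirement to escape such characters.
```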
+ ### Conclusion/Resolution -* Stage 1 +- Stage 1 diff --git a/meetings/2021-10/oct-28.md b/meetings/2021-10/oct-28.md index 41d31bff..507a06c5 100644 --- a/meetings/2021-10/oct-28.md +++ b/meetings/2021-10/oct-28.md @@ -1,7 +1,8 @@ # 28 October, 2021 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Sergey Rubanov | SRV | Invited Expert | @@ -19,27 +20,28 @@ | Jordan Harband | JHD | Coinbase | ## Records & Tuples update + Presenter: Nicolò Ribaudo (NRO) - [proposal](https://github.com/tc39/proposal-record-tuple/) - [slides](https://drive.google.com/file/d/1cExHCNFxl8x5tvF63Vt9THnQICBJqlan/view) -NRO: Okay, So this Stage-2 status update is because we want to hear the Committee's opinion about some problems. Just a quick refresher about records and tuples: they are compound primitives, similar to objects and arrays, and they can only contain other primitives. So they're deeply immutable and also they are compared by value recursively. Records and Tuples have different behavior than the other objects because `Object.is` and `===` can be different. And this is when they deeply contain negatives zero: they use SameValueZero semantics, so also for NaN. +NRO: Okay, So this Stage-2 status update is because we want to hear the Committee's opinion about some problems. Just a quick refresher about records and tuples: they are compound primitives, similar to objects and arrays, and they can only contain other primitives. So they're deeply immutable and also they are compared by value recursively. Records and Tuples have different behavior than the other objects because `Object.is` and `===` can be different. And this is when they deeply contain negatives zero: they use SameValueZero semantics, so also for NaN. NRO: The problem we have is that they do not integrate well with the rest of the language, because JavaScript is full of objects. 
So we need a way to be able to somehow associate objects to records and tuples. We introduced something that from now on I'm going to call ObjectPlaceholder, just for the sake of this presentation. ObjectPlaceholders are something that represents an object; they are an opaque value that represents the object. They are immutable, and can be put in records and tuples. They're primitives and two object placeholders are equal if they represent the same object. And as I was saying, this is just a placeholder for the final name. We do not have a good name yet for this: the current proposal uses ‘box’, but this has two problems. One is that it clashes with existing boxed primitives concept, when you wrap up a primitive in an object, and we initially didn't consider this. But we found that this is a very big problem every time we talk about boxes. Second, it gives the impression that it should be a generic container. Or maybe it should be a generic container, so that it can also contain primitives and it can be generically used outside of Records and tuples structures? We've heard a few people asking for this, for example to replace a Maybe monad or to mark a value as trusted. And [if you have any idea for a good name, we have an issue where we are discussing this](https://github.com/tc39/proposal-record-tuple/issues/259). And we would love to hear your opinion. -NRO: So with this object placeholder, there are a few security considerations that we discovered while talking with the SES group. One is that ObjectPlaceholders should not provide direct access to objects across shadowrealm boundaries,because ShadowRealms are meant to isolate object graphs and you can only pass primitives across the boundary. ObjectPlaceholders would be primitive, but they need to not be able to expose the objects that they contain when they come from a different Realm. 
And in order to not break existing iframe-based membranes, used for some security purposes, they should also not give access to objects across iframes (realms created using platform-specific functions). And also, in order to be able to use them in compartments (which is a different isolation level which lives inside the same realm), the function to get the contents of an object should not live on the prototype. -A possible solution that we came up with is to throw when unboxed if they reference an object created in a different realm. So that for example, in this case it throws because we are trying to get the content of the placeholder created in the different Realm. And this would also be the same when using iframe-based realms. However, this has a problem, a drawback and the drawback: this would be the only API, or at least the only API that we found, that doesn't work cross realm. We also tried looking for API outside of ECMAScript, for example, in the HTML spec, but we couldn't really find them. And well, it's not controversial that they should not work across Shadow realms because they would break the purpose for shadow Realms. However, this can be an unwanted limitation when using iframes. So, when using iframes users could work around this limitation by installing inside the iframe the ObjectPlaceholder from the parent realm, so that they all share the same ObjectPlaceholder constructor. And from both from the inside and from the outside points of view, they're always like placeholders created and dereferenced in the parent realm, and so they always work. +NRO: So with this object placeholder, there are a few security considerations that we discovered while talking with the SES group. One is that ObjectPlaceholders should not provide direct access to objects across shadowrealm boundaries,because ShadowRealms are meant to isolate object graphs and you can only pass primitives across the boundary. 
ObjectPlaceholders would be primitive, but they need to not be able to expose the objects that they contain when they come from a different Realm. And in order to not break existing iframe-based membranes, used for some security purposes, they should also not give access to objects across iframes (realms created using platform-specific functions). And also, in order to be able to use them in compartments (which is a different isolation level which lives inside the same realm), the function to get the contents of an object should not live on the prototype. +A possible solution that we came up with is to throw when unboxed if they reference an object created in a different realm. So that for example, in this case it throws because we are trying to get the content of the placeholder created in the different Realm. And this would also be the same when using iframe-based realms. However, this has a problem, a drawback and the drawback: this would be the only API, or at least the only API that we found, that doesn't work cross realm. We also tried looking for API outside of ECMAScript, for example, in the HTML spec, but we couldn't really find them. And well, it's not controversial that they should not work across Shadow realms because they would break the purpose for shadow Realms. However, this can be an unwanted limitation when using iframes. So, when using iframes users could work around this limitation by installing inside the iframe the ObjectPlaceholder from the parent realm, so that they all share the same ObjectPlaceholder constructor. And from both from the inside and from the outside points of view, they're always like placeholders created and dereferenced in the parent realm, and so they always work. -NRO: Also other than this manual workaround there, there are two possible alternatives. 
Alternatives one is to allow objects, placeholders to be dereferenced across Realms but throw, when passing an object placeholder across a shadow realm boundary, and this makes it harder to create membranes around ShadowRealm boundaries because you have to first clone on one side. It has to traverse records twice, replacing object placeholders with something and storing those associations; then pass everything to the other side of the membrane. And then reconstruct the whole thing on the new side. However, this introduces a security vulnerability in existing iframe-based membranes because currently they assume that primitives are safe to be passed and they do not give access to objects. A second alternative is to make the error host-defined behavior, and I know that we try to minimize the things that are supposed host defined, But we could do something like this, so HTML can use a new content security policy to allow or disallow dereferencing across the iframes, so that we can keep existing iframe-based membrane secure. +NRO: Also other than this manual workaround there, there are two possible alternatives. Alternatives one is to allow objects, placeholders to be dereferenced across Realms but throw, when passing an object placeholder across a shadow realm boundary, and this makes it harder to create membranes around ShadowRealm boundaries because you have to first clone on one side. It has to traverse records twice, replacing object placeholders with something and storing those associations; then pass everything to the other side of the membrane. And then reconstruct the whole thing on the new side. However, this introduces a security vulnerability in existing iframe-based membranes because currently they assume that primitives are safe to be passed and they do not give access to objects. 
A second alternative is to make the error host-defined behavior, and I know that we try to minimize the things that are supposed host defined, But we could do something like this, so HTML can use a new content security policy to allow or disallow dereferencing across the iframes, so that we can keep existing iframe-based membrane secure. NRO: Even if host defined behavior might not be considered a good thing, we must remember that iframes can already be created only using custom host functions. So it wouldn't affect the possibility. -NRO: And so, this is the design space we are exploring right now, and we would love to hear your opinions about this ObjectPlaceholder and how to make it interact with realms. +NRO: And so, this is the design space we are exploring right now, and we would love to hear your opinions about this ObjectPlaceholder and how to make it interact with realms. WH: This seems really complicated for a basic feature of the language. I just struggle to figure how somebody would teach newcomers to the language abouts records and tuples, in particular the object storage restrictions on them. So, I'm unsure whether we should be doing this at all. I'd rather keep records and tuples simple and if we get to this kind of complexity I’m not sure it's worth it. -NRO: So I don't think that newcomers would realistically see this complexity around iframes mostly because like, I don't think newcomers interact with different realms. +NRO: So I don't think that newcomers would realistically see this complexity around iframes mostly because like, I don't think newcomers interact with different realms. WH: Newcomers will want to use records and tuples because they’re a nice new feature for value types. They will also want to store objects in them. So far this doesn't seem like the right answer for storing objects inside records and tuples. @@ -51,13 +53,13 @@ RRD: So, can I ask another clarifying question? 
So I mean no, it's more of a sta WH: You could just stick objects inside records and tuples. -NRO: sorry, could you repeat your last sentence +NRO: sorry, could you repeat your last sentence WH: Without realms you could just stick the objects inside records and tuples. -NRO: Well, no. Our goal with this proposal is to give the stability that you do not accidentally access mutable parts of your tree and that the equality semantics can be easily understood. So like before introducing this object placeholder concept, the proposal just did not support containing objects. +NRO: Well, no. Our goal with this proposal is to give the stability that you do not accidentally access mutable parts of your tree and that the equality semantics can be easily understood. So like before introducing this object placeholder concept, the proposal just did not support containing objects. -WH: I'm really unhappy with the complexity of supporting objects inside records and tuples. It may be better not to do it. +WH: I'm really unhappy with the complexity of supporting objects inside records and tuples. It may be better not to do it. RRD: So we had multiple discussions on this multiple dates. We, this is something that has been requested by the community around us, and also that we have been discussing in length in SES meetings. I think, at this point, we do not want to get into the debate of whether we want object placeholders to exist or not. We can Work towards having them as a possibility and the main question that we're trying to field right now is interaction with realms and membranes and mostly the conference. So if there is an objection against object placeholder, I'm wondering exactly what it is except for the complexity one because this has been discussed. @@ -69,19 +71,19 @@ RPR: Okay. Thank you. KG? KG: Yeah, sorry. Can you just remind me what the purpose of object placeholders is? I might have known at some point but no longer. 
-NRO: So, what is the purpose of putting objects in at all or just, why do we have to wrap up objects? +NRO: So, what is the purpose of putting objects in at all or just, why do we have to wrap up objects? KG: The question is, given that you want the ability to put objects in records. What additional benefit is there from wrapping them? NRO: Okay, so we would like to prevent people from accidentally going into a mutable part of their records and tuples structure so that we can give stronger immutability guarantees by default. And this also makes equality easier to explain because you can just say that the equality is recursive and I mean in general, it feels Like, it makes it safer to work with those immutable structures because you can trust their immutability more. -KG: It does not seem to me like it lets you trust their immutability more, and the other benefits that you listed seem not commensurate with the level of complexity they are otherwise entailing. +KG: It does not seem to me like it lets you trust their immutability more, and the other benefits that you listed seem not commensurate with the level of complexity they are otherwise entailing. RRD: Okay, so I guess we can. Okay. I can try to summarize this again. So essentially we went through multiple possibilities here that are not discussed in this slide. So that's why I don't want to take too much time discussing this because this is not the main question at hand that we have the main to do. The three main possibilities that we explored so far is either a record into goes, could contain. and as NRO just explained, this is kind of a risk for people using the data structure to accidentally mutate something. ObjectPlaceholders make it explicit whenever we want to look up an object. We think that's a good feature and by discussing and with the community interested or on the record and tuples, there is now kind of an agreement on that. 
But otherwise we would be to have Symbols as Weakmap keys because we could, instead of having object placeholders, we could use symbols and look them up in a WeakMap, and the other alternative is object placeholders, which makes it possible for us to still keep records and tuples compound Primitives that have that do not have identity, but that they can still contain the object through the ObjectPlaceholder, and therefore, giving them in a way, an identity, but this is only controlled through explicitly having an object placeholder in there. So that's why we had this design. We've been told by the community essentially that if that doesn't exist, someone would come up with something similar using symbols or numbers incrementing, or some things like this, it would get into userland most likely. So we're just trying to address this at the feature level instead of having multiple competing implementations of this system. JRL: So what I might use cases for, ignoring the changes. Now, if we had a box that allowed us to wrap user data, essentially we could have a graph like imagine a div and inside that div. You have a placeholder that points to some userdata to a parameter to a function or something. So, you have a component that returns a div and inside that div, You have a more data allowing you to box the Parameter allows you to mark explicit exit out of the div that you wrote in source code into a mutable or immutable area that was represented as the user's data to a parameter. It could be like a string, it could be more records, or it could be anything else. The ability to Mark the exit point out of real source code and into Data allows you to attach security guarantees on to what you actually render into a DOM tree. Allowing you to skip things like us, the XSS rendering directly into inner HTML and opening up an xss vulnerability where you accidentally treated user data as something you actually want to render into the inner HTML. 
That was my original use case that I wanted for it. It's very different now that it's just an object placeholder instead of a box. but I think it's still a useful basic primitive. The representing the data. -RRD: So, I wanted to add on that. It really should be useful. This is effectively not quite the object placeholder is for the and the moment it was designed for that. But that being said I agree that would be useful. There's something that we could be considering to experiment with in userland and probably another proposal later. but again, this is not object placeholders, which is, non mutable in itself. Once you put an object in it, You cannot change your reference to that object. So before, if you think about this user data use case, you wouldn't be able to swap out what's inside. That's just what I wanted to clarify here. +RRD: So, I wanted to add on that. It really should be useful. This is effectively not quite the object placeholder is for the and the moment it was designed for that. But that being said I agree that would be useful. There's something that we could be considering to experiment with in userland and probably another proposal later. but again, this is not object placeholders, which is, non mutable in itself. Once you put an object in it, You cannot change your reference to that object. So before, if you think about this user data use case, you wouldn't be able to swap out what's inside. That's just what I wanted to clarify here. MAH: Yeah, just to follow up quickly on being able to put any value Inside the Box. We recently showed in discussions that it's still possible to do that with an ObjectWrapper in a userland library that holds the value inside it because it can only contain an object so that kind of use case is still supported by this restriction. @@ -89,7 +91,7 @@ JHX: Yeah, my question is, it seems, if One Direction is the box or maybe we now NRO: Undefined would be handled as the other Primitives are handled. 
There is a thinking of allowing something like ObjectPlaceholder created with undefined would return undefined, rather than objectPlaceholder. So that you can select as a leader to use it with external changes. -JHX: Oh, yeah, I think I could follow. This is probably in the group policy. +JHX: Oh, yeah, I think I could follow. This is probably in the group policy. YSV: I wanted to talk about the alternatives if we do choose to go this way, so I believe it was alternative 1. Alternative one is the one that introduces a potential security issue with older iframes. We are not too comfortable with that. But alternative 2 and alternative 3 are fine for us. So I just wanted to mention that otherwise, a lot of our concerns that we raised previously for this proposal have been addressed. @@ -107,13 +109,13 @@ ACE: There is another thing with symbols as weakmap keys would allow user name t MAH: Really quick. I would actually say that object placeholder solves the leaking problems of symbols as WeakMap keys. You can create a unique object and put it in a box, and then use that as a WeakMap key. And which is basically an all intents and purposes like this equivalent to a user created symbols without the leaking problem of well-known or registered symbols as for. I just wanted to ask, like, an issue you mentioned is that this is complicated. I'd like to understand why you think object placeholders are complicated or any more complicated than something as simple as that with my keys. -SYG: Symbols as weakmap keys are less complicated. in this sense that it leverages existing concepts. This is more complicated in the sense that it seeks to introduce a Whole New Concept. That you now have to understand to really understand records and tuples. I think, was the point several other delegates have brought up before. +SYG: Symbols as weakmap keys are less complicated. in this sense that it leverages existing concepts. 
This is more complicated in the sense that it seeks to introduce a whole new concept that you now have to understand to really understand records and tuples, which I think was the point several other delegates have brought up before.

RRD: you will have that issue as soon as you do equality operations on records and tuples, do so, I guess I think that if that's the complexity that they’re talking about, it's already baked into the proposal. I have trouble understanding this whole complexity point, to be honest.

-NRO: Yeah, it is a complexity issue. A question of being able to know that you can actually put immutable or linking mutable data from immutable structure link, immutables data from immutable structure.
+NRO: Yeah, it is a complexity issue. A question of being able to know whether you are linking mutable data from an immutable structure.

-SYG: I won't speak for other delegates. That's part of the complication for me that the userland thing would sidestep. You can argue that in userland, the user of the records and tuples, if they want to have exit points, have to deal with that fundamental complexity. And for that, I agree, but is it the case that every record and tuple user needs mutable exit points?
+SYG: I won't speak for other delegates. That's part of the complication for me that the userland thing would sidestep. You can argue that in userland, users of records and tuples who want exit points have to deal with that fundamental complexity, and with that I agree. But is it the case that every record and tuple user needs mutable exit points?

RPR: We are at the end of the time box. We do have extra time on the schedule. So you could propose an extension and we can see if the Committee wants.
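For readers following along, the placeholder semantics discussed above can be roughly approximated in userland. This is a hypothetical sketch only: the actual proposal would make placeholders primitives with realm checks, while this wrapper just demonstrates the identity and explicit-deref behavior; the names `placeholder` and `deref` are invented for the example.

```javascript
// Userland approximation of the ObjectPlaceholder idea: an immutable,
// identity-preserving wrapper that makes access to the object explicit.
const cache = new WeakMap();

function placeholder(obj) {
  if (Object(obj) !== obj) throw new TypeError("expected an object");
  let p = cache.get(obj);
  if (p === undefined) {
    // Frozen wrapper: the "exit point" from immutable data to the object
    p = Object.freeze({ deref: () => obj });
    cache.set(obj, p);
  }
  return p; // same object always yields the same placeholder
}

const o = { mutable: true };
console.log(placeholder(o) === placeholder(o)); // true
console.log(placeholder(o).deref() === o);      // true
```

The explicit `deref` call is what makes the mutable exit point visible in code, which is the ergonomic argument the champions give for placeholders over directly storing objects.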
@@ -137,7 +139,7 @@ NRO: alternative 3 I think is what we were currently proposing, which is to like YSV: I can quickly jump in and say that when we looked at this alternative one, sorry alternative two because it's just throws it has relatively simple behavior that can be expanded upon later into alternative 3. If necessary. -WH: What is alternative 3? I don't see it in the slide show. +WH: What is alternative 3? I don't see it in the slide show. NRO: Yeah, I think let's call these alternatives to these alternative 1, and the one above drawing, referencing from the realms is alternative 0, just to make sure that we understand what you're talking about. Because I did not have an alternative 3, but at this point we have this solution and two alternatives. @@ -152,10 +154,13 @@ WH: What I would like to see is an example of how you would teach users a simple NRO: We will check that the current explainer that we have is up to date. So we can, we can then share it. NRO: OK, Yes, we weren't asking for concessions on anything. So that's okay. + ### Conclusion/Resolution -* + +- ## RegExp `\R` Escape for Stage 1 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-RegExp-r-escape) @@ -175,13 +180,13 @@ RBN: I don't believe that's true. That's the purpose of the mode has expanded si WH: I believe we rejected that option the last time we discussed it. -RBN: If that's the case, then that's fine. My primary goal is to align with whatever is supported within `v` mode, and the feedback that I received early on after the last meeting was that the plan was to use mode to support CRLF as a single character. If that's changed I'll align with whatever the direction is. +RBN: If that's the case, then that's fine. My primary goal is to align with whatever is supported within `v` mode, and the feedback that I received early on after the last meeting was that the plan was to use mode to support CRLF as a single character. 
If that's changed I'll align with whatever the direction is. WH: I have not heard anything about trying to represent a carriage return line feed sequence as though it were a single unicode character. That would be really weird. -RBN: So I can find there's an issue, this repo something that came up elsewhere. can probably then post the link to the Matrix for the issue from the larger RegExp features proposal from last meeting that I plan to move over to the new repo for this that specifically calls out the interest in, having it not match between crlf and match Behavior. I'm perfectly fine with not matching, as long as the goal is consistency with whatever the flags are. So again, I'll back a slide here. The goal is that we are essentially treating things consistently when in Unicode or in `u` or `v` mode so that we’re aligning with whatever the anchor characters perform. +RBN: So I can find there's an issue, this repo something that came up elsewhere. can probably then post the link to the Matrix for the issue from the larger RegExp features proposal from last meeting that I plan to move over to the new repo for this that specifically calls out the interest in, having it not match between crlf and match Behavior. I'm perfectly fine with not matching, as long as the goal is consistency with whatever the flags are. So again, I'll back a slide here. The goal is that we are essentially treating things consistently when in Unicode or in `u` or `v` mode so that we’re aligning with whatever the anchor characters perform. -RGN: Yeah, just to clarify, the Unicode sets proposal was hoping to get a CRLF handling / Unicode in multi-line mode, but that is currently out of the proposal. +RGN: Yeah, just to clarify, the Unicode sets proposal was hoping to get a CRLF handling / Unicode in multi-line mode, but that is currently out of the proposal. MLS: Before I get to my issue, I agree with WH. That it seems weird to have you and BMO treat a `^` line feed [?] as a predicate [?]. 
I just want to clarify. My question is outside of `u` or `v` mode, `\R` is syntax here. Is that correct? @@ -202,9 +207,13 @@ MLS: I'm fine with but agree with WH. WH: The proposed `v` mode semantics are really bizarre, matching or not matching a line feed depending on what's in the string prior to where you started matching. We don't have anything else like that. RPR: I'm only hearing support. So, if there are no objections, then congratulations. You have Stage 1. Thank you very much. + ### Conclusion/Resolution -* Stage 1 + +- Stage 1 + ## RegExp Buffer Boundaries (\A, \z, \Z) for Stage 1 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-RegExp-buffer-boundaries) @@ -218,27 +227,27 @@ RBN: Some other existing, some other examples here, show mixing both using buffe RBN: And the final example shows the trailing buffer boundary would match an optional newline then `/r` sequence. So it matches any Unicode line terminator following the end of the input. And this again was added after the agenda cut off, but I am seeking Stage 1. So there's currently no one that I see on the queue. I'll give it a moment for anyone if they have questions or would like to add any commentary. And then I can ask for Stage 1. -JRL: Yeah, the `\z` that allows any newlines, but still matches if it's the end of the string - is that actually regular? The way I imagined this is implemented in my head is that a forward look ahead for an arbitrary number of newline characters afterwards, which can't be implemented as regular, which makes me think it's not a good fit for adding to a regular Expressions. +JRL: Yeah, the `\z` that allows any newlines, but still matches if it's the end of the string - is that actually regular? 
The way I imagined this being implemented is as a lookahead for an arbitrary number of newline characters afterwards, which can't be implemented as regular, which makes me think it's not a good fit for adding to regular expressions.

RBN: It's not an arbitrary number. It is a single newline at the end of the input. So it checks the current position. And if the current position is a newline, it looks at the following position and checks to see if that's the end of the buffer.

JRL: Can it just be implemented as a union then? Like a dollar and then a newline and then a `\Z`.

-RBN: This isn't looking for a newline or the end of the buffer. This is looking for the end of the buffer. That may have an optional newline. So it's not a union. If it was a union it would be a union of `\z` or `\R\z`. Yes, because it's always looking for the end of the buffer. buffer.
+RBN: This isn't looking for a newline or the end of the buffer. This is looking for the end of the buffer, which may have an optional newline. So it's not a union. If it were a union it would be a union of `\z` or `\R\z`. Yes, because it's always looking for the end of the buffer.

JRL: Yeah, that's what I mean. So, I'm just curious why we need the special case that has a slightly different meaning. It allows a newline if we could implement it. If we just had a `\Z`.

RBN: The primary case for this is specifically for the `\Z` is consistency. Many of the other languages that support these buffer boundaries have this capability for matching the trailing line terminator, And it's sometimes the case or fairly often the case, depending on codebase really and your lint rules as to whether or not line terminator is required at the end of file. file. So it can often be the case where you're looking for something, that's the end of the end of the buffer. In a regular session pattern, but you're having to also check to see if the trailing newline.
I know that a lot of engines that use sourcemaps for tooling look for sourcemap comments at the end of the buffer. There are a number of different use cases that would leverage the ability to check for this, and having a convenient syntax would be valuable, as opposed to having to remember that I need to write out something like the zero-width assertion that I provided as the equivalent here. Because again, part of the goal for `\R` originally, as mentioned, was to provide a convenience mechanism for something that is easy to get wrong, especially when working with Unicode.

-JRL: Okay, then I have a second point, but WH actually is going to talk about it so we can just go to WH.
+JRL: Okay, then I have a second point, but WH actually is going to talk about it so we can just go to WH.

WH: It's a little jarring that in order to get the simplest semantics you need to use `\A` and `\z`. I assume that's because `\a` is taken already?

-RBN: I believe that's possibly part of the case. It's also the end of the buffer with trailing lime. Terminator, is a fairly common case when parsing files from the file system. you'll tend to see a as of, at least in some of the examples, in references that I've seen that use this, where they'll use `\A `and `\Z` in many cases when parsing files.
+RBN: I believe that's possibly part of the case. It's also that the end of the buffer with a trailing line terminator is a fairly common case when parsing files from the file system. You'll tend to see, at least in some of the examples and references that I've seen that use this, that they'll use `\A` and `\Z` in many cases when parsing files.

-WH: I think that having the same characters as Perl and the other regular expression engine trumps any consideration about inconsistency of upper and lowercase `\A` and `\z`.
+WH: I think that having the same characters as Perl and the other regular expression engines trumps any consideration about inconsistency of upper and lowercase `\A` and `\z`.

-RBN: Yes. There's the only difference that I found is – and I would have to look at the feature site that I put together that does a comparison of RegExp features – that I think there's one engine where the `\Z` matches any number of trailing line terminators before is e [?], which is not the common case and is the specifically that the general case it was being pointed out as being not truly a regular grammar, but predominant use case is Is looking for a single line terminator.
+RBN: Yes. The only difference that I found (and I would have to look at the feature site that I put together that does a comparison of RegExp features) is that I think there's one engine where `\Z` matches any number of trailing line terminators [?], which is not the common case, and that general case is specifically what was being pointed out as not truly a regular grammar; the predominant use case is looking for a single line terminator.

WH: Yeah, I'm happy with it as long as it matches exactly what Perl is doing.

@@ -250,16 +259,20 @@ RPR: Okay, so this was after the agenda of but that just gives people the right

WH: I support this.

-JRL: +1; have wanted this multiple :snare-drum: times.
+JRL: +1; have wanted this multiple :snare-drum: times.

-RPR: Okay, I'm only hearing support, no objections.
+RPR: Okay, I'm only hearing support, no objections.

RBN: Thank you very much.

RPR: You have Stage 1.
+
### Conclusion/Resolution
-* Stage 1
+
+- Stage 1
+
## RegExp atomic operations
+
Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/rbuckton/proposal-RegExp-atomic-operators)

@@ -269,11 +282,11 @@ RBN: This one I imagine will be more controversial. The final proposal I'm prese

WH: It is possible to write regular expressions with exponential runtime.
This is not one of them. This is linear.

-RBN: I've tested this with possessive quantifiers and the growth might be linear, but with a possessive quantifier, at least the actual performance cost is or the actual runtime cost is almost imperceptible.
+RBN: I've tested this with possessive quantifiers, and the growth might be linear, but with a possessive quantifier, at least, the actual runtime cost is almost imperceptible.

WH: This is a linear match both in forward tracking and backtracking.

-RBN: Yes. What you're now, you're set in the same position, but you're retrying it for every possible failure case. So you're trying this a hundred thousand times in the case, where, you know that we know as part of writing the regular expression. That's why we're looking for the end of the buffer. So if we see anything, that's the end, the buffer we should stop trying. We shouldn't try this a hundred thousand times because we never see anything that is in a sequence. Characters that turn into [?] line feeds will never see the end of the buffer. If we see something that is not a carriage return line feed. So the degenerate case is anything that is a significant length of newlines, which is what results in the denial of service.
+RBN: Yes. You're set at the same position, but you're retrying it for every possible failure case. So you're trying this a hundred thousand times in a case where we know, as part of writing the regular expression, that we're looking for the end of the buffer. So if we see anything that's not the end of the buffer, we should stop trying. We shouldn't try this a hundred thousand times, because in a sequence of characters that aren't [?] line feeds we will never see the end of the buffer if we see something that is not a carriage return or line feed.
So the degenerate case is anything with a significant run of newlines, which is what results in the denial of service.

WH: It's still linear in either case.

@@ -287,7 +300,7 @@ MM: Good. Thank you.

RGN: I appreciate the simple example, but I wonder in this case if it is too simple. In particular, does current spec text require a long run time to process it or is this a question of implementation choice?

-RBN: I can't speak to the implementations within, say, V8 or SpiderMonkey and what they're using them for underlying support for the regular-expression engine. The specification doesn't say that this has to be long, but there is an issue with the way the specification text is written is that it's expecting backtracking to be essentially, in these cases, to be a repeated operation. There's no type of heuristic that's used to determine that. The match couldn't possibly be successful to avoid these types of degenerate cases And even if there could, there’s still corner cases where you could formulate a regular expression that would be able to break any of these systems, because the fact that grammar is the regular-expression grammar allows so much flexibility when it comes to optionals so, I'm not sure that optionals, alternatives, repeating – there's any solid answer to whether or not this is a suboptimal implementation. One of the goals with providing possessive quantifiers, is that it gives the developer or the provider of the regular expression control over matching behavior when they know that certain things shouldn't be possible and then can control backtracking behavior because of it. I think there's a clarifying question, but I think it's more of an answer from JRL that it's a suboptimal implementation, but not possible to support all features without using a suboptimal implementation.
+RBN: I can't speak to the implementations within, say, V8 or SpiderMonkey and what they're using for underlying support in the regular-expression engine.
The specification doesn't say that this has to take a long time, but there is an issue with the way the specification text is written, in that it expects backtracking to be, in these cases, a repeated operation. There's no type of heuristic used to determine that a match couldn't possibly be successful, which would avoid these types of degenerate cases. And even if there were, there are still corner cases where you could formulate a regular expression that would break any of these systems, because the regular-expression grammar allows so much flexibility when it comes to optionals, alternatives, and repetition that I'm not sure there's any solid answer to whether or not this is a suboptimal implementation. One of the goals with providing possessive quantifiers is that it gives the developer, or the provider of the regular expression, control over matching behavior when they know that certain things shouldn't be possible, and they can then control backtracking behavior because of it. I think there was a clarifying question, but I think it's more of an answer from JRL: it's a suboptimal implementation, but it's not possible to support all features without using a suboptimal implementation.

JRL: Yeah, specifically, certain features of regular expressions as we know them are not actually regular, and it's not possible to implement those features without using a backtracking implementation: lookbehind, lookahead, backreferences. And so everything essentially uses a backtracking implementation in JavaScript. It's possible, if you were to analyze the regular expression beforehand and guarantee that none of those features were used (because it's syntax), to switch to a linear implementation, but no one does it currently, and that's disappointing.

@@ -317,7 +330,7 @@ RBN: So it does cut a significant part of what happened. So that is one of the a

RPR: Seven minutes remaining.
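For readers following this exchange, JavaScript has no possessive quantifiers or atomic groups today, but the backtracking cut-off being discussed can be approximated with a well-known trick: perform the greedy match inside a lookahead (a lookahead that has succeeded is never re-entered on backtracking in ECMAScript) and then consume the captured text with a backreference. A minimal sketch; the patterns and inputs are illustrative, not taken from the proposal:

```javascript
// Backtracking version: on a long run of newlines that is NOT at the end of
// the string, `(?:\r\n|\n)+$` retries every shorter repetition count at each
// start position before failing (the ReDoS shape discussed above).
const backtracking = /(?:\r\n|\n)+$/;

// Emulated possessive version: the lookahead captures the maximal newline
// run, and `\1` then consumes exactly that text. Because a succeeded
// lookahead is atomic, the quantifier cannot retry shorter runs.
const possessive = /(?=((?:\r\n|\n)+))\1$/;

const ok = "header\n\n\n";            // newlines at the end: both match
const trap = "\n".repeat(1000) + "x"; // newlines NOT at the end: both fail

console.log(backtracking.test(ok));   // true
console.log(possessive.test(ok));     // true
console.log(backtracking.test(trap)); // false, after many wasted retries
console.log(possessive.test(trap));   // false, one greedy pass per position
```

A native possessive quantifier (`(?:\r\n|\n)++$` in the proposed syntax) would express the same intent directly, without the capture-group indirection.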
-RBN: I'd like to go through the rest of the slides and we can come back to this discussion if that's fine. So, again, the matching is similar to how greedy quantifiers match in that it will first attempt to match everything in a repeated list. If it fails to match, however, it does not perform any type of backtracking and the goal for this is again improved back performance when backtracking isn't necessary. This is something that does not conflict with existing syntax. So it does not require a special mode to use. This would currently be illegal syntax in a regular expression and is again used by a number of existing implementations. And here are some of the examples that I wanted to point out of greedy versus lazy. So a greedy quantifier will try each of these operations. First, probably take the most characters first before failing to find something and backtracking and then take the next set of characters and try. In the case of the lazy quantifier, it will try the least number of characters before it tries to match and then and then grow. Sorry, for the possessive quantifiers. We would see that first. tries all 4 "a"s and then fails and stops the match at that point. +RBN: I'd like to go through the rest of the slides and we can come back to this discussion if that's fine. So, again, the matching is similar to how greedy quantifiers match in that it will first attempt to match everything in a repeated list. If it fails to match, however, it does not perform any type of backtracking and the goal for this is again improved back performance when backtracking isn't necessary. This is something that does not conflict with existing syntax. So it does not require a special mode to use. This would currently be illegal syntax in a regular expression and is again used by a number of existing implementations. And here are some of the examples that I wanted to point out of greedy versus lazy. So a greedy quantifier will try each of these operations. 
First, it will probably take the most characters before failing to find something and backtracking, and then take the next set of characters and try. In the case of the lazy quantifier, it will try the least number of characters before it tries to match, and then grow. For the possessive quantifier, we would see that it first tries all 4 "a"s, then fails and stops the match at that point.

RBN: So the point I was making before about that CVE in the example is that the existing code was incorrect. If you change that to add a `+` to utilize this feature, the exact same pattern with a hundred thousand newlines followed by a non-terminator [?], not the end of the buffer, takes less than one millisecond on the same older-generation processor.

@@ -335,7 +348,7 @@ KG: Yes, so I appreciate the motivation for this proposal, but in line with my c

RBN: My counterpoint to that is that for this example specifically, without having a possessive quantifier, there is no solution within a regular expression that could avoid this performance cost.

-KG: I have the same comment as I had on conditional groups, which is that that doesn't seem so bad.
+KG: I have the same comment as I had on conditional groups, which is that that doesn't seem so bad.

RBN: As someone who has used a significant number of regular expressions in software engineering within the JavaScript platform, and knowing the number of people within the community that use regular expressions, the number of times that this has become a problem… These RegExp denials of service are one of the most common things you see in npm and GitHub audit reports for JavaScript projects. It does feel like it's a valuable pattern to implement. It is a niche case; it's not going to be something that is used all the time. It is something that, if you are familiar with it, you can be aware of when doing matching.
But again, if it's something that isn't in the language, then there are no alternatives that still let you use a regular expression. The only alternative is flattening out the regular expression into user code, which is sometimes much more complex to match the same behavior as with a regular expression. So, not having this means there's no current solution within regular expressions, and having it is a boon to those who do use regular expressions and use them heavily. It might be the case, yes, that it's something you add on when you realize that there is a performance issue with a regular expression, and that's not much different than if someone adds a question mark to the end of a quantifier because what they're getting is not what they expected, and so they want to try a lazy quantifier. But not having it just because it seems complicated, when there is no alternative, doesn't feel to me like a good reason not to have it, especially since the syntax here is relatively terse.

@@ -345,11 +358,11 @@ RBN: This feels not so complex to me.

RPR: So I think we don't really have time. We don't want to go in the queue. I don't know if those things in the queue are blocking. Could anyone say if they have a blocking item in the queue?

-WH: Mine is. I'm not opposed to this feature, but your description of the behavior of this feature and of how it behaves in this example contradict each other. So I do not understand what it is we are proposing.
I would be more comfortable if we could take some time to get a better explanation of what exactly is being proposed here, in particular about which backtracking does and does not happen.

-RPR: Okay, so we will leave the time box now, I think Ron and WH I'd ask to to work this out off-line. Is that okay? Aadd more information to the explainer to try to present at a future meeting.
+RPR: Okay, so we will leave the time box now. Ron and WH, I'd ask you to work this out offline. Is that okay? And add more information to the explainer to try to present at a future meeting.

-MLS: Okay, we have something else before the break.
+MLS: Okay, we have something else before the break.

RPR: We do. Yes, I really want the full time. And then MLS, I do note you have an item on the queue. Could the notetakers please capture Michael's comments from the queue?

@@ -358,9 +371,13 @@ RPR: We do. Yes, I really want the full time. And then MLS I do note you have an

JHX: Same feeling [as KG], but if you consider at large scale (10+ years), I hope people can finally get it.

MLS: This introduces a 3rd middle counting type. Seems like the semantics may be difficult to reason about.
+
### Conclusion/Resolution
-* Not Stage 1 (does not advance)
+
+- Not Stage 1 (does not advance)
+
## Evaluator Attributes
+
Presenter: Guy Bedford (GB)

- [proposal](https://github.com/lucacasonato/proposal-evaluator-attributes)

@@ -368,7 +385,7 @@ Presenter: Guy Bedford (GB)

GB: So this one was actually brought up by Luca Casonato, who is here as an observer today from Deno. And the proposal is for evaluator attributes, primarily justified for these WebAssembly importing scenarios.

-GB: So to try and give some background with an incredibly readable wall of text: when importing WebAssembly through the ES module integration that we currently only have as a specification for WebAssembly imports.
There are various conventions that current WebAssembly loading patterns need to translate into the WASM integration patterns. So for example, how to interpret the meanings of the specifiers that are imported in the WebAssembly binaries, like module imports? What the export interfaces are. So, the namespaces, and the actual runtime conventions around that to get a functional application, which are all things that are being developed and are also in flux and changing. And this is in contrast to JS where there's a very clear execution model that we specified, that is a single kind of graphic solution model that we have full kind of convention and community consensus around.
+GB: So to try and give some background with an incredibly readable wall of text: when importing WebAssembly through the ES module integration, which we currently only have as a specification for WebAssembly imports, there are various conventions that current WebAssembly loading patterns need to translate into the WASM integration patterns. So for example: how to interpret the meanings of the specifiers that are imported in the WebAssembly binaries, like module imports; what the export interfaces are, so, the namespaces; and the actual runtime conventions around that to get a functional application. These are all things that are being developed and are also in flux and changing. And this is in contrast to JS, where there's a very clear execution model that we specified, a single kind of graph execution model that we have full convention and community consensus around.

GB: To give some examples. Many WebAssembly modules have things like an `env` import that is kind of just an object that they hook all the standard library functions on. In WASI it's called WASI preview 1. So if you import a WASI module, it's going to have an import of this bare specifier that in the WebAssembly module will be seen as [?], and if you wanted to then get correct WASI execution.
You would need to individually map that for every single WASI module in your application, and you would want to. Then there's the fact that memories are shared between these modules; if you want a different instance, you would want to map each one to a different version of the standard library, so you could get a different memory being shared with each one. So there are these difficult… [unable to transcribe]. I don't know how far it is along the specification process, but there's the ability for what is happening right now in all this JS glue code to be brought down into WebAssembly, in a way that's going to be compatible without necessarily requiring a GC integration. And one of the things that this module linking specifies is basically, like you would in the current mechanism, instantiating WebAssembly as a module instance and passing the import object programmatically; it basically gives you the way to do that using WebAssembly module imports in the actual WebAssembly itself. So to use this pattern inside of WebAssembly to get you [?] on the module, it needs to be able to import the module as an uninstantiated module. So you have two types of imports: you've got a module import type and an instance import type.

@@ -402,7 +419,7 @@ MM: But what do you buy? What context was that? Can you please clarify?

GB: Any relative URLs are relative to the URL that the block was defined in.

-MM: I think the linkage context, the same static module record can be linked multiple times in different contexts. So, to start module record, really should be no more specific than the source code that was compiled into it as a separately compilable in a way that supports separate compilation. so, I think that I think they should be independent, because of the URL context and that way that I'm not sure about. I agree with you about the cycles, though. That's certainly a difference that we need to wrestle with. Yet, there's there's a lot of cross-cutting concerns.
+MM: I think, on the linkage context: the same static module record can be linked multiple times in different contexts. So a static module record really should be no more specific than the source code that was compiled into it, in a way that supports separate compilation. So I think they should be independent; the URL context is the part I'm not sure about. I agree with you about the cycles, though. That's certainly a difference that we need to wrestle with. Yet there's a lot of cross-cutting concerns.

GB: So, I think having those discussions is key to making sure these things work together well. I'll definitely do some more reading on compartments. Yep.

@@ -410,13 +427,13 @@ MAH: Yeah, just quickly. I believe that the relative URL part and how resolution

YSV: I wanted to raise an issue that I put on the repository, which is that when I read through this, I noticed that there are a couple of overlaps with the goals of deferred module evaluation, specifically splitting the loading of modules into two parts. In deferred module evaluation we do the fetch, compilation and linking eagerly, whereas this only does compilation eagerly, if I understood correctly. However, they are similar in that they both evaluate at a later step. So, I've been working on the side on this proposal a little bit, and one thing that came across is that, in fact, to really bridge the compatibility gap between userland libraries that implement modules and ES6 modules as they are implemented in browsers, we would be exposing the module loader system itself and allowing people to write loaders. This is tricky, and GB pointed this out on the issue, and I am very interested in seeing if we can find a common abstraction that would work for both cases. That could be this static record representation that then users could decide how to represent.
For example, the way that we do it in SpiderMonkey is we replace the placeholder namespace; what we could do with a custom loader is take a module record that's kept by the static loader and a module loaded by the custom loader, and replace the placeholder. So if there are common abstractions that we can share, because we do have a known pain point in JavaScript modules right now, I would be really interested in seeing how this proposal can evolve.

-YSV: Additionally, GB you made a great suggestion that maybe we could use evaluator attributes for deferred modules. Think that's also a direction that that proposal can go and if we don't want to give so much freedom.
+YSV: Additionally, GB, you made a great suggestion that maybe we could use evaluator attributes for deferred modules. I think that's also a direction that that proposal can go in if we don't want to give so much freedom.

GB: Yeah, there's certainly not a shortage of things to consider, and that is what makes this stuff difficult with these wide design spaces, but certainly there are a lot of crossovers in having these discussions. Seeing concepts like having a deferred module attribute come out of the discussions that we've been having has been really interesting, to see what could be there. Because of the fact that in this model you get these different representations of a module: you're getting a kind of higher-order representation of a module, and there could be different types of higher-order representations of a module that you could then link into a graph. One concern with the loader hook [?] model, as I mentioned in that issue, is you get down to that kind of fine-grained linkage that we see with the Node.js [?] VM source text module record, which is exposed, and right now there are some users who are using that quite extensively, and it's difficult for users to get the usage exactly right.
It requires a lot of understanding of module linking and concepts to be able to use those source text APIs and get cycles right. And things like that. So there is a balance to be found between the perfect abstraction and the most usable abstractions as well. DRR: Yeah, I think that, you know, I'm probably not the only one here that feels this way. I think I have a hard time understanding the use case fully. I see some syntax. I have some ideas of what I should do. I mean I can kind of piece things together based on some of the slides. It sounds like there's some sort of representation of a module that can be instantiated with Maybe some parameterization, right? I don't know if I interpreted that correctly, but it is. It is hard for me to understand, you know. The direction here. And, you know, I'm willing to give a little slack and say, well for Stage 1, like maybe others have a better understanding here, but, you know, it is one of those things where it's like, adding syntax to modules and that already feels like a very high cognitive overload place for people. -GB: Thanks for bringing that up. I completely agree. It should be crystal clear. I can try and go through this example. Again, if it would help. and So currently in the ES module integration when you import from WebAssembly you are executing the graph like any other module. You're executing the dependencies first, then you're executing the WebAssembly module. And then you're getting back the exports and in doing that, you're also resolving the imports of the WebAssembly module using the same host resolver, including very expressive higher resolution, relative resolution, and one of the issues with the WebAssembly integration for that. is that these conventions in WebAssembly binaries often simply don't match up with the conventions that we have in the JS world. 
And instead what you see in WebAssembly usage is the direct programmatic calling of these WebAssembly modules, where they base it your, which is basically just the the WebAssembly to instantiate code where you can pass it, the WebAssembly module and then the the second object is the map of the imported specifier names to the modules that they that implement. These effectively, the module name spaces for its imports. So you're parsing the imports to the WebAssembly module when you're instantiating it and executing it in the same step here and having fine-grained control over, setting the imports. Whoever assembles the module in a way that doesn't require to perfectly align with the host module system conventions. Where in JS, we assume it's all URLs, and relative URLs end up being useful for WASI, for example. Because WASI often represents the process model of having something? That is like a traditional binary. And when you run that WASI start, it's going to sort of run the binary from start to end and go, by saying you, it's the time that starts. as if the binary is done on its work already, whereas if you're importing a moment show, you don't necessarily want that execution during initialization. And so it's basically just defining the default export when importing as WASM module to be that compiled WebAssembly.module. so that you can do this this more kind of fine-grained instantiation using the existing WebAssembly APIs and paving the path that current WebAssembly execution already takes, which is the standard instantiate custom programmatically instantiate calls and that encapsulates the extra wrapping that the web is coming in. +GB: Thanks for bringing that up. I completely agree. It should be crystal clear. I can try and go through this example. Again, if it would help. and So currently in the ES module integration when you import from WebAssembly you are executing the graph like any other module. 
You're executing the dependencies first, then you're executing the WebAssembly module, and then you're getting back the exports. In doing that, you're also resolving the imports of the WebAssembly module using the same host resolver, with its resolution rules, including relative resolution, and one of the issues with the WebAssembly integration is that these conventions in WebAssembly binaries often simply don't match up with the conventions that we have in the JS world. Instead, what you see in WebAssembly usage is direct programmatic instantiation of these WebAssembly modules, which is basically just the WebAssembly instantiate call, where you can pass it the WebAssembly module, and then the second argument is the map of the imported specifier names to the modules that implement them; these are effectively the module namespaces for its imports. So you're passing the imports to the WebAssembly module when you're instantiating it and executing it in the same step, and you have fine-grained control over setting the imports wherever the module is assembled, in a way that doesn't require it to perfectly align with the host module system conventions, where in JS we assume it's all URLs. And relative URLs end up being useful for WASI, for example, because WASI often represents the process model of having something that is like a traditional binary: when that WASI start runs, it's going to sort of run the binary from start to end, as if the binary has done its work already, whereas if you're importing a module, you don't necessarily want that execution during initialization. And so it's basically just defining the default export, when importing as a WASM module, to be that compiled WebAssembly.Module,
so that you can do this more fine-grained instantiation using the existing WebAssembly APIs, paving the path that current WebAssembly execution already takes, which is the standard programmatic instantiate calls, and that encapsulates the extra wrapping that the WebAssembly is coming in with.

DRR: Okay. That gives me some context. Okay. Thank you very much.

@@ -424,7 +441,7 @@ SYG: The last time this came around, you know, the reason we have assertions at

GB: Yeah, and possibly BFS, I believe I did discuss it briefly. I'm not sure. I'm sorry. I don't want to assume, but yeah, the fact is that it's not altering the underlying execution semantics of the resource that is being targeted. It's not altering the way that the execution model runs. It's just either reflecting that execution model at a higher level or altering the way that it's represented through the namespace exports, possibly. But that would be worth checking.

-SYG: Yeah, it's satisfactory to me, but I really didn't have too much of a concern anyway, but I just realized maybe those folks aren't actually in the room due to timezones. So I would support this for Stage 1, but, in the interest of not putting extra work on you, if there's just categorical objection, I would like that resolved. Certainly before Stage 2, but hopefully before the end of the meeting if those folks show up.
+SYG: Yeah, it's satisfactory to me, but I really didn't have too much of a concern anyway, and I just realized maybe those folks aren't actually in the room due to timezones. So I would support this for Stage 1, but, in the interest of not putting extra work on you, if there's a categorical objection, I would like that resolved. Certainly before Stage 2, but hopefully before the end of the meeting if those folks show up.

MM: SYG, if I understand your question, I'm one of those people that did object and would object to something that changes the interpretation.
That's why I don't like this framing of the API, but I think that the actual goal of the API is that the interpretation isn't changing. What's different is that you're taking the interpreted artifact and catching it at an earlier stage: the stage before it gets linked and initialized. As opposed to saying the same text could be seen as source code in one language or another language. That would be a change of interpretation, which would be a security nightmare. That's not what's going on here, and I hope the proposal's framing changes so it doesn't seem like that's what's going on here. @@ -432,15 +449,15 @@ SYG: I see. Thank you, MM. I was mentally framing it as…This comes back to my MM: Yes. -SYG: I'm okay with either if they solve the use case at hand. But yes, thanks for your feedback. I'm glad that this kind of solves it for you. +SYG: I'm okay with either if they solve the use case at hand. But yes, thanks for your feedback. I'm glad that this kind of solves it for you. RPR: Yeah, three minutes left and only JRL on the queue. -JRL: So, I'm concerned about how you teach this to users who are trying to use WASM. +JRL: So, I'm concerned about how you teach this to users who are trying to use WASM.
I remember from conversations, when we were discussing assert, that a user would be able to specify import whatever from [unable to transcribe] assert type equals WASM, and that would give them a WASM binary. And I don't understand how we can teach people what they would get. Why is [unable to transcribe] things by specifying this an evaluator or what? And an evaluator could be here for anything besides WASM, so I'm not sure why we need both evaluator and assertion. Maybe it's just the way that it's presented in this API currently. Maybe this is exactly what Mark is talking about. I just don't understand what's going on. GB: Yeah, it is unfortunately verbose, in that it would be import X from specifier assert type WASM. We could potentially call it "as module" if we could unify on a generic kind of module definition of what it means to have `as module`, if we could more strictly define that. But for now, we're calling it a module. So yeah, you would say import X from specifier, assert a type of WASM, as WASM module, to be able to get this kind of compiled but unlinked, uninstantiated module form. That would basically just be the spell to cast to load some WebAssembly, and it would effectively become a standard pattern for that, because it has these benefits over fetch and compile streaming with CSP. And the import assertions potentially have some, I mean, we still need to decide if – there's some interesting CSP questions that are kind of unrelated, so I don't want to get into that now. -JRL: Okay, I can bring it on GitHub. Instead. We could have a better discussion for sure. +JRL: Okay, I can bring it up on GitHub instead. We could have a better discussion there for sure. GB: Thank you. Please do. @@ -467,16 +484,18 @@ YSV: I also support Stage 1. RPR: All right, we've had no objections. So congratulations. You have Stage 1. Thanks so much. All right. Thank you, everyone.
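The programmatic instantiation pattern GB contrasts with declarative module imports can be sketched with the standard `WebAssembly` JS API. The inline bytes below are a hand-assembled toy module (not from the proposal) that imports a single function `env.log` and exports `run`; the second argument to `WebAssembly.instantiate` is the import map from specifier names to namespaces, so the embedder controls linking directly, independent of host resolver conventions:

```javascript
// A toy WebAssembly module, hand-encoded for illustration only:
// it imports one function (env.log) and exports run(), which calls it.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,                   // type 0: () -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,             // import module "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                   //   field "log" (func, type 0)
  0x03, 0x02, 0x01, 0x00,                               // one local func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run" -> func 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b,       // body: call 0; end
]);

async function main() {
  const module = await WebAssembly.compile(bytes);
  let calls = 0;
  // Imports are supplied as a plain map when instantiating; nothing
  // here depends on URL-based or host-specific module resolution.
  const instance = await WebAssembly.instantiate(module, {
    env: { log: () => { calls++; } },
  });
  instance.exports.run(); // execution happens only when we choose to call it
  return calls;
}
```

Compiling and instantiating are separate steps here, which is exactly the "catch it before linking and initialization" stage MM describes.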
### Conclusion/Resolution -* Stage 1 + +- Stage 1 ## Agenda deadline rule clarifications + Presenter: Rob Palmer (RPR) RPR: So about a week or so ago, there was a question on adding topics to the agenda for these meetings. So, a meta-process thing. And as part of that, the question came up, I think from Ron, about the meaning of what it means to submit something to the agenda, as in: have you made the cutoff in time or not? Originally the language was not entirely clear about what it means to get something onto the agenda. The key question is: is it okay to just get the PR open, to raise the PR in time, or does it actually need to be merged by the deadline? JHD clarified this - we talked about it on the TC39 delegates Matrix channel - and it was clarified to say that an [un-merged PR still counts for the purposes of being added](https://github.com/tc39/agendas/commit/d2ef80976759f763eaf621b851479753e29b081c#diff-0b87e2fc7748588525a23909f36542c8244da7bf86fe1e93ee9715e549f7944b). You can see the wording there. The wording explicitly says, "Note: an unmerged PR counts as added for the purposes of this requirement". So we're raising it here for awareness, so people have the opportunity to discuss or to object, if you think instead that we should say, "no, the PR must have merged by the deadline". As part of the explanation for this: normally, PRs submitting something to the agenda get merged fairly quickly - I would say normally within 24 hours - and as we approach the deadline there is even more attention on it, so things generally get merged within a few hours. So I think that for all practical purposes, any PR that was open but not merged at the deadline would ordinarily get merged within the next 12 hours. And so if anyone has any questions – I see that there's SYG on the queue. SYG: Yeah, can't all delegates already push to master? Why not just do that? Like, who's the PR workflow for?
-RPR: I think it's mostly just to protect the integrity of the document. So people don't make accidental mistakes and people obviously are putting things on it in an order. So it is appropriate for someone to check that things go order +RPR: I think it's mostly just to protect the integrity of the document, so people don't make accidental mistakes; and people obviously are putting things on it in an order, so it is appropriate for someone to check that things go in order. JHD: So, I can speak to that please. The reason that we've retained the ability to push directly to master is because folks like the convenience of it, and the reason that we have, I don't know if it's required or preferred, pull requests for some kinds of changes is because it notifies people in a way that pushing a commit does not. So generally speaking, if you're adding something to the agenda after the deadline, that's the point when people may have looked at it for the last time. The convention we followed is to use a pull request for anything like adding a new item or, you know, changing whether you're asking for stage advancement, things like that, so that people are aware of it. Having the PR merged is not a requirement for people to be aware of it. That was the point of the deadline - so that people are aware of it in time to be able to review it. So the way we've always treated it in the past - the actual precedent followed - is that as long as people are notified before the deadline, then the requirement, spiritually at least, is satisfied. So that was why I went ahead and updated the requirement in the agenda to note that the PR doesn't have to be merged. So you're correct that someone could just merge their own PR, but sometimes there's merge conflicts and, as Rob said, sometimes there's a few of them open within 12 hours and they all usually get landed. Does that answer your question?
@@ -499,12 +518,13 @@ JHD: So, the challenge at that is, I mean we all have a personal preferences of RPR: Okay, I don't think we need to talk more about the details here. I think we've achieved awareness. MF said he thought it's painfully obvious that a PR was already sufficient. So, we've made sure that this is formally recognized; but of course, with all of this, anyone who has more suggestions will always be welcome to talk about this in future. So, thank you for your time today. ## Function Helpers + Presenter: J. S. Choi (JSC) - [proposal](https://github.com/js-choi/proposal-function-helpers) - [slides](https://docs.google.com/presentation/d/1MShu-uA_gz1LDpmlckQ9Wgsb0ZLylYV0QWZBnsTAOGk/edit?usp=sharing) -JSC: I’m going to go through the slides quickly. Everyone can read the details that they want. There's also an explainer. +JSC: I’m going to go through the slides quickly. Everyone can read the details that they want. There's also an explainer. JSC: This is a proposal for Stage 1. The concept is that there are a lot of common useful helper functions that are defined a lot, used a lot, downloaded from npm a lot. We should standardize at least some of them. So I'm seeking consensus for Stage 1 that standardizing at least some of the helpers I'm going to list here is at least worth investigating. It is not a proposal seeking to standardize every imaginable helper function, just some selected frequently used ones. Choosing *which* ones out of the bag that I'm going to present, I consider to be bikeshedding for before Stage 2. Stage 1 would be “worth investigating”, and we would decide which ones, and whether to make them static methods or methods on Function.prototype. @@ -512,11 +532,11 @@ JSC: Yeah, some people might have philosophical questions. Like: why bother with JSC: Everyone needs to manipulate callbacks. This isn't a matter of like, oh, we're trying to make things better for hard core functional programmers.
I think that everyone needs to manipulate callbacks and these are pretty simple affordances for doing them. -JSC: Why can't they just define them on their own? It's a matter of ergonomics. When we standardize it, we can readily use it in the context of the developer console or a script instead of pasting a definition, or as a lot of people actually do: download or bring in an external dependency that has this little function. +JSC: Why can't they just define them on their own? It's a matter of ergonomics. When we standardize it, we can readily use it in the context of the developer console or a script instead of pasting a definition, or as a lot of people actually do: download or bring in an external dependency that has this little function. -JSC: I'd also argue that it would improve code clarity. A lot of these functions have all sorts of different names. Standardizing one name would be great. And—even for simple functions like identity/constant, whatever—a lot of people, myself included, think that a standardized name would be simpler than an inline function Definition like having one word versus having three tokens or whatever. +JSC: I'd also argue that it would improve code clarity. A lot of these functions have all sorts of different names. Standardizing one name would be great. And—even for simple functions like identity/constant, whatever—a lot of people, myself included, think that a standardized name would be simpler than an inline function Definition like having one word versus having three tokens or whatever. -JSC: And unlike new syntax, these are all API. This is all pretty lightweight stuff, lightweight ways to improve the experience of all developers. So like that picture there. I think that all of these are cowpaths and at least some of them deserve being paved. This isn't syntax. It's all API. They are all possibilities. And like I said earlier, choosing which ones to bring forward in this proposal would be, bikeshedding before Stage 2. 
we could punt some of them to separate proposals, if they are really controversial, whatever I'm asking for Stage 1, whether it's worth considering standardizing a bunch them. I'm going to go through these possibilities really quickly. Remember they’re possibilities. +JSC: And unlike new syntax, these are all API. This is all pretty lightweight stuff, lightweight ways to improve the experience of all developers. So, like that picture there, I think that all of these are cowpaths, and at least some of them deserve to be paved. This isn't syntax; it's all API. They are all possibilities. And like I said earlier, choosing which ones to bring forward in this proposal would be bikeshedding before Stage 2. We could punt some of them to separate proposals if they are really controversial, whatever. I'm asking for Stage 1: whether it's worth considering standardizing a bunch of them. I'm going to go through these possibilities really quickly. Remember, they’re possibilities. JSC: For instance, this is a function composition thing. You'd put it as a property on the Function global object; it'd probably be a static method. Lots of people use this. There's plenty of real-world examples in the explainer. All of these have real-world examples that I've looked for and found in the wild. And in this case, this would compose functions, so that you give this function a list of functions and it creates a function that applies whatever argument it gets to the first function, then the result of that to the second function, et cetera. And there could also be an async version that would support promises and always return a promise. The reason why this proposal calls it flow is because this composes from left to right, which seems to be the preference of most JavaScript developers, rather than the right-to-left compose operations that you see in hardcore functional languages that resemble mathematics. We can quibble on the name.
TAB floated the idea of having a pipe function too. Yes, there is a pipe operator; this is different. I am one of the champions of the pipe operator proposal. As many of you know, there's been a lot of community feedback from developers who have desired standardized unary function application and are unhappy with the topic/placeholder syntax of the Hack-style pipe operator that moved forward to Stage 2. They seem to be made happier by the presence of a standardized pipe function. I don't really have much of an opinion on whether to include both flow and pipe. pipe is flow, except that its first argument is the input to the functions. So pipe is an application, while flow creates a function; but either way, you're also including a list of function callbacks and then sequentially applying them to something. It's just a matter of whether you're doing it now or later. @@ -528,7 +548,7 @@ JSC: Once. This creates a function from a callback that makes sure the callback JSC: Debounce/throttle. Very popular too. Lots of end-user-facing, HTML-manipulating code uses these to control how often a function actually gets called based on some event or something like that. They're both useful; there have been plenty of articles written about why both are useful. I think we should consider adding them to the core language. -JSC: There's something called `aside` or `tap`. It's just something that creates a function from another function, that that makes it like, it runs it as a side effect. +JSC: There's something called `aside` or `tap`. It creates a function from another function that runs that function as a side effect.
And then it returns the original input and that can be useful for debugging or it's like interposing some sort of side effect like printing to the console or something in the middle of the nested statement or a long chain or something like that. JSC: And there's also unThis. People also call this uncurryThis, callBind, whatever. It's just basically converting a function that uses the `this` binding into a function that doesn't. The first argument of the new function would be plugged into the original callback’s `this` receiver. Again, these are all just possibilities. I'm asking for Stage 1. @@ -542,7 +562,7 @@ JSC: Okay. Thank you. you. Next up, CZW. CZW: Yeah, I'm just saying that I found many of them. There are two helpers that can be replaced by arrow functions with less characters to type. I don't find building these into the language would help ergonomics. but that's the type of question about what functions to be included in the proposal right now? -JSC: I'm arguing that including which functions to include that question would be bikeshedding for before Stage 2 and Stage 1 would be that: It's worth investigating like just adding helper functions function. Like some of these are one-liners, some of them aren't for the ones whether to include the ones that are, I would say, I'm arguing right now is a pre-stage-two concern, but as for your observation that it's actually shorter, some of these are actually shorter if you use Arrow functions. While that's true, when it comes to length, I argue in the code Clarity, heading on this on the side N Slide I'm showing right now that that a lot of people, myself included think that it can be clear if, if we Just use one word rather than three words to create it. So yeah, like The Arrow function might be visually shorter, but me, it's actually conceptually longer so to speak. It's more words rather than just one word or or to it. 
And as for constant, like, for instance, it makes it clear, you're creating a constant function for instance, but we can bikeshed over that in the future. +JSC: I'm arguing that the question of which functions to include would be bikeshedding for before Stage 2, and Stage 1 would be that it's worth investigating just adding helper functions. Some of these are one-liners, some of them aren't; whether to include the ones that are, I would say, is a pre-Stage-2 concern. But as for your observation that it's actually shorter: some of these are actually shorter if you use arrow functions. While that's true when it comes to length, I argue, in the code-clarity heading on the slide I'm showing right now, that a lot of people, myself included, think that it can be clearer if we just use one word rather than three words to create it. So yeah, the arrow function might be visually shorter, but to me it's actually conceptually longer, so to speak: it's more words rather than just one word. And as for constant, for instance, it makes it clear you're creating a constant function, but we can bikeshed over that in the future. WH: Every time you use `x=>x` you get a new function. Whereas there is only one `Function.identity` function. @@ -552,7 +572,7 @@ WH: I'm saying that's a reason to use `Function.identity` instead of `x => x`. JSC: Oh, yes. That is also true. It also avoids allocating or creating a new function every time. That is also true. -YSV: I'll be quite direct here. I will block this from Stage 1 because it does not have a problem statement. So the statement that there are lots of libraries that are popular and those functions are common isn't enough on its own to satisfy the problem statement requirement of Stage 1.
There are even from what you've presented there are some groups that are worth investigating, for example, flow async and flow and maybe even pipe and pipe async. Those can be seen as sort of belonging to the same category of problem. However, they're very different from constant and identity which are very different from other parts, from other helpers that you've proposed here. The reason why I believe it's very important to tighten the problem statement here is because later on when we're reflecting on this proposal and you know, things can change. To make sure that we don't lose track of which problem statement we are trying to figure out - which has happened with proposals - I think it's very important that we are clear about what we're solving for users. So I'm not against the idea of introducing function helpers, but this should be split up into tighter proposals. +YSV: I'll be quite direct here. I will block this from Stage 1 because it does not have a problem statement. The statement that there are lots of libraries that are popular and those functions are common isn't enough on its own to satisfy the problem statement requirement of Stage 1. Even from what you've presented, there are some groups that are worth investigating, for example flowAsync and flow, and maybe even pipe and pipeAsync; those can be seen as belonging to the same category of problem. However, they're very different from constant and identity, which are very different from the other helpers that you've proposed here. The reason why I believe it's very important to tighten the problem statement here is because later on, when we're reflecting on this proposal, you know, things can change. To make sure that we don't lose track of which problem statement we are trying to figure out - which has happened with proposals - I think it's very important that we are clear about what we're solving for users.
So I'm not against the idea of introducing function helpers, but this should be split up into tighter proposals. JSC: Okay, that's super fair. I am committing to splitting this proposal up. I'll keep this at Stage 0 like and archive it in the TC39 organization, but I will re-present, probably one at a time several tighter proposals probably in the future. Probably flow and flowAsync will be first. Does that satisfy you YSV? @@ -572,7 +592,7 @@ JSC: I will remark that once. So, although this is bike-shedding, once, bounce, JHX: It's just a simple question about why it's just `uncurryThis` that's a prototype method. -JSC: Are you asking about `unThis`, and `debounce` and `throttle`. Also? +JSC: Are you asking about `unThis`, and `debounce` and `throttle`. Also? JHX: Yeah, so yeah, I'm not sure. What's the rule behind that is. @@ -582,15 +602,15 @@ JHX: Okay, we can discuss that in the issues. JWK: Debounce and throttle cannot be specified in language, unless we add a host hook for it, but the idea was explicitly rejected by the engine when I tried to propose Promise.delay. Maybe you should remove those time related functions. -JSC: All right. So since those will get separate, those two will get set up separate proposals. We can examine that there when I do that. I really would appreciate it. If you could, I'll check it out too. When you try to propose, promise that delay, if that's a hard block from the engines, then that would be great to know them. Looks like SYG giving a plus one. All right. +JSC: All right. So since those will get separate, those two will get set up separate proposals. We can examine that there when I do that. I really would appreciate it. If you could, I'll check it out too. When you try to propose, promise that delay, if that's a hard block from the engines, then that would be great to know them. Looks like SYG giving a plus one. All right. SYG: Yeah, I would save you some time. 
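The `unThis`/`uncurryThis` helper under discussion can be sketched in one line (the name and its placement are among the unsettled possibilities; this is an illustration, not the proposal's definition):

```javascript
// Hypothetical sketch: converts a function that uses its `this` binding
// into one that takes the receiver as an ordinary first argument.
const uncurryThis = (fn) => (receiver, ...args) => fn.apply(receiver, args);

const hasOwn = uncurryThis(Object.prototype.hasOwnProperty);
hasOwn({ a: 1 }, 'a'); // true, with no method lookup on the object itself
```

This pattern is commonly used defensively, since it keeps working even if the object has a null prototype or shadows the method.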
-JWK: I have some more concern about adding time to the language, it might violate SES requirements that allow the program to observe time. +JWK: I have some more concern about adding time to the language: it might violate SES requirements by allowing the program to observe time. JSC: All right. Okay, good to know. We'll take a look at that. But since SYG was giving pretty strong signals that he would block debounce and throttle in the core language, we'll probably prioritize those way low compared to everything else I’m bringing up now. I'll look into this more. -JWK: When I write code in TypeScript, I want to make sure my code exhausts all possibilities of a variable and if every type possibility is exhausted, the variable will become type `never`. I wrote this function in my code base manytimes. If I add a new possibility of this variable, And it will no longer become type `never` and have a compile error. +JWK: When I write code in TypeScript, I want to make sure my code exhausts all possibilities of a variable; if every type possibility is exhausted, the variable will become type `never`. I wrote this function in my code base many times. If I add a new possibility to this variable, it will no longer become type `never`, and I will get a compile error. ```ts let x: 1 | 2 = 2 @@ -618,9 +638,9 @@ JSC: [If unreachable is a new function, we can talk about it on a new issue](htt SYG: Looked at web compat risk? -JSC: The answer is, no, I haven't looked too hard at the names yet. And whether there's web compact risk with the names I chose. +JSC: The answer is no, I haven't looked too hard at the names yet, or whether there's web compat risk with the names I chose. -SYG: Given that these are kind of directly motivated by being very popular NPM packages. At least we have a starting point there to see how they install these methods to see if there is a risk.
+SYG: Given that these are kind of directly motivated by being very popular npm packages, at least we have a starting point there to see how they install these methods, to see if there is a risk. JSC: As far as I can tell, none of them monkey-patch any intrinsic prototypes; they're all on, like, the jQuery wrapper objects or the jQuery global, or they're on Lodash's `_`, or they're imported from a module. I have not found any intrinsic monkey-patching in the real-world examples that I brought into the explainer. I can tell you that. @@ -631,10 +651,13 @@ CM: So one of our meta concerns is always about adding complexity into the langu JSC: All right. Your point is appreciated. Thank you, CM. I will just say that there is a meta-level, philosophical question of whether developer ergonomics is worth the burden of adding a function to the core API. How much is the benefit to developer ergonomics of being able to reach for this thing easily, versus having to remember the standard name of this thing, versus directly defining an arrow function? Not that including this in the language will force everyone to stop using the arrow-function version, but a lot of people, myself included, think that a lot of these make the code clearer, like having a single word. So I think it would be great if we could reach for them without defining them ourselves or bringing in external dependencies. But that is a meta-level thing; it applies to only some of them and not others, and I am committing to splitting up this proposal. JSC: All right, it looks like CZW gave a +1. Queue is empty. I'm already committing to withdrawing this proposal and bringing it back split up. I plan to bring pipe and flow functions first. Does anyone else have any comments before I end the presentation? [silence] All right. Sounds good.
Yeah, I'm splitting up the proposals and I will see you all next time. All right. Thank you very much. + ### Conclusion/Resolution -* JSC to split into multiple proposals and bring back + +- JSC to split into multiple proposals and bring back ## Temporal (overflow) + Presenter: Justin Grant (JGT) - [proposal](https://github.com/tc39/proposal-temporal) JGT: So we left off here yesterday, and we tacked on a few more slides; PFC can chime in if needed. Hopefully he's awake this morning, but I'll just get going regardless. So of course I've had sequels on my mind, so this is our sequel today, our overflow, and my bad Photoshop skills. The first item is actually new: while we were meeting yesterday, FYT found another spec bug, so I figured I'd bring it in here, like the other bugs that we discovered after the deadline. If folks are interested in delaying this to take more time to look at it, that's OK. I'll describe it real quickly: when you convert an input to a ZonedDateTime, one of the steps in that conversion is to compare the offset and the IANA time zone name, to make sure that they're compatible. A good example is this string in the code sample here: it will compare the minus-7 offset against Europe/London and realize that that's wrong. (Although I wish it weren't wrong, so that I wouldn't be so tired, that's the current behavior.) And in addition to passing a string, you can pass a property bag with the same values, and in that case, if they're valid values, like plus 1 for the offset, then it will work fine. And so the spec as it's written today will tolerate an invalid offset, not because there's no logic in the spec to deal with the offset, but simply because there's a missing line in the spec to read the offset from the user's input in the first place.
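The offset-versus-time-zone compatibility check being described can be illustrated with a small standalone helper (purely illustrative; these function names are made up and this is not Temporal's actual spec algorithm). It derives a zone's real UTC offset at an instant via `Intl.DateTimeFormat`, then compares it against the offset an input claims:

```javascript
// Compute a time zone's UTC offset (in minutes) at a given instant,
// by formatting the instant in that zone and diffing against UTC.
function zoneOffsetMinutes(epochMs, timeZone) {
  const parts = Object.fromEntries(
    new Intl.DateTimeFormat('en-US', {
      timeZone, hourCycle: 'h23',
      year: 'numeric', month: '2-digit', day: '2-digit',
      hour: '2-digit', minute: '2-digit', second: '2-digit',
    }).formatToParts(epochMs).map((p) => [p.type, p.value]),
  );
  const asUtc = Date.UTC(
    parts.year, parts.month - 1, parts.day,
    parts.hour, parts.minute, parts.second,
  );
  return Math.round((asUtc - epochMs) / 60000);
}

// Reject an input whose claimed offset contradicts its named zone,
// mirroring the kind of compatibility check being discussed.
function offsetMatchesZone(claimedOffsetMinutes, timeZone, epochMs) {
  return zoneOffsetMinutes(epochMs, timeZone) === claimedOffsetMinutes;
}

const instant = Date.UTC(2021, 0, 25, 12);
offsetMatchesZone(-7 * 60, 'Europe/London', instant); // false: London is +00:00 then
offsetMatchesZone(0, 'Europe/London', instant);       // true
```

A minus-7 offset paired with Europe/London fails the check, which is the mismatch the spec fix is meant to catch.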
And so Frank found this and submitted a PR for this yesterday, and at the beginning of these slides we're going to ask for consensus on this change. -JGT: Next is a continuation from the discussion yesterday around what should we do when there are no required options submitted to the round method. And just from mining the chats in the delegates channel, it looks like there are three choices here, three options options. One is, call round and return the identity if an empty object is passed, but then have round throw or both of them could return identity or both of them can throw. And so, the current Temporal Stage 3 proposal chose that both of them throw case first. Because it's a no-op. And almost certainly a bug on the programmers part and also to defend against misspelling a required property, like in this case, `smallestUnit`, because this would be interpreted as an empty object by bicycle [?]. So obviously this doesn't catch every bug, right? You can still misname an optional property and we wouldn't catch it. But from our perspective catching some bugs is better than catching no bugs. And so that's how we ended up where we are. So the question for folks is, is there a compelling reason to change the current behavior we have today? And at that all was handed over to feedback. I'm not looking at the queue, so I'm screwed. +JGT: Next is a continuation from the discussion yesterday around what we should do when there are no required options submitted to the round method. And just from mining the chats in the delegates channel, it looks like there are three choices here: one is, call round and return the identity if an empty object is passed, but then have round throw; or both of them could return identity; or both of them could throw. And so the current Temporal Stage 3 proposal chose the "both of them throw" case, first because it's a no-op.
It's almost certainly a bug on the programmer's part, and it also defends against misspelling a required property, like in this case `smallestUnit`, because that would be interpreted as an empty object by bicycle [?]. So obviously this doesn't catch every bug, right? You can still misname an optional property and we wouldn't catch it. But from our perspective, catching some bugs is better than catching no bugs, and that's how we ended up where we are. So the question for folks is: is there a compelling reason to change the current behavior we have today? And with that, I'll hand it over for feedback. I'm not looking at the queue, so I'm screwed. WH: Strongly prefer choice 1. To address the case of name typos, most people will just be using the string version of it instead of an option bag, so they won't have to spell the property name. Choice 1 lets you reuse the same options bag for multiple kinds of calls, and that I see as a compelling reason. -JWK: I agree to change behavior if it’s the right thing to do. We still have a chance to fix the design instead of shipping it to the users.. +JWK: I agree to changing the behavior if it’s the right thing to do. We still have a chance to fix the design instead of shipping it to the users. MAH: Yeah, I mean, for the three options I would say two or three if it's needed, but in general an empty object or no object for config should be equivalent, in my opinion. -USA: JGT, or WH. Would you like to respond to that? We have nothing else in the queue? +USA: JGT or WH, would you like to respond to that? We have nothing else in the queue. WH: Yes, the issue is that the first argument is overloaded. It's going to be either an object or a string. I might agree with you if it weren't for the overload, but the overload makes a difference here. USA: JGT, do you want to go ahead? -JGT: Yeah, I don't have a strong opinion either way.
My inclination is, if there's a consensus on the Committee to do one of these things then we'll do one of these things, but I don't I don't know enough about the process to understand how we would measure whether that consensus exists,
+JGT: Yeah, I don't have a strong opinion either way. My inclination is, if there's a consensus on the Committee to do one of these things then we'll do one of these things, but I don't know enough about the process to understand how we would measure whether that consensus exists.

-USA: What you can do is you can propose an option, and you can ask for a consensus on that.
+USA: What you can do is you can propose an option, and you can ask for a consensus on that.

JGT: Certainly from the proposal champions' perspective, we would prefer what we already have because it's already been approved, and from our perspective it would need a pretty strong consensus to change that. So I would certainly propose consensus for number 3 and see if that goes. But again I sort of want to defer to folks who are more familiar with the process than I am.

@@ -690,15 +713,15 @@ WH: Yes, I would like to propose option 1. I consider this a bug fix. The whole

JGT: Just to clarify, we're not— I think our goal is, we want something that there is consensus for. We're not saying, oh, number one is awful. But rather, you know, that there is a status quo; if we are going to change it, it should be something that there is consensus on the Committee for. We're not pushing for number one, but we will accept it if that's the consensus. We'll accept number two, and we'll accept number three. We just want the Committee to make that choice.

-USA: So, we could ask for consensus on option one being the decision right away or we could do a temperature check or something like that if you prefer that.
+USA: So, we could ask for consensus on option one being the decision right away or we could do a temperature check or something like that if you prefer that. JGT: I'm fine with whatever makes sense. Okay, then maybe WH, would you like to explicitly ask for consensus, for option one being the choice? WH: So, I would like to ask for consensus for option one. -USA: Let's see, if somebody objects. It doesn't seem so. So I think, yeah, nobody objects to option one. Justin, you have your choice? +USA: Let's see, if somebody objects. It doesn't seem so. So I think, yeah, nobody objects to option one. Justin, you have your choice? -SYG: I still have a clarifying question. I remember yesterday's presentation, the weirdness for round, was with Duration.round(). Is this for Duration.round() or for all round() methods? +SYG: I still have a clarifying question. I remember yesterday's presentation, the weirdness for round, was with Duration.round(). Is this for Duration.round() or for all round() methods? WH: This will be for all `round` methods. @@ -708,15 +731,15 @@ WH: Would you prefer option 2? JHD: I think option 2 or 3 are more consistent. I think that for the other round() methods option three is the only one that makes sense because they have one required thing. And if that required thing isn't there, it makes sense to throw. And yeah, I hear the argument that, let's just have return identity when the required thing isn't there, which means it's no longer required. But then, in that case, the function has a length of zero because zero required items and then calling it with no arguments must not throw. But yeah, I just talked myself into thinking that option one doesn't really ever make sense because having a required argument that's an empty object doesn't make any sense to me. -WH: The misconception is that `round` always needs to round to something. 
I would imagine the code being structured as somebody passing in some options, which gets distributed to a bunch of Temporal functions. If you're writing generic code, it's a real hassle to call `round` or not depending on whether somebody wants rounding behavior or not. It's much easier to just pass around some options bag you get from your caller, and this lets the caller control if you're rounding or not. +WH: The misconception is that `round` always needs to round to something. I would imagine the code being structured as somebody passing in some options, which gets distributed to a bunch of Temporal functions. If you're writing generic code, it's a real hassle to call `round` or not depending on whether somebody wants rounding behavior or not. It's much easier to just pass around some options bag you get from your caller, and this lets the caller control if you're rounding or not. JHD: I hear that, but I think that we're weighing the convenience of writing generic code around this method, which I think is going to be very rare. That sort of generic code is already very rare against the likelihood of bugs, but also I think unrelated to the semantics of round(). The function's length, this describes the number of required arguments and in option one, it would have to have a length of 1 because it throws if you give it less than one argument, but for that one required argument to be an empty object, that just makes no sense to me. I think that the intuition that was stated earlier about— I forget by whom— about an empty object and nothing being equivalent, I think that needs to hold. With option 2 the length could be 0, it doesn't require any arguments, and then it makes perfect sense that if you pass an empty options bag, it is the same as nothing and it could be identity. WH: I don't understand that argument. It's just like saying that the functions which take objects should throw if they get empty objects. An empty object is a valid object. 
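(The two positions in this exchange can be sketched in plain JavaScript. These are illustrative functions only, not the actual Temporal API; the names are invented for the sketch.)

```javascript
// Illustrative sketch only — not the real Temporal methods.

// Option 1/3 flavor: the options argument is required, so the function's
// length is 1 and calling it with no argument throws (JHD's length argument).
function roundRequired(options) {
  if (options === undefined) throw new TypeError("options required");
  return options.smallestUnit; // undefined when the bag is empty
}

// Option 2 flavor: an empty bag and no argument are equivalent, so length is 0.
function roundOptional(options = {}) {
  return options.smallestUnit; // undefined means "identity" (no rounding)
}

console.log(roundRequired.length); // 1
console.log(roundOptional.length); // 0

// WH's generic-code scenario: forward whatever options bag the caller gave,
// letting the caller control whether rounding happens at all.
function formatAll(values, options = {}) {
  return values.map(v =>
    options.smallestUnit === undefined ? v : Math.round(v));
}
console.log(formatAll([1.5, 2.4], {})); // empty bag: identity, [1.5, 2.4]
console.log(formatAll([1.5, 2.4], { smallestUnit: "integer" })); // [2, 2]
```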
-JHD: Sorry, to be clear, functions that take an options bag, empty object, an object here. It's totally fine. If they take an empty object because typically most or all of those properties are optional and so, if the properties are all optional and empty object is fine, but then so is no object at all, and all of those are equivalent. So I think that if it's an empty object, like options bags are kind of like named arguments conceptually and an empty object is providing no named arguments conceptually. We don't have to keep going in circles around it. I'm happy to keep explaining it, but every time we go in a circle.
+JHD: Sorry, to be clear: for functions that take an options bag, an empty object is totally fine, because typically most or all of those properties are optional. If the properties are all optional, an empty object is fine, but then so is no object at all, and all of those are equivalent. Options bags are conceptually like named arguments, and an empty object is conceptually providing no named arguments. We don't have to keep going in circles around it. I'm happy to keep explaining it, but every time we go in a circle.

-USA: Okay, so this seems to be coming close to time JGT, you mentioned at the beginning that you don't necessarily need to come to agreement within plenary. So, do you think it's a good idea to take this offline and WH JHD and the champions could discuss this?
+USA: Okay, so this seems to be coming close to time. JGT, you mentioned at the beginning that you don't necessarily need to come to agreement within plenary. So, do you think it's a good idea to take this offline, so that WH, JHD, and the champions could discuss this?

JGT: I'm fine with it. Again, our perspective is, this is sort of a relatively uncommon corner case for the API, it won't destroy anything regardless of which choice is made.
We just want there to be a choice that doesn't come around again. So we're fine with taking it offline. If there does turn out to be a consensus for changing it, we're happy to deploy that consensus. And in the meantime, we will stick with the status quo. @@ -726,12 +749,16 @@ USA: You could discuss this in more detail on the issue tracker and come to some WH: I'm not sure what there is more to say about this. We’ve been going around in circles. -USA: You need to come to an agreement with JHD in some way, right? I am not exactly sure about what the process says, but in case of disagreement, I think the status quo unfortunately for you is going to remain for now. Let's continue this offline. Thank you JGT. +USA: You need to come to an agreement with JHD in some way, right? I am not exactly sure about what the process says, but in case of disagreement, I think the status quo unfortunately for you is going to remain for now. Let's continue this offline. Thank you JGT. JGT: One quick thing is I did want to ask for consensus, for FYT's bug fix. Can I get consensus for this bug fix here? Any objections? All right. I'm not hearing any so we're done. + ### Conclusion/Resolution -* consensus only on mentioned bugfix + +- consensus only on mentioned bugfix + ## Evaluator Attributes (continued) + Presenter: Guy Bedford (GB) - [proposal](https://github.com/lucacasonato/proposal-evaluator-attributes) @@ -747,11 +774,11 @@ JHD: Yeah, I can just talk for a minute. So I am looking at slide 2. So what I w GB: Yeah. So this was something Daniel brought up earlier as well and was just really good at getting a clearer idea of that exact use case. I did go through it quite carefully, but with the WebAssembly integration that provides exactly that model. But the argument being that there are multiple conventions that have to kind of line up for that to give you the exact right execution. 
You often have to assume that the resolve is going to resolve these things correctly, that runtime execution model is going to work correctly and with WebAssembly we also have the fact that you've got to make sure that you're using the right memories and and getting all these things to line up the example just to show the slide. Is this a WASI example, where you have to pipe through a lot of context in order to get that start to end execution. Yeah this does that. Answer the question without trying to dig it up too much. -JHD: I mean, why can't you pass the specifier instead of the foo module here? +JHD: I mean, why can't you pass the specifier instead of the foo module here? GB: You mean into `WebAssembly.instantiate`. -JHD: Like, theoretically, could there be an API that takes the specifier and import.meta URL or something similar? Why does the JavaScript module need to import the foo module in order to do this? +JHD: Like, theoretically, could there be an API that takes the specifier and import.meta URL or something similar? Why does the JavaScript module need to import the foo module in order to do this? GB: There's a bunch of reasons. I would be doing it an injustice to be able to clarify that too simply. But basically one of the biggest benefits is, in theory, we lay the groundwork for CSP integration because you have this declarative mechanism. It's also better for bundlers, potentially. @@ -759,41 +786,41 @@ JHD: So, I guess. Are you envisioning any? So just a quick side note. I think th GB: So to be clear, the name isn't final, but you are specifically in this ten-minute slot. I just wanted to verify those previous concerns around the reinterpretation and specifically you mentioned that with import assertions. There were some unexpected results. Of that reinterpretation, question and just just to see that, that isn't related to what we're doing here. Or see if we can make sure that those aren't those concerns aren’t coming up again. 
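(For context, a rough sketch of the kind of import under discussion. This is speculative: as GB notes, the name isn't final, and the `as "wasm-module"` form, file name, and imports object here are illustrative only.)

```js
// Speculative sketch — syntax and attribute name are not final.
// Import the compiled-but-unlinked WebAssembly module (an earlier stage of
// the module pipeline), then link and instantiate it explicitly:
import fooModule from "./foo.wasm" as "wasm-module";

const memory = new WebAssembly.Memory({ initial: 1 });
const instance = await WebAssembly.instantiate(fooModule, { env: { memory } });
```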
-JHD: So let me summarize that real quick. Even if I have to import lines one after another, whether they're static or dynamic, they pull in the same specifier. I should not be just like I cannot use the in assertion to get two different conceptual things. All I can do is get it twice or get an error out of the assertion one. Similarly, I would expect with these attributes that I would get the same conceptual thing. It seems like you're only proposing the value of the WASM module right now.
+JHD: So let me summarize that real quick. Even if I have two import lines one after another, whether they're static or dynamic, they pull in the same specifier. I cannot use an assertion to get two different conceptual things; all I can do is get it twice, or get an error out of the asserted one. Similarly, I would expect with these attributes that I would get the same conceptual thing. It seems like you're only proposing the value of the WASM module right now.

SYG: Sorry to interrupt but it may help to give MM’s framing of this, which I found helpful. So MM has framed this not as different interpretations, but as letting you choose the representation that you want at which stage of the module processing pipeline. So in this case, importing it as a WASM module, instead of a WASM instance, gives you the representation before linking and instantiation. So the representation is always WASM throughout the whole pipeline. That doesn't change. You cannot opt into a different interpretation, but you can choose where in the pipeline you want the import to happen.

JHD: That is helpful. How does this interact with module blocks?
-MM: That's exactly the issue that I was bringing up, which is I think that the staging issue applies to JavaScript well, as well as it applies to all the things we might want to bring it to you to bring into the module graph and able to import each other the static module records and module blocks are already both the static equivalent for JavaScript. They only contain their own pre-compiled modules of XS. All of these have just the compiled information from a single Source text without any linkage information. And what it's saying: it's not choosing a representation. It's not choosing a different interpretation is just, reifying in early as show us reifying, an earlier stage of the processing pipeline. So, what you have is something that still needs to be linked and initialized. So the remaining stages in the processing pipeline still need to happen before you have a module instance.
+MM: That's exactly the issue that I was bringing up, which is that I think the staging issue applies to JavaScript as well as to all the things we might want to bring into the module graph and enable to import each other. Static module records and module blocks are already both the static equivalent for JavaScript, as are XS's pre-compiled modules. All of these have just the compiled information from a single source text, without any linkage information. It's not choosing a representation, and it's not choosing a different interpretation; it's just reifying an earlier stage of the processing pipeline. So what you have is something that still needs to be linked and initialized. The remaining stages in the processing pipeline still need to happen before you have a module instance.

JHD: Okay, so, let's imagine we're in a world where there is an evaluator attribute that applies to JavaScript modules, like source, once you type it out.
MM: Part of my point is that it needs to be. We did not frame as an evaluator [?] after that.

-JHD: I understand. I'm just trying to understand this proposal. Okay. It's something that applies to JavaScript modules. Obviously, in my parent and child example from earlier, if I have a console log statement, both [?] and I import them. Normally, I will get the console logs in child first and then parent. The employee [?] seems obvious here? If I use something that has similar semantics as these waves [?], a module thing that I wouldn't see any console log statements because none of the runtime code is evaluated just jump in and correct me if that's wrong.
+JHD: I understand. I'm just trying to understand this proposal. Okay. It's something that applies to JavaScript modules. Obviously, in my parent and child example from earlier, if I have a console log statement, both [?] and I import them. Normally, I will get the console logs in child first and then parent. The employee [?] seems obvious here? If I use something that has similar semantics as these waves [?], a module thing, I wouldn't see any console log statements, because none of the runtime code is evaluated. Just jump in and correct me if that's wrong.

-MM: Yeah, but then the modules themselves, don't know how to link the modules themselves. Know what their linkage demands are. But it's up to the linkage context in which you're linking them to provide the import namespace.
+MM: Yeah, but then the modules themselves don't know how to link. The modules themselves know what their linkage demands are, but it's up to the linkage context in which you're linking them to provide the import namespace.

JHD: So what happens when I get a syntax error in child and then I try to get this as a module-style representation of parent, and then I'm importing parent into my program? If there's a syntax error in the child, and I just normally import the parent, it will crap out because of the syntax error.
What happens if I import parent as an unlinked module?

-MM: What you get is a static module record, you don't, the information that is derived from the source text of the parent. There is no implied linkage to the child. The parent might have in it an import declaration from the child, but that's not that by itself not an association with particular modules, and we have only got it. It's only within a linkage context that it comes to become that, it comes to be associated by the context with some particular child.
+MM: What you get is a static module record: the information that is derived from the source text of the parent. There is no implied linkage to the child. The parent might have in it an import declaration from the child, but that by itself is not an association with particular modules. It's only within a linkage context that it comes to be associated, by the context, with some particular child.

JHD: And so then later I can run something that completes the process on parent and I would get a runtime SyntaxError exception. Then we have just added a major capability to the language, which is the ability to, without eval, put in a parsing error and determine later conditionally if it parses or not – is that something we want to do?

GB: So, just to clarify: in that case, when you import the child as a module, you're still doing the compilation. So you would get compilation errors during the import, and those compilation errors would effectively be stored in the module registry in a way that they would retrigger for subsequent importers like other error records, even though these records are sitting almost in a parallel module map. Because you've got these two separate phases, you would get compilation errors when importing something in compiled-module form.
You would get your runtime errors only when you perform the execution. So yeah, so syntax error. If we were extending this analogy to JavaScript syntax error would happen at that import time, Whereas your execution errors you would get at runtime, but again, we haven't specified anything here for JavaScript and it's like, it's in wide?/WASM? space so I wouldn't want to assume either you can assume what that would look like. So, I just the thing that I find appealing about, this in the context of it being unified across, WASM -JHD: Is that a synchronous or an asynchronous mechanism? +JHD: Is that a synchronous or an asynchronous mechanism? -GB: Asynchronous. Okay, and it would have to be for JavaScript modules as well. Yes, most likely. +GB: Asynchronous. Okay, and it would have to be for JavaScript modules as well. Yes, most likely. -JHD: Okay, then in that case, it would be the same as dynamic import, so it's not actually adding a capability - I was just talking that out. Yeah, so I guess I mean it seems fine as Stage 1. It’s pretty weird to block Stage 1 for, you know, for most things. Anyway, I'm on board with that, but I wanted to be very... I think it's very important that before it goes to Stage 2 that it has use cases that applies to JavaScript and also that there's no way for it to change the conceptual representation of the module that it's just as MM has said and SYG's paraphrase of that. It's here's the, there's like multiple modules steps and it's just a, the first chunk of those steps and it just delays but does not change the remainder of the steps you do. That, that seems like a very critical thing to preserve and obviously we can and probably will rename the proposal to match that. +JHD: Okay, then in that case, it would be the same as dynamic import, so it's not actually adding a capability - I was just talking that out. Yeah, so I guess I mean it seems fine as Stage 1. It’s pretty weird to block Stage 1 for, you know, for most things. 
Anyway, I'm on board with that, but I wanted to be clear: I think it's very important that before it goes to Stage 2 it has use cases that apply to JavaScript, and also that there's no way for it to change the conceptual representation of the module, just as MM said and as SYG paraphrased. There are multiple module steps, and this is just the first chunk of those steps; it delays but does not change the remainder of the steps. That seems like a very critical thing to preserve, and obviously we can and probably will rename the proposal to match that.

GB: Yeah, that requirement I think is a fundamental part of this framing, and the novel aspect that brought it back around from being something that seemed like it could have certain risks associated with it. I would be careful assuming that we can find perfect generalizations to JavaScript, and we did discuss this a bit further. I think there are a lot of conversations to have here and I would love to find a great unified approach. My concern is that we maintain the use cases and the best solutions to those use cases. I'm very clear in this proposal that the driving use case is WebAssembly, and that it could extend to things like JavaScript unlinked imports. I am hesitant to tie the future of the proposal to ensuring those things work out, even though I hope that there could be progress on that front.

JHD: Just to interject real quick, sorry. Let's say that both this and module blocks advance. Would it make sense to have something like a module block?

-GB: There are still a huge amount of questions. I think we wouldn't if we get into the conversations quickly realize that there are lots of subtle semantics that we probably would have to discuss in quite a bit of detail. I guess the first question is, is it useful, right? So, can you do it? Probably? Do you want to do it? it?
Well, that's the important thing to discuss first and foremost, but yeah, it could be expected to do what you want.
+GB: There are still a huge amount of questions. I think if we get into the conversations we would quickly realize that there are lots of subtle semantics that we would probably have to discuss in quite a bit of detail. I guess the first question is, is it useful, right? So, can you do it? Probably. Do you want to do it? Well, that's the important thing to discuss first and foremost, but yeah, it could be expected to do what you want.

JHD: I mean, I would assume it's the same, right: you can create a worker with a specifier, but you could also theoretically, you know, maybe you can create one with the module block and it would be the same use case. Here is where you wanted to do it declaratively and help lenders, bundlers and stuff. So, but yeah, that just seems like sort of cross-cutting concerns that need to be worked out within Stage 1.

@@ -804,9 +831,13 @@ SYG: This is a response to earlier. These petition texts are the timing thing. I

USA: Great, so Guy, is that it then?

GB: Yes, that's all the clarification we needed. Yeah. Thank you. Great. Thank you.
+
### Conclusion/Resolution
-
-* Stage 1 holds
+
+- Stage 1 holds
+
## CoC Update
+
Presenter: Jordan Harband (JHD)

JHD: Yeah, so this is high-level both because JBN isn't here to present whatever she'd prepared and also because I'm tired. Since the last plenary, there was a lot of activity on the pipeline repo; in particular, I think two users have been banned as a result. Per our normal process, both bans are temporary, but some of the actions of one of the users after being banned may extend that.

@@ -816,7 +847,9 @@ JHD: We've had some light discussions about considering maybe adding the "maintain

WH: To clarify, were the banned people Ecma members, or were they outside folks?
JHD: No, as of yet I don't believe we've banned anyone who is an active member or invited expert or a delegate. Like ever, not just recently. I think that's the update.
+
## Incubator Call Solicitation
+
Presenter: Shu-yu Guo (SYG)

SYG: As per normal, we call for volunteer proposals or general topics to be added to the list of incubator calls that we try to do in between plenaries. Last time there was one about proxy performance that Leo had requested, but it ended up not really getting any participants, so that was cancelled. In case there are still interested parties who want to discuss proxy performance, we can bring that back, but I just want to throw that out there before asking for a new set of proposals, because I think there are several very interesting early-stage proposals that could use some -- what's the word -- faster feedback loop.

@@ -825,7 +858,7 @@ SYG: Any interest in proxy performance? I tried to ping several times last time

SYG: Okay, new topics. So we'll go forward. I'll go for volunteers first. Any champion groups of new proposals who would like to resolve some of the possibly controversial issues over an hour call, in between this meeting and the next?

-JHD: If there's anyone outside the Pattern Matching Champion group who would actually show up. We'd love to have a session. But the last time we did, at the incubator call nobody showed up outside the champion group. So I don't know if there's anyone in the meeting who is not in the champion group and is willing to actually commit and follow through with attending.
+JHD: If there's anyone outside the Pattern Matching Champion group who would actually show up, we'd love to have a session. But the last time we did an incubator call, nobody showed up outside the champion group. So I don't know if there's anyone in the meeting who is not in the champion group and is willing to actually commit and follow through with attending.
SYG: Okay, I'll push pattern matching onto the queue, but given, yeah, JHD, what you said last time, we'll see how the doodle goes. Anything else before I start to call out people in Matrix?

@@ -833,11 +866,11 @@ JSC: Okay. Yeah. I have a couple of others who want to talk about bikeshedding t

SYG: To be clear, incubator calls follow the same IPR stuff as TC39. So it's open to delegates and invited experts. I know pipe has a lot of interest from the community; just to be clear, that means you are interested in discussing within TC39 groups.

-JSC: That's right. Although there is one Community member who has been really involved in the proposal, who is not a TC39 delegate, whom we were wondering if we could bring in.
+JSC: That's right. Although there is one Community member who has been really involved in the proposal, who is not a TC39 delegate, whom we were wondering if we could bring in.

USA: So I think the way you can go about it is to have that person on as an invited expert; that way they would sign all the IPR agreements and everything, so it would be okay for them to attend.

-JSC: Alright, we’ll look it up.
+JSC: Alright, we’ll look it up.

SYG: Yeah, there's a form that you can have them fill out. I think we try to make the friction there less over time. Let us know how your experience goes.

@@ -859,7 +892,7 @@ MLS: It's December, 14th 15th.

SYG: Okay, so that probably gives us maybe two or three slots, and I had in mind the function helper grab bag that is typically to be split up, JSC’s proposal. Seems like there were a lot of opinions flying around that could be hashed out if we sat in a VC and just talked to each other, so I'm wondering if folks would be interested to attend and hash out the motivation, I guess to see which of the helpers folks feel strongly should or should not be included.

-JSC: I'm up for that. My current plan is to make repositories devoted first to flow/pipe on one hand, and uncurryThis.
But I am also totally up to a general helper-function incubator call to hear opinions from anyone about helper functions in general. +JSC: I'm up for that. My current plan is to make repositories devoted first to flow/pipe on one hand, and uncurryThis. But I am also totally up to a general helper-function incubator call to hear opinions from anyone about helper functions in general. SYG: If there are no objections to that, but I guess I'm looking for something a little stronger than no objections would. @@ -867,7 +900,7 @@ MF: I would attend. JHD: Okay, I would attend this one. -SYG: Great I will add that on, maybe we'll have time for it, but we'll see by the doodle. I understand the Q4 is usually a quieter time for some folks. Especially European folks, who have better vacation policies than we do. +SYG: Great I will add that on, maybe we'll have time for it, but we'll see by the doodle. I understand the Q4 is usually a quieter time for some folks. Especially European folks, who have better vacation policies than we do. JSC: I was wondering if it might be worth having that against the BigInt Math incubators call. Since there's been a little contention regarding some things, like sqrt. diff --git a/meetings/2021-12/dec-14.md b/meetings/2021-12/dec-14.md index ce7b53fe..5df76b61 100644 --- a/meetings/2021-12/dec-14.md +++ b/meetings/2021-12/dec-14.md @@ -45,7 +45,7 @@ KG: Yes it is! AKI: This will still need note takers to edit and clarify what's being discussed, including marking who is speaking. As a bonus, you get to share the amazing typos and homophones. We should probably have a Twitter account for this. There's some real comedic gems. -AKI: Finally, our next meeting, your next meeting is a traditional three-day meeting. How many days when's the next meeting? +AKI: Finally, our next meeting, your next meeting is a traditional three-day meeting. How many days when's the next meeting? RPR: Four days. 
@@ -147,7 +147,7 @@ JHD: So, the committee has previously decided essentially that namespace objects Only really, what is the proposed content of the toStringTag? Oh, in this pull request. It is the string import meta in Pascal case.

-MM: In what case Pascal-like capital?
+MM: In what case? Pascal-like, capital?

JHD: Yes, capital I.

@@ -211,7 +211,7 @@ MB: We're just giving an update about the regex set notation proposal aka the v

MB: So what I want to focus on today is really the changes since last time we brought this to committee, because we've done it a few times already. I'm not going to tell you things you already know; here's what changed since last time, in terms of the only proposal changes that aren't purely spec fixes, because we've had a few of those as well. So here's the quick overview. We have a dedicated slide for each of these, so we'll go into more detail in a second.

-MB: with the first one has to do with IgnoreCase, which we presented a proposal to change the ignore case, semantics and our proposal, compared to how it works, in the u flag which already exists. And since then we should go and with proposal. The string literal syntax changed again and also there is one point which isn't really a change, but something that we got comprehensive last time we presented some proposal to change the meaning of \d \w and \b, and we got strong pushback during this meeting. So we're not looking to revisit that but we're hoping to clarify what this consensus of last time means exactly at request of some delegates. So let's go over these one by one.
+MB: The first one has to do with IgnoreCase: we presented a proposal to change the ignore-case semantics compared to how it works in the u flag, which already exists, and we've since gone ahead with that proposal.
The string literal syntax changed again and also there is one point which isn't really a change, but something that we got comprehensive last time we presented some proposal to change the meaning of \d \w and \b, and we got strong pushback during this meeting. So we're not looking to revisit that but we're hoping to clarify what this consensus of last time means exactly at request of some delegates. So let's go over these one by one. MB: The first is about the IgnoreCase. @@ -245,7 +245,7 @@ MB: I think I got an answer to the question like on the record like we've been c WH: I agree this has nothing to do with this proposal. -MB: So I think having an answer on the record is good enough, like it's okay to use as a proposal. +MB: So I think having an answer on the record is good enough, like it's okay to use as a proposal. MM: Yeah, sorry if I missed it, but what is v mode? @@ -263,9 +263,9 @@ WH: They can't, unless you introduce a new mode or change the syntax, like we di MED: I mean, if you read the spec, it's `[...?]` and other regular expression engines have done that. They've changed the meaning. I think it's a service to your users, if we say what could or couldn't happen with these things because otherwise people can make wrong assumptions either way. -MB: To clarify, \w \b \d are not going to change for any of the existing flags including the new v flag that we're adding. The only way they could potentially change, I believe is with the introduction of a new flag and Mark. Are you saying that _this_ would be useful to add to the spec just as a note? +MB: To clarify, \w \b \d are not going to change for any of the existing flags including the new v flag that we're adding. The only way they could potentially change, I believe is with the introduction of a new flag and Mark. Are you saying that *this* would be useful to add to the spec just as a note? -MED: Well, I'm just thinking as a user if I see that.. 
The first thing if I'm coming from another environment where \w does actually align with the properties that are expected of words and not just ASCII then I can have a certain confusion. If I want to make my code future-proof, then I also want to know. Can this change under any flag? So that chunks that I write in regular expression might be wrong if some other flag is turned on. That's all I'm saying. And I don't want to rabbit hole on this. I just wanted to raise that concern. +MED: Well, I'm just thinking as a user if I see that.. The first thing if I'm coming from another environment where \w does actually align with the properties that are expected of words and not just ASCII then I can have a certain confusion. If I want to make my code future-proof, then I also want to know. Can this change under any flag? So that chunks that I write in regular expression might be wrong if some other flag is turned on. That's all I'm saying. And I don't want to rabbit hole on this. I just wanted to raise that concern. WH: A flag is not the only thing we were considering. Another thing we were considering was a slightly different syntax for these. So no, I do not want to put speculative things in the spec text saying that we might introduce a flag in the future. @@ -287,7 +287,7 @@ MB: Okay. Thanks. I guess that answers our question. IID: So, it seems like we were sort of talking about two separate issues. Once because we have the flag and we have the set notation and there are a bunch of things that we might want to change if we had a new flag, but also we've decided that the set notation may require a new flag. And I think it would be good to get to like a sort of normative consensus that adding lots of flags willy-nilly is generally a not a thing. 
We want to do like we would prefer to add fewer Flags, especially mode then adding more like we added, I still get confused a lot about forgetting to add the U flag and then if we have a v flag to enable new features and then later we need to add a w flag because we have some tweaks that we want to make to things that seems like we've ended up in a situation. That's a very difficult for non-experts to remember. What is what the learning curve is very Steep and Once I think it's good, if we can minimize the number of new flags that we add. So it seems like it would be good if we could separate these in some way - I kind of want to ask this question later, but it seemed like a good time now. Is there a possibility that we could re-examine exactly why we believe that the set notation proposal requires a new flag and consider whether it's possible to do that in the u mode and then separate any changes that we want to make for a future. Let's do improvements mode into a separate proposal, because I think we're kind of if we're going to add a v flag we should take the time to make sure that all of the little warts like the ignore case are getting fixed. Not just the ones we happen to be thinking of, as we add this and yeah. One last thing I'll say, in the issue on backwards compatible syntax several months ago I asked a question about whether it would be possible specifically to use to extend the \p notation that already exists to handle set notation. So, instead of using square brackets around the expression that we want. You would write like back `/p { name of a property - - name of another property }` for example, and I and I think that Fulfills. Our goal of being an exception in the current Unicode mode being parsable under the same constraints in the non-current non Unicode mode. while not requiring a new flag, and the response that I got was just, we decided to go ahead with a flag. 
We're not talking about this anymore, which if there are problems with my proposal, that's totally fair, but it would be nice to sort of get a concrete explanation of why we can't find any other way without adding a new v flag. -MB: To your point that we don't want to add too many flags: I absolutely agree with this, and I think that for this even applies to ECMAScript features and proposals in general, not just to regular expression flags. As for your question, specifically about, do we really need the flag. There's [an item in our FAQ in the readme of the proposal](https://github.com/tc39/proposal-regexp-set-notation#is-the-new-syntax-backwards-compatible-do-we-need-another-regular-expression-flag) that answers this. The different options we consider 2. And in fact, when this proposal first started, we were trying very hard to avoid the need for a new flag. So the four options we considered were 1) a new flag outside of the regular expression itself 2) a modifier inside of the expression like some other regular expression engines support, 3) a prefix, like `\UnicodeSet{…}` — something that would not be valid under the current `u` flag 4) and then we also considered a prefix like parens, question mark square, brackets — something that is not valid in existing patterns _regardless of flags_. We found that a new flag is the simplest, most user-friendly, and syntactically and semantically cleanest way to indicate the new character class syntax. +MB: To your point that we don't want to add too many flags: I absolutely agree with this, and I think that for this even applies to ECMAScript features and proposals in general, not just to regular expression flags. As for your question, specifically about, do we really need the flag. There's [an item in our FAQ in the readme of the proposal](https://github.com/tc39/proposal-regexp-set-notation#is-the-new-syntax-backwards-compatible-do-we-need-another-regular-expression-flag) that answers this. 
These were the different options we considered. And in fact, when this proposal first started, we were trying very hard to avoid the need for a new flag. So the four options we considered were 1) a new flag outside of the regular expression itself 2) a modifier inside of the expression like some other regular expression engines support, 3) a prefix, like `\UnicodeSet{…}` — something that would not be valid under the current `u` flag 4) and then we also considered a prefix like parens, question mark square, brackets — something that is not valid in existing patterns *regardless of flags*. We found that a new flag is the simplest, most user-friendly, and syntactically and semantically cleanest way to indicate the new character class syntax. IID: Yeah, so the explanation that I see for why not use backslash-p, or something like a prefix like backslash-u that is not valid under the current flag, is essentially that it could be confusing if you forget to add the u when you are using this feature, but I think that's already true for all new syntax, and I don't find that compelling; and more than that, that we would have to enclose it in curly braces to be consistent, instead of square brackets, and that looks weird for character classes. And I think these are valid concerns. My question is: once we realised that we would have to add a new flag to avoid those, did we reconsider whether those are actually as bad as adding a new flag and splitting up the modes even more? Because it seems to my mind that adding a flag is worse than having slightly less square-bracket notation. So, if it didn't have a new flag, we would have to introduce a whole new syntax that looks very alien for doing character classes with set notation and set operators and stuff like that. And because we want something that looks reasonably familiar and largely works like before, we settled a year ago in this meeting on basically going forward with a new flag.
@@ -337,7 +337,7 @@ Presenter: Justin Ridgewell (JRL) JRL: This is super easy because there are essentially no changes. We reached the stage 3 criteria since the last meeting for Array `groupBy` and `groupByMap`. We have spec text. That's been pushed out, reviewers have all approved and editors have all approved - except one change that's currently in flight, but it's an editorial change to the methods. -JRL: The only discussion point which needs to be brought up is, JHD brought up a groupByMap naming issue. Essentially. He's equating flatMap meaning `map` followed by `flat` and if that would be confusing for people who see groupByMap as map followed by groupBy, or groupBy into a map output. I'm not super sold on it. I think it's okay to have groupByMap return a map. But that's the only discussion point we have - and then I can ask for stage 3. +JRL: The only discussion point which needs to be brought up is, JHD brought up a groupByMap naming issue. Essentially. He's equating flatMap meaning `map` followed by `flat` and if that would be confusing for people who see groupByMap as map followed by groupBy, or groupBy into a map output. I'm not super sold on it. I think it's okay to have groupByMap return a map. But that's the only discussion point we have - and then I can ask for stage 3. JHD: Yeah, I just want to make sure we discussed this. So yeah, I mean it's a relatively minor issue. The polyfill that I already made would have to be renamed so this will cause friction for me to rename it anyway, but I still wanted to bring it up. The other thing is - it's kind of unfortunate, right? We have this naming conflict between mapping and a Map and it seems - I don't really have a better suggestion for the name, but it seems like not a great name for the groups, where the groups are Maps instead of Objects. Before stage 3 is the time to discuss it. So I wanted to bring it up before the advancement. 
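For readers following along, a userland sketch of the grouping semantics under discussion. The method name was exactly what was being bikeshedded here (the feature ultimately shipped later as `Object.groupBy` / `Map.groupBy`), so this stand-in function name is illustrative only.

```javascript
// Illustrative sketch of "group by" returning a Map: partition the items
// by the key the callback produces for each element.
function groupToMap(items, callback) {
  const groups = new Map();
  items.forEach((item, index) => {
    const key = callback(item, index);
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(item);
  });
  return groups;
}

const byParity = groupToMap([1, 2, 3, 4, 5], (n) => (n % 2 === 0 ? "even" : "odd"));
console.log(byParity.get("odd"));  // [1, 3, 5]
console.log(byParity.get("even")); // [2, 4]
```

Returning a `Map` rather than a plain object is what lets arbitrary values (not just strings/symbols) serve as group keys, which is the motivation for the second method being debated.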
@@ -357,7 +357,7 @@ JRL: So I think there is a desire to rename `groupByMap`. Personally, just from RPR: Do you want me to enable temperature checking in tcq? You have to designate emojis for what they mean. -JRL: Okay, so we can have ❤️ = groupByIntoMap, 👀 = `groupByToMap`, ❓= groupByAsMap. +JRL: Okay, so we can have ❤️ = groupByIntoMap, 👀 = `groupByToMap`, ❓= groupByAsMap. RPR: Okay, please please vote as you wish. I'll say we've only got two more minutes on the time box. So we're close to timing out at least on resolving that, you know, the particular choice named @@ -385,7 +385,7 @@ YSV: I support. RPR: So, are there any objections? -RBN: So this is Ron, I do have one question on the queue before we consider advancement to stage 3, if that's fine. I'm not opposed to advancement. I'm just curious if you considered having `.group()` return some type of intermediate result. That might be iterable but then has its `toObject()` and `toMap()`. +RBN: So this is Ron, I do have one question on the queue before we consider advancement to stage 3, if that's fine. I'm not opposed to advancement. I'm just curious if you considered having `.group()` return some type of intermediate result. That might be iterable but then has its `toObject()` and `toMap()`. JRL: That would be a large change from the ecosystem. I'm not sure I would want to pursue that. @@ -572,7 +572,7 @@ FYT: So history, we proposed this in stage 1 in January, and the stage 2 in the FYT: So, when we look at what need to be done for stage 4 it says the purpose is the feature is ready for inclusion in the formal ecmascript standard. And in the very important aspect qualities have to be final and quite (?) is that we need to have test262. And as I mentioned repeating town with a feature flag there and then also have two compatible implementation which pass conformance tests. So we actually have three. Also important things that we should have a PR ready for ecma402. 
USA gave a clear sign off there, but I haven't heard from RGN. He said he was looking at it, maybe there's no feedback, but it's, we believe this is already being down. And I think I've talked to that in the TG to meeting in December 2nd, and I don't see we received no opposition over that. So I'm here to ask for stage 4; but before that, any questions or feedback - BT: Looks like the queue is empty. +BT: Looks like the queue is empty. FYT: Okay, if so, then I'd like to request the committee to approval for advancement to stage 4, so we can merge that into the 2022 version of ecma402. @@ -632,7 +632,7 @@ FYT: So it currently allows tolist six different kind of information, what calen FYT: so, basically provide one method we think of when I first proposed they're much more complicated and later on figured out we can just using one method to reduce the API service is a local Intl supported values of. with key to currently have six different kind of keys and they return an array. -FYT: And history. So that is advanced to stage 1 in June 2020, and advanced to stage 2 in September last year. We gave update last year in November, and in July we advanced to stage 3, and you can see the slides and notes Here. So what kind of thing are we doing currently in stage three, and there's some changes and there's an open issue of giving discuss. So the first thing is coming from one of the greatest (?) person, is Andre from Mozilla, and we just honor him, and he's really good. One thing is asking us to make sure the calendar and the collation of a number of your system. The return values are automatically and originally when I write the text, it kind of assumes there will be always returned up. I think he's right. We should make it explicit to add it up. It's just adding the word canonical in three different places and not a very big change to spec text, but it's actually a normative, right? Because you didn't require that before. 
So it's a normative PR but the change in terms of the amount of size of spec text is very small. We still have a couple of open issues. We tried to figure it out, not all of them are really normative right. What issues say, well in the ecma 402 should we, you know, partially it's actually part of that. I could relate to this particular spec but, you know, predated this proposal. There are mentioned currency and calendars. I think one of the issues of whether we should we rewrite it a way that probably either through an editorial PR that centralizes all section, and also #35, is that when we try to make sure that whatever return voluntarily nomination is consistent with the data supported in the Intl object. That is the, how to say that that is the end. That is the goal. but how to express it in a way that in the standard that to ensure that does happen. I think that still have some discussion how should we phrase it in a way to mandate that although that was the intention, right? And seems like obvious thing, but how can we ensure that really in a way that we can check that the other issue? I think it also is mentioned by dries. Is that true little tricky thing. What does Really mean about whether a currency is supported because the spec the previous you can say, "well, if you request for currency code regardless, you know that currency code you have to format it". If you don't know that thing. You just using that code. So, in that sense it’s always supported. So regardless. The three characters, right? So, but our current way is that we've tried to return if we have not just the code, but they are some maybe some name associated with that when a using a long form or other form for what (?). So, there's issue. How can we phrase it in a way? That it really means what our original intent. So it may need some bbetter wording on that part. So there are still open issues or not controversial in your way, but they are. We have a way to resolve it. 
There is also the editorial PR #42. I forget what it is. You can look take a look at, it was an editorial PR. +FYT: And history. So that is advanced to stage 1 in June 2020, and advanced to stage 2 in September last year. We gave update last year in November, and in July we advanced to stage 3, and you can see the slides and notes Here. So what kind of thing are we doing currently in stage three, and there's some changes and there's an open issue of giving discuss. So the first thing is coming from one of the greatest (?) person, is Andre from Mozilla, and we just honor him, and he's really good. One thing is asking us to make sure the calendar and the collation of a number of your system. The return values are automatically and originally when I write the text, it kind of assumes there will be always returned up. I think he's right. We should make it explicit to add it up. It's just adding the word canonical in three different places and not a very big change to spec text, but it's actually a normative, right? Because you didn't require that before. So it's a normative PR but the change in terms of the amount of size of spec text is very small. We still have a couple of open issues. We tried to figure it out, not all of them are really normative right. What issues say, well in the ecma 402 should we, you know, partially it's actually part of that. I could relate to this particular spec but, you know, predated this proposal. There are mentioned currency and calendars. I think one of the issues of whether we should we rewrite it a way that probably either through an editorial PR that centralizes all section, and also #35, is that when we try to make sure that whatever return voluntarily nomination is consistent with the data supported in the Intl object. That is the, how to say that that is the end. That is the goal. but how to express it in a way that in the standard that to ensure that does happen. 
I think that still have some discussion how should we phrase it in a way to mandate that although that was the intention, right? And seems like obvious thing, but how can we ensure that really in a way that we can check that the other issue? I think it also is mentioned by dries. Is that true little tricky thing. What does Really mean about whether a currency is supported because the spec the previous you can say, "well, if you request for currency code regardless, you know that currency code you have to format it". If you don't know that thing. You just using that code. So, in that sense it’s always supported. So regardless. The three characters, right? So, but our current way is that we've tried to return if we have not just the code, but they are some maybe some name associated with that when a using a long form or other form for what (?). So, there's issue. How can we phrase it in a way? That it really means what our original intent. So it may need some bbetter wording on that part. So there are still open issues or not controversial in your way, but they are. We have a way to resolve it. There is also the editorial PR #42. I forget what it is. You can look take a look at, it was an editorial PR. FYT: so activity, V8 Stager in 95, which means behind a flag. You can see the flag there. I just got approval for shipping in 1999, which will be in public General available in March on the upon these two. I repeat here. Okay. Anyway, Mozilla is in the branch and 93. I'm not quite sure they're a bit or not. And again, I'm Safari technical preview 132 and we have test 262 task? It will be better if we have more testing, there may be is added and really hope people can help all the fields to. So, that is the update and any question and answer about this? @@ -684,9 +684,9 @@ FYT: So the changes adding order and also adding canonical and mentioning it cou BT: Queue is empty. -FYT: So can I formally ask for approval for this? 
+FYT: So can I formally ask for approval for this? -BT: Are there any concerns with merging 60 and 61? Speak now. Enter the queue. [silence] All right. I don't think there's any concern. I think you can go ahead and do that. +BT: Are there any concerns with merging 60 and 61? Speak now. Enter the queue. [silence] All right. I don't think there's any concern. I think you can go ahead and do that. SFC: Hi Frank. I just wanted to make the committee aware of the issue regarding text direction that we brought up where the API only returns a very simplistic model of text direction. And I saw that we had closed that issue after some discussion. I support the conclusion of that issue, but I just wanted to make sure the committee is aware of this limitation in the proposal where text direction uses a simplistic model, consistent with what w3c is using. But I hope to see future extensions of this proposal that gives a more expressive model of text direction. @@ -836,7 +836,7 @@ Presenter: Justin Ridgewell (JRL) - [proposal](https://github.com/tc39/proposal-destructuring-private) - [slides](https://docs.google.com/presentation/d/1GMAvGx5i8TikGqJZcZMnHeoPclD_ubCeNSCx5M1DTaI/edit?usp=sharing) - JRL: So when I initially presented this last meeting, I was asking for this to be a "needs consensus" PR. Thankfully, we decided not to do that because simple things can be super complicated in our specification. +JRL: So when I initially presented this last meeting, I was asking for this to be a "needs consensus" PR. Thankfully, we decided not to do that because simple things can be super complicated in our specification. JRL: First off, we have to talk about destructuring. Destructuring actually exists as two separate grammars in the specification. There's an assignment pattern, which is the left hand side of an assignment expression. So there's no concept of var binding here. It's just like `({ … } = obj)`. There's also a binding pattern: it is when you have a variable declaration. 
So Binding something new in the lexical scope and we're not assigning to an already created value. These are two separate grammars and they have two separate runtime and early error semantics. And, as a weird little side effect of our grammar, both of these need to conform to the object literal parsing grammar, which is also used for object expressions. Object expression, assignment pattern, and binding pattern all parse as object literal initially. And then they are specialized into a particular grammar depending on the context. And it's a little strange. ObjectLiteral essentially has to be able to handle everything. It's a super loose grammar that accepts all forms of objects. And then once we figure out what the context we're parsing it into we add in special early error rules to forbid certain patterns. I've highlighted here demonstrate that certain things that are valid as destructures are not valid as object expressions and certain things that are valid as object expressions are not valid as destructuring. Depending on whether you're on the right hand or left hand side of the equal sign you have different early errors that need to apply. So first off, we have to expand the object literal syntax so that we can parse all forms of objects. Whether they're in a lhs, or if they're in an object expression. We also have to add in the binding pattern because I forgot about that and I only considered the assignment pattern, and we also have to extend the PropName syntax operation. All of these things have been added to the proposal specification text. I do have an open question that I don't know if I need to pass in the private environment into binding initialization, which is a sub-operation of the BindingPattern runtime semantics, but this I think can be hashed out in issues. Doesn't have to be discussed in here. It's just these are all the things that had to be added to what was initially a super simple proposal. At least, I thought it was simple. 
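The distinction JRL draws between the two grammars can be seen in today's syntax, before any private names are involved. (The private-name extension itself, e.g. `const { #x: x } = this;`, was not implemented anywhere at the time of the meeting, so it is only mentioned in a comment.)

```javascript
const obj = { a: 1, b: 2 };

// BindingPattern: part of a declaration, introduces new bindings in the
// lexical scope.
const { a } = obj;

// AssignmentPattern: the left-hand side of an assignment expression,
// assigning to an existing target. Note the parentheses, without which
// the leading `{` would parse as a block statement.
let b;
({ b } = obj);

console.log(a, b); // 1 2

// Both forms initially parse via the ObjectLiteral grammar and are then
// specialized by context, which is why the proposal has to extend all
// three grammars for private names.
```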
We currently have a slate of six reviewers, and because I incorrectly thought this was going to be simple, I told everyone that this would be a simple change and it would be easy for beginners to do. Because of all the new changes, I don't think it's a great first PR for someone to go through. So if we want to change reviewers for people who aren't comfortable yet. I don't feel offended or anything. So, I think we need to ask again, for who wants to review stage 2 so that I can bring this up for 2 stage advancement next meeting. @@ -849,7 +849,7 @@ SHG: Likewise even though I'm new, I think with a full group of reviewers. It's SRV: likewise (writing in the meeting chat) -JRL: Perfect. I have a sneaking suspicion Waldemar will continue to want to do this. Robin actually already reviewed it, but may need to re review it with new changes. +JRL: Perfect. I have a sneaking suspicion Waldemar will continue to want to do this. Robin actually already reviewed it, but may need to re review it with new changes. JRL: Okay, so I think everyone is good then we can keep the same slate reviewers. and so we can move into any open questions that might be in tcq. diff --git a/meetings/2021-12/dec-15.md b/meetings/2021-12/dec-15.md index 6b7aedb0..134e43e0 100644 --- a/meetings/2021-12/dec-15.md +++ b/meetings/2021-12/dec-15.md @@ -67,7 +67,7 @@ MM: The decision tree is talking about Realms. I'm pointing out that the conclus BT: We need additional Note Takers. Hold on. We need. we can't, we need, we need some note-takers before we can continue. Ideally two people would be nice. Actually. It is fun to take notes. Taking notes helps helps a lot. I appreciate it. I’ve taken notes before. It's not too difficult. Thank you, RPR. Let us know what it looks like, if you think that would be nice. - Have be really nice to have someone. Help with notes. RPR will likely have things he needs to help with chairing. Luca Casonato. Just broke that. He can help. Okay, Jesus chopped. I missed that. 
Thank you for calling that out. Okay. All right. Sounds good. Then let's thanks for the point of order. Kevin. And note-takers. Let us know if you're falling behind. +It would be really nice to have someone to help with notes. RPR will likely have things he needs to help with chairing. Luca Casonato just wrote that he can help. Okay, I missed that; thank you for calling that out. Okay. All right. Sounds good. Thanks for the point of order, Kevin. And note-takers, let us know if you're falling behind. WH: The invariant in the decision tree is only about restricting objects from other Realms. @@ -77,7 +77,7 @@ NRO: So, yes, the reason I mentioned just realms is because currently we have th WH: I'd like to see this proposal disentangled from Realms. Thank you. -RBU: Yeah, I'll try to keep this brief because I think we should keep the key moving, but I think to back up a little bit more. +RBU: Yeah, I'll try to keep this brief because I think we should keep the queue moving, but I want to back up a little bit more.
Generally we've been talking about this concept of membranes as an invariant of this and I'd like to see some because I talk about this with CP a little bit in the past. I'd like to see some sort of thinking into whether membranes can just be updated because we're sort of enforcing constraint around records and tuples how they can interact with Realms because of this fact that exists that membranes already exists in production that use rounds in this way, right? And my general thought thinking behind membranes is that if you have a membrane that can run new code, but you don't have the ability to update that membranes and then that membrane is forever and ultimately insecure. So is it not true that we could simply update these membranes to account for this? And if not, then are the membranes worth considering in the first place. CP: Yeah, so in our case, have membranes in arcade(?) is not a problem. We already update, those of we need to update on the other ones were not suffering from these. Because when you do virtualization, you are most likely you're already using a single place holder object between the term multiple rounds. They are proxies. So for us, it's not really a problem, but I still am sympathetic with the idea of having the invariant in the language known as solid because of membranes of just because I haven't seen any other interns in which A Primitive give you access to an object. Maybe Kevin can provide more details about why. si I didn't quite get what he was saying before, but we can demonstrate, This is already the case them for me is fun. @@ -87,9 +87,9 @@ MAH: Yeah, I'd like to understand why you said that a membrane that is not updat RBU: This will hardly be the last update to the language that breaks an invariant like this. -MAH: Why? I mean, we've been, we've been trying to not break the web. We've been trying to not break deployed code. I am not aware of changes that great that have broken things like this, this, that deeply. +MAH: Why? 
I mean, we've been, we've been trying to not break the web. We've been trying to not break deployed code. I am not aware of changes that great that have broken things like this, this, that deeply. -RBU: I don't think that there's consensus That this is a enough break. I think that, I think that +RBU: I don't think that there's consensus that this is a deep enough break. I think that, I think that MM: I'm sorry. There’s not consensus that it's not a deep enough break. The proposal has to achieve consensus to go forward. @@ -109,7 +109,7 @@ MM: So, absolutely, the constraints that we're talking about all proposals need JHX: Okay. -KG: Yeah, someone mentioned about primitives giving access to objects. The way that Primitives currently give access to objects is that they inherit from objects. String.prototype is an object. So like if you type a primitive and then you type “dot blink” or whatever, now you have access to an object. Now it is an object in the same realm, so if the concern is about Realms, then I understand that this doesn't give you access to a cross-realm object, but from the conversation between MM and WH earlier, I had understood that Realms are not actually relevant. And it definitely does give you access to objects. +KG: Yeah, someone mentioned primitives giving access to objects. The way that primitives currently give access to objects is that they inherit from objects. String.prototype is an object. So if you type a primitive and then you type `.blink` or whatever, now you have access to an object. Now, it is an object in the same realm, so if the concern is about Realms, then I understand that this doesn't give you access to a cross-realm object, but from the conversation between MM and WH earlier, I had understood that Realms are not actually relevant. And it definitely does give you access to objects. MM: So for this one, we need to break it down.
You already acknowledged that between Realms this does not create any contagion. Within a realm, a membrane is only a useful isolation boundary if the implicitly shared objects are implicitly frozen, as they are in Hardened JavaScript, in which case the only things it's giving you access to are the already pre-frozen things. Likewise in the TC53 embedded scenario, where there's only one realm and never a realm boundary, all of the primordial objects are frozen; with a membrane in that scenario, again, both sides can access the same String.prototype, but it doesn't matter.

NRO: And so, I think one of the reasons that the symbols-as-WeakMap-keys proposal…

SYG: I think it subsumes a bunch of other use cases, though, albeit somewhat indirectly, like complex keys in weak maps and in maps. I think, you know, there have been proposals to do that directly, but if you have symbols, you could just do that by indirection.

MAH: So, my answer to that is, I believe ObjectPlaceholder solves all the problems that symbols as WeakMap keys solves, without having to decide which type of symbol we want to allow as WeakMap keys; you can build everything the same way. So all the use cases that would be solved with symbols as WeakMap keys, including, I believe, usage through ShadowRealm, are solved exactly the same way by ObjectPlaceholder. I am not aware of any use case that wouldn't be covered the same way.
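The indirection SYG describes can be sketched concretely. This is a minimal illustrative example (the names are mine, not from the proposal); it relies on unregistered symbols being usable as WeakMap keys, which was still a proposal at the time of this discussion but has since shipped in ES2023:

```javascript
// A primitive symbol acts as an indirection token: holders of the side
// table can reach the associated object, but the symbol itself carries
// no object reference.
const sideTable = new WeakMap();

// Hand out a plain, unregistered symbol. (Registered Symbol.for(...)
// symbols are excluded as WeakMap keys, because they can be re-obtained
// from the registry at any time and so could never be collected.)
const token = Symbol("attachment point");
sideTable.set(token, { secret: "only reachable via the table" });

// The token can travel anywhere a primitive can (for example inside a
// proposed Record or Tuple); only code holding `sideTable` can
// dereference it.
console.log(sideTable.get(token).secret); // "only reachable via the table"
console.log(sideTable.has(Symbol("elsewhere"))); // false
```

This is also the sense in which symbols subsume "complex keys" by indirection: a composite key can be interned to a symbol while the real data lives in the side table.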
RBU: Except for the fact that this invariant (the record, excuse me, the realm invariant) causes problems. That's the difference.

JHD: Yeah, I was just going to say that to me, the least dangerous approach here…

MAH: I would just like to point out that you did say: if the "primitives don't give access to new objects" invariant holds, which is the only red part of the tree for the object placeholder.

JHD: Right, but then "the only built-in function that by default doesn't work across different Realms" is something that I at least hold as something that should be red, even though not everyone necessarily agrees. That’s all.

YSV: I'd really like to get to the topics towards the bottom, but I just wanted to say, speaking from my experience as a DevTools engineer, devtools can do a lot. For example, if we have a well-formed pattern for how weak maps are being accessed from records and tuples, or it's a well-known relationship, DevTools would be able to do a lookup of that sort. We have specialized code in Firefox devtools to do lookups on React; that's stuff that we've done. So I'm not really compelled by "it's more difficult to debug".

MAH: No, because you cannot pass the weak map through [many interruptions] So le…

CP: No, I disagree. I mean, when you get a record, and that record happens to be one of these object placeholders, you still don't know what it is. So, to answer YSV’s question: yes, with the symbols you'll be able to achieve the same, in my opinion, whether that is a membrane across different realms, or a membrane in the same realm, or something like that.
They'll all work the same way. You know, the symbol, you will be able to use it to identify in a WeakMap which object corresponds to the object on the other side, whether that's a proxy or not. So I believe the symbol will solve this problem. I also want to add that maybe what I said before was not clear: the placeholders are not really a problem for us, and the WeakMap will work the same for us. So I don't have any objection to just focusing on the symbol for now and then continuing to work on it if we were to add any new feature to facilitate something for them.

MM: I want to be clear: we're not standing on the use case that MAH raised, we're answering Yulia's question about whether there are other use cases in the absence of object placeholders. We may very well be able to address the use case that MAH raised.

BT: So we're at our time box.

YSV: But to be completely honest, we haven't looked in depth at the decorators p…

KHG: Yeah, absolutely. I mean, the current plan was to try to go for stage 3 in January, but if it sounds like that won't be enough time, we could move to the next plenary after that.

YSV: If you can, then that would be great. That would give us a chance to actually fully review everything again.

KHG: That sounds good. We’ll do that.

WH: It’s too general. It's much easier to write algorithms which we can prove…

SHO: So, you're saying, so before you said that there were cases that BigDecimal made impossible; so it's not so much use cases as the use case of what's provable, not so much broader userland actions. Is that correct?

WH: It's much harder to reason about and write correct code with BigDecimal than it is with Decimal128.

SHO: And so you feel that trade-off is correct, is more important than being able to pick up use cases from representing, I believe it's Oracle databases, although I would have to recheck my notes, that have numbers still larger than that 128-bit representation.

SHO: And do you think that, outside of BigDecimal, in terms of Decimal128, you're…

SYG: This is more of an unknown to me, whether implementation of Decimal128 is in fact straightforward. If it doesn't seem too complex to maintain, then we would withdraw all that; but as it stands I'm just not familiar enough with it. If, for the four basic arithmetic operations, there are competing algorithms that we have to choose among, and it takes a domain expert to maintain the code in the future and to understand the performance trade-offs, that is more work than we can justify staffing an effort for.

SHO: Okay, that is helpful to know and to look into in the future. I was going to say, from the user side,
I feel comfortable just saying that the use cases for decimal are well motivated. Whether that is motivation enough to get over the bump of implementing and maintaining it is obviously a bigger argument to build, but I think there is a clear pool of usage that is larger than the pool of usage for BigInt.

SYG: I see; it would certainly be good to see some more arguments there. You know, it's not like we are against the use cases, but for them to clear the bar of the ergonomic pains of using libraries for this, I don't feel like we've met that yet.

FYT: So, here is the high level. This is not fixed, but probably likely to what…

FYT: So here, what are the cross-cutting concerns? Because that's what we're required to study before going to stage 1. Here I show you some reference material from Safari's HTML5 canvas guide. It's Apple's documentation, but I think it applies to all browsers' canvases. It says that to get the bounding box whenever you draw text in canvas, you may want to break text into multiple lines. So Apple is telling developers that if they render text using canvas, they may want to break text into multiple lines, but how? The "how" part is that you basically have to find the break opportunities, then call measureText and figure out the bounding box, and then you can do it. So Apple, and really all the browser vendors, support measureText for that part. The missing part, finding the break opportunities, would be fulfilled by the "line" granularity in Segmenter. One piece of the recipe is already here; the other piece is what we propose, the line granularity, to make sure the recipe can be fulfilled.

FYT: So let's look at prior art.
There is a lot of prior art, way back, in different programming languages besides the V8 break iterator, which I think is from about 2010. Even before that, OS APIs had two different generations of similar APIs for C, Objective-C, and so on. Java has had BreakIterator.getLineInstance since JDK 1.2 in December 1998; that's 23 years ago. ICU4C and ICU4J had it around 2001, which is again many years ago, and recently, I think, ZB and many others have been working on that area in ICU4X. One thing I want to point out: when I first worked on line breaking in Gecko back in 1998, there was not much reference material on how to wrap a line. Originally Mosaic, at the University of Illinois at Urbana-Champaign around 1994, and Netscape 1.2 around 1995, added code just to deal with Japanese, but there was not much standard work you could reference. Then things changed. For Japanese, JIS X 4051 (the previous version, around 1998) was the first standard I could reference when I worked on that in Gecko around 1999 or 2000; that was the only standard to reference then. Books around that time, such as Ken Lunde's book, also described how to do line wrapping, and Microsoft later extended that (?): which characters cannot be put at the end of a line and which cannot be put at the beginning, extending the set of such characters for simplified Chinese, traditional Chinese, and Korean, in another book which I've got.

FYT: But things changed a lot around 2000. The Unicode Consortium looked at all the references and put out the Unicode line breaking standard, UAX #14, around the year 2000, again about 20 years ago. So there is a standard international definition of the algorithm, based on Unicode properties, but also allowing locale tailoring, which means a locale may tailor the behavior, right? The ICU database has become a very important piece of that, because whenever you add a new character, for example an emoji, you have to decide where the line can break. With an emoji sequence you cannot break in between, right? You don't want an engineer emoji at the end of one line and the color modifier at the beginning of the second line; you want to keep them together so there will be a dark-colored engineer or something like that. So line breaking is important. CSS also covers this: css3-text defines the keywords strict, normal, and loose for the line-break style, and UTS #35 also started to define line-break styles, mostly copying whatever is in css3-text. So things have changed a lot in the 22 years since I worked on line breaking in Gecko, because the standard algorithms got published and worked on by many companies in the industry. It is no longer "hey, let's look at somebody's code and see how he wrote that in Mosaic"; we can reference the standard, and we have the ICU library in open source that we can all use. So we have a standardized implementation.

FYT: So here are some example requests; that's not all of them. One is several different web developers asking for this. Besides Google internally, a JS PDF contributor also mentioned it (?): without this it is very difficult, and they have to write a lot of code. One of my colleagues on Flutter Web also mentioned that they think this will be very helpful: without it they have to compile part of ICU to JS with wasm in order to do this, right? So this will help them improve their code size, which means it improves their performance.
And there are some other requests, not Google code, that you can also see on the web: there are people asking how to do this. Sometimes people get a very wrong answer based on the assumption of a Western-style line break, and some of the solutions didn't work, but the demand is there. Clearly, people are asking for this.

FYT: So the other question is, what will happen if we don't work on this proposal? Because the status quo is always the alternative, right? Let's say we just say, eh, we're not working on that. What will happen next? Well, there are a couple of possibilities. One is that canvas users, for example JS PDF or something else, just use string splitting to do the wrapping. What will happen? The consequence is that that approach will make Chinese, Japanese, Thai, Lao, Khmer, Burmese, and several other languages line-wrap incorrectly in those applications. These are very easy for people to get wrong without this.
Well, it is very easy to produce wrong results for those languages. It will work for Chinese, English, and French, but it will create a big disadvantage for many other languages, because they are very difficult to get right (?). The other possibility, which I think we already saw some people try, is that they take the word granularity we already have in Segmenter and misuse it to do the line breaking. Similar to the first approach, this will cause low line-break quality for CJK, Thai, Lao, Khmer, and Burmese languages, right? The demand is there; they're going to break lines anyway. The questions are how good the results they produce will be, and how hard it is for them to do it (?). The other possibility is that they load a big library to do correct line breaking, for example CanvasKit, which takes the approach of wasm-compiling ICU into JS to implement line breaking. Well, they can do it, but what happens? Line breaking for several of these languages requires a dictionary, which means they are going to load a pretty big chunk of this wasm compile into the client, right? That will affect latency and page load. I think one of my colleagues estimated it is at least about 1 megabyte to support some of these languages. I'm not sure about all the languages, because for Chinese and Japanese I don't think they really need the dictionary, but Lao and Burmese surely need a dictionary, and that is about 1 MB. Or the other way they can do it is to branch: hey, you know what, we have this break iterator already shipped in Chrome for more than 10 years and it hasn't been retired, so on Chrome we just write a wrapper over it, and on non-Chrome browsers, something like Safari or Mozilla, we just load a big giant JS library. Right? First of all, that's harder for developers to manage, because of, you know, incompatibility, and I think it's really not good for non-Chrome browsers. In particular, we at Google have a lot of applications and we really care: it's not our vision that they only run really fast on the Chrome browser. We also want to make sure they run very fast on non-Chrome browsers, and we see this kind of approach as a really bad idea. We want to see them all run on different browsers with very optimized speed and performance and lower memory use, and that will be a benefit for our application products, right, for all the users on all the different browser platforms. But if we don't act and keep the status quo, all the things I mentioned here, and maybe something else, may happen, right? We really don't want to see those things happen, so we think we need to take action.

FYT: Okay, here is the slide. I'm going to talk about this second, batch-mode kind of idea that was originally in there; we decided to remove it. We figured out we don't really need it, but I didn't want to just delete it, so I crossed it out here, because it came up last week. I don't want to change the proposal; I just want to make sure we are not talking about this part. So now let's go through the stage 1 checklist, right? We have a champion, who is me, so that part is done. In this presentation I'm pretty sure I illustrated all the points required about algorithms and references. We also have a public repository, which is listed here. So I believe we fulfill the entrance criteria for stage 1, and I'm asking for stage 1. I want to remind you what that means: according to the process, it means the committee expects to devote time to examining the problem space, solutions, and cross-cutting concerns. That is what I'm asking for in advancing to stage 1. Okay, questions and answers. All right.

MM: So to answer YSV: yes, it is an aspect, but I verified that when I was looking…

BT: Time box. Now there's about three minutes left.

SYG: So that would be okay.
So I'll just defer on the question of whether it's fixable in practice and whether we care to staff trying to figure that out. There's another point that I think Justin is raising, so maybe I'll just let Justin speak.

JRL: My memory's a little hazy and I'm not an HTML implementer, so I don't know the exact reasons. My memory is that we changed thenables' behavior explicitly because HTML was having difficulty integrating with what was specified in TC39. There's something about a dying realm where they have to have the functions in the thenable's realm in order for something to work correctly.

MM: So neither of us, it sounds like, actually understands the issue. I agree that such an issue would be very relevant data, and I would encourage somebody to point us in the right direction to understand what the HTML issue was.

YSV: So, I just want to, how to put it: I am happy to change what we're doing in terms of the spec so that other implementations don't need to be aware of this behavior in any way. I think that's a perfectly reasonable direction to take this, but my main worry is that we won't be able to reconcile it with what we need as embedders of the engine. So it could be possible; this is something we can certainly investigate.

MM: But can you explain the source of the concern about why it might not be possible?

MM: Okay, I would like to understand this. I'm not going to ask you to explain i…

YSV: I think there may be another way to fix it. I'm not sure that that other way won't be something that requires the existence of an incumbent realm. I do believe we will need a way to fall back onto this functionality, but the way that we specify that can be different from what we currently have.

MM: I look forward to understanding. Please point me at all the relevant material.

YSV: Okay, I'll add our bug to that original issue.

JHD: Okay. All right, so it sounds like, YSV, you're going to comment on the spec issue, and I will react to that and update the PR titles and such as necessary.

YSV: Yeah, probably I also should have been the one to bring this, since it was an issue on Firefox that we ran into. So I'll work with you to see how we can resolve this. Okay?

JHD: Awesome. Thank you.

diff --git a/meetings/2023-01/jan-30.md b/meetings/2023-01/jan-30.md

RPR: I think we can move on. The first question is from JHD.
JHD: Yeah, I mean, in all the examples in your slides, if you opt into a secure mode, you have to know to do that; and if you know to do that, then you also salt your keys, or use `Object.create(null)` or `{ __proto__: null }`, or use a `Map` or something. Unless you turn on the mode by default, I don’t think it would really achieve any of the goals you want. Node, for example, already has a flag that lets you remove the `__proto__` accessor, and you can run it with that, but lots of arbitrary modules in the ecosystem rely on the functionality. I’m incredibly confident that trying to do this by default would break the web in sufficient quantities that it wouldn’t be viable, and I don’t see a lot of value in it if it’s required to be opt-in. That said, obviously the exploration area is great. Even though the number of prototype pollution attacks that turn into real exploits is nonzero, I think it’s small, but still worth addressing. I feel like the biggest benefit would be removing a bunch of false-positive CVEs from the ecosystem that cost a lot of developers’ time. But either way, I think it’s worth exploring; that’s a stage 1 concern; I just wanted to share my skepticism.

SYG: Noted. I want to lean on SDZ to provide a more detailed answer here, but I want to respond first to this Node flag thing. Our hunch is, we’re not saying we’re going to remove `__proto__` entirely. The idea is a two-part approach where we keep property access to `__proto__`, to `.prototype`, and to `constructor` working. The way we propose that is with automatic rewriting, so we don’t have to manually migrate the entire code base. The other thing, about using null-prototype objects: I think that speaks to the at-scale deployment issue. If you had the luxury of time to basically rewrite your whole world, then yes, you could just never use prototype inheritance at all. That seems a challenge in itself.
But at the very least, if you want to use third-party libraries, you can’t really do that. As an application you could opt into the mode, and with the automatic rewriting you get the benefits for free. We share your concern: without the automatic rewriting step, a pure opt-in will be difficult to get deployed and working. SDZ, do you have anything to add here?

SDZ: Yeah. I want to speak up about the idea of using `Object.create(null)` or the literal `__proto__: null` as a mitigation for this. I think it’s important to understand why we think that doesn’t work. We did a few experiments with this and found a few problems.
So the first one is: you might create an object (inaudible) that doesn’t have any prototype, and you think it is secure, until some function adds to the object a value that might be an array or a number or a string or maybe another object, and now that value has a prototype, right? What you’re doing is essentially moving the goalpost one level deeper. And you really don’t have a way of creating, let’s say, a string with no prototype, or a number with no prototype, or an array with no prototype, all of which could be polluted if they went into a commonly used function. So this is only protecting one object, apart from the issues you would have in deploying it, which is "find everywhere I have an object literal and replace it with this". Granted, that is something you can do, and as the previous speaker said, if you’re willing to do that, you're willing to do (inaudible). But I think those would be the strongest reasons why that solution is not good enough.
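SDZ's "one level deeper" point can be sketched concretely. The helper below is hypothetical, not from the presentation; it is a stand-in for the recursive-merge pattern common in utility libraries, which is the usual vector for prototype pollution:

```javascript
// Hypothetical recursive merge, the kind of helper SDZ is describing.
function naiveMerge(target, src) {
  for (const key of Object.keys(src)) {
    const value = src[key];
    if (typeof value === "object" && value !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {}; // a *normal* object: Object.prototype sits behind it
      }
      naiveMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// A null-prototype root does defeat a *top-level* "__proto__" key:
// the write just creates a harmless own property.
naiveMerge(Object.create(null), JSON.parse('{"__proto__": {"direct": true}}'));
console.log({}.direct); // undefined -- nothing was polluted

// But one level down, the object created during the merge is a normal
// object, so reading its "__proto__" yields Object.prototype and the
// recursion pollutes every object in the realm:
naiveMerge(Object.create(null), JSON.parse('{"a": {"__proto__": {"polluted": true}}}'));
console.log({}.polluted); // true
```

Note that `JSON.parse` creates an *own* property named `"__proto__"`, which is why `Object.keys` sees it. The usual mitigations are to skip the keys `__proto__`, `constructor`, and `prototype` during the merge, or to use a `Map`, as JHD alluded to earlier.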