Office of Government Information Services (OGIS)

September 7 - Minutes (Certified)

The Freedom of Information Act (FOIA) Advisory Committee convened virtually at 10 a.m. ET on September 7, 2023. 

In accordance with the provisions of the Federal Advisory Committee Act (FACA), 5 U.S.C. §§ 1001-1014, the meeting was open to the public from 10 a.m. to 12:26 p.m. and livestreamed on NARA’s YouTube Channel.

Meeting materials are available on the Committee’s website at https://www.archives.gov/ogis/foia-advisory-committee/2022-2024-term/meetings/foiaac-9-7-2023.

Committee members present at the virtual meeting:

  • Alina M. Semo, Director, Office of Government Information Services (OGIS), National Archives and Records Administration (NARA) (Committee Chairperson)
  • Jason R. Baron, University of Maryland
  • Paul Chalmers,  Pension Benefit Guaranty Corporation
  • Carmen A. Collins, U.S. Department of Defense
  • David Cuillier, University of Florida 
  • Allyson Deitrick, U.S. Department of Commerce 
  • Gorka Garcia-Malene, U.S. Department of Health and Human Services 
  • Michael Heise, U.S. Equal Employment Opportunity Commission
  • Alexander Howard, Digital Democracy Project
  • Stefanie Jewett, U.S. Department of Health and Human Services Office of Inspector General
  • Gbemende Johnson, University of Georgia
  • Adam Marshall, Reporters Committee for Freedom of the Press 
  • Luke Nichter, Chapman University
  • Catrina Pavlik-Keenan, U.S. Department of Homeland Security
  • Thomas Susman, American Bar Association 
  • Bobak Talebian, U.S. Department of Justice, Office of Information Policy
  • Eira Tansey, Memory Rising
  • Benjamin Tingo, OPEXUS 
  • Patricia Weth, U.S. Environmental Protection Agency

Others present or participating in the virtual meeting:

  • Debra Steidel Wall, Deputy Archivist of the United States, NARA
  • Kirsten B. Mitchell, Committee’s Designated Federal Officer, NARA
  • Daniel Levenson, Committee’s Alternate Designated Federal Officer, NARA
  • Eric Stein, Deputy Assistant Secretary for the Office of Global Information Services, Department of State
  • Giorleny Altamirano Rayo, Chief Data Scientist, Department of State
  • David Kirby, IT Program Manager, Bureau of Information Resource Management,  Department of State
  • Robert Hammond, public commenter
  • Michelle Ridley, Webex event producer 

Welcome from Deputy Archivist of the United States

Deputy Archivist of the United States Debra Steidel Wall welcomed everyone to the sixth meeting of the 2022-24 term of the FOIA Advisory Committee. The Deputy Archivist spoke about two major themes: transparency in government and artificial intelligence (AI). She cited a poll that found only 20% of respondents believe the federal government is transparent. She noted that "Making access happen" is not just a tagline at the National Archives; it is the first of four strategic goals in the agency's 2022-2026 strategic plan. She also noted that AI and machine learning will help make billions of records available under FOIA. Finally, she welcomed speakers from the Department of State, noting that the challenges with FOIA and declassification do not rest with a single agency.

Welcome and Updates from the Chairperson

Chairperson Alina Semo welcomed members and those in attendance.

Ms. Semo spoke about the perceived lack of transparency, and noted that the Committee's deliberations and recommendations center on the importance of transparency. She reminded attendees that the Committee operates under the Federal Advisory Committee Act, which requires open access to meetings and operations.

Ms. Semo confirmed with Ms. Mitchell that there was a quorum for the meeting, and that meeting materials were posted on the OGIS website.

Ms. Semo reminded those in attendance that no substantive comments should be made in the Webex chat function and asked Committee members to identify themselves by name and affiliation when speaking. Ms. Semo noted that comments were welcome via OGIS's public comments form, that comments complying with the posting policy would be posted, and that oral public comments at the end of the meeting would be limited to three minutes per individual.

Ms. Semo introduced the speakers from the Department of State: Eric Stein, David Kirby, and Giorleny (“Gio”) Altamirano Rayo. Mr. Stein is the Deputy Assistant Secretary for the Office of Global Information Services, responsible for records management, FOIA, the Privacy Act, classification, declassification, library, and other records and information access programs.  Mr. Kirby is the IT program manager in State's Bureau of Information Resource Management (IRM) where he is responsible for the e-records archive and for overseeing the development and maintenance of the system. Ms. Altamirano Rayo is State’s Chief Data Scientist responsible for AI. 

Briefing: “Piloting Machine Learning for Freedom of Information Act Requests”

Background

Mr. Stein reported that the Department of State has undertaken pilot programs using machine learning for document review. He highlighted the importance of working with other agencies and groups. He described how State leveraged technology to address the growing volume of requests for information, and how it considered and mitigated ethical issues, biases, and other risks. Lessons from a pilot on declassification informed subsequent pilots on FOIA. For context, he showed a graph depicting the number of diplomatic cables that require declassification review each year. Each cable must be reviewed 25 years after its creation. Twenty-five years ago, 100,000 cables were sent annually, and reviewing them is a resource challenge. The volume of diplomatic communications has grown significantly, due in large part to the advent of email in the 1990s and its increased use. This compounds the challenge of conducting reviews with the available level of resources, and projections show continued growth.

Mr. Stein stated that the declassification machine-learning pilot was successful, and the FOIA pilots regarding customer experience and search are ongoing. Mr. Stein handed the floor to Ms. Altamirano Rayo.

Ms. Altamirano Rayo stated that the goal of State's Center for Analytics (CfA) is to provide insight that will drive diplomacy at the highest level. CfA started three years ago and has grown based on the demand for data-informed and evidence-based decision making. Its leader, Dr. Matthew Graviss, is State's first-ever Chief Data and AI Officer. Under his leadership, CfA has developed State's first-ever enterprise data strategy and enterprise AI strategy. State is working to ensure responsible, safe, secure, and trustworthy AI usage; the agency is working to "do it right." She noted that updates on the Center for Analytics are available on the State Department website.

Ms. Altamirano Rayo reviewed the definitions of certain terms. She stated that the Foreign Affairs Manual defines data as "recorded information regardless of form or media on which it is recorded" and gave the example of staff demographics, which State makes available online. The definition of "AI" aligns with the National Defense Authorization Act; it is more complex than the definition of "data" and has five bullets. She showed a slide defining Generative AI as a category of AI that generates new data. Ms. Altamirano Rayo noted that a well-known example of generative AI is ChatGPT, which generates human-like text based on prompts from a text box. Other applications can create pictures and audio. "AI Use Case" refers to any Department application or use of AI to advance its work. "AI Service" is an application or tool that uses AI capabilities from a third party. Finally, "Discriminative AI" remains undefined in the Foreign Affairs Manual for now, but Ms. Altamirano Rayo stated that it is a model that learns and distinguishes differences in data to predict labels or classifications.

Mr. Kirby said that the Department of State's AI pilots were made possible because of advancements in electronic records management. The e-records archive initially captures records, a streamlined workflow allows groups to retire records more easily to the archive, and the addition of 70 metadata elements helps searches for emails, files, and cables. The archive captures over two million unique records every day and currently contains over three billion unique records.

Mr. Stein clarified that Ms. Altamirano Rayo, Mr. Kirby, and he work in different bureaus of the State Department. He said that a common question is whether this application of AI to FOIA and records management would work at other organizations. There are many variables and factors to consider, but the biggest question is what it takes to use machine learning and AI. Several things were necessary in State's case: on the records front, the e-records archive created the foundation of data that made the work possible; Departmental support, including establishment of the Center for Analytics; an AI program with policies and partnerships; and resources.

Mr. Stein introduced a deeper dive into the pilots: one on machine learning for declassification reviews and two for FOIA. The Partnership for Public Service offers training to teach senior leaders about the building blocks of AI policy and to give them the opportunity to collaborate with other senior leaders who are learning about AI. When Mr. Stein took the course from October 2021 to May 2022, he realized that he had the opportunity to use State's e-records and Center for Analytics to try a pilot with declassification of records that are 25 years or older. State was previously conducting a manual review.

Mr. Kirby stated that the e-records platform currently supports over 25 use cases across different bureaus, such as FOIA, litigation, historical research, diplomatic security investigations, and many others. For this specific declassification effort, they developed a separate module that helps by automatically queuing up the records eligible for review; until now this had been a manual effort.

Declassification Pilot

Mr. Stein reported that at the start of the pilot, they had approximately 100,000 cables to review. They took manually reviewed records from 1995-1996 as a baseline to train the AI. Through the technology implementation, they learned that they could improve the declassification review process, improve collaboration with partners and other agencies, and identify quality-control steps to strengthen and unnecessary steps to cut out. Data scientists looked at the process and asked objective questions to help reengineer it.

Mr. Stein showed a slide of the project charter; it was the pilot proposal from the actual course. The challenge was to use technology to review records of which they declassified 98 to 99% each year. The pilot was to see if they could train the model to do this. It ran from October 2022 to January 2023.

Ms. Altamirano Rayo reported that machine learning is a subset of AI. The pilot used machine learning in the declassification process. They trained an algorithm on the manually declassified cables from 1995-1996, then applied the model to unreviewed cables from 1997. Over 300,000 classified cables were used to train the model. They had five data scientists developing and training the models. Then, for every cable in the next batch processed, the model created a confidence score from 0 to 1. The data scientists determined score thresholds. Cables that the algorithm scored from 0 to 0.1 could "confidently" be declassified; cables scored from 0.9 to 1 should most likely remain classified. Everything scored in the middle would require a close manual review. This technology does not replace human review 100%, but it leverages AI to do the tedious parts, leaving critical decisions to human beings. She reported that the model reviewed just over 78,000 cables from 1997, and they used both the model and manual human review. The comparison let them understand the effectiveness of the model. Overall, in the pilot program, the AI achieved around 96% agreement with human reviewers while reducing the manual review burden by up to 63%.

She stated that the AI review of cables from 1998 was fully operationalized. There was no human reviewer for many of those cables, so there is no error rate. However, that does not mean there were no human reviewers in the entire process. For the 47,000 cables that AI did not have a high confidence to declassify or keep classified, there was a manual review. For quality control, humans also reviewed a random sample of the cables for which AI had confidence. State has confidence in the accuracy, and AI review is getting better over time.
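The triage described above can be illustrated with a short sketch. The 0.1 and 0.9 thresholds come from the briefing; the record structure and scoring function are illustrative assumptions, not State's implementation.

```python
# Minimal sketch of confidence-score triage, assuming a model that returns a
# score from 0 (confidently declassify) to 1 (confidently keep classified).
# score_cable and the cable objects are hypothetical placeholders.

def triage(cables, score_cable, low=0.1, high=0.9):
    """Sort cables into declassify / keep-classified / manual-review buckets."""
    declassify, keep_classified, manual_review = [], [], []
    for cable in cables:
        score = score_cable(cable)          # model confidence for this cable
        if score <= low:
            declassify.append(cable)        # high confidence it can be released
        elif score >= high:
            keep_classified.append(cable)   # high confidence it should stay classified
        else:
            manual_review.append(cable)     # uncertain: route to a human reviewer
    return declassify, keep_classified, manual_review
```

In this framing, only the middle bucket requires close human review, which matches the briefing's point that AI handles the tedious portion while humans make the critical decisions.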

Ms. Altamirano Rayo reiterated the theme that the manual review process of cables is not sustainable because of the upcoming surge in records that will need review. This small-scale pilot offered a proof of concept to scale and integrate this technology into the declassification process. Applying this model to the 1998 cables and continuing to use it in future years will allow State to face the surge. During the pilot, they learned collaborating with State Department historians would help strengthen future declassification review models, too.

She stated that they are considering auto-redaction of email records. The number of classified emails doubles every two years, rising to 12 million emails in 2018.

Mr. Stein stated that AI review would allow State to proactively disclose cables, which will start later this year. He shared some lessons learned. First, he stated the importance of quality data and of doing the initial training on a similar data set: in this case the cables, which are all similar in layout. There were challenges when new data sets were introduced: for example, an email with several types of attachments. They will have to identify types of records that lend themselves to AI review — emails, memos, and others — and train this model or create new ones to process them, starting small and building from there. A second lesson learned was that partnerships are critical to success. Mr. Stein stated that he enjoyed working with Mr. Kirby and Ms. Altamirano Rayo and their respective teams and that they learned from each other. He shared basic best practices for innovative projects: starting small; identifying a project and scope; being open to results and feedback; learning from failure; and sharing what did and did not work. This process has led to dialogues in different interagency communities on what is and is not working. Sharing the limits of AI helps other groups avoid jumping to conclusions that they can use AI for everything. He emphasized the importance of quality-control checks for results, stating that recurring sustainable success requires ongoing training with human input.

FOIA Pilots

Mr. Stein reported that after the successful pilot categorizing which cables would need review, they turned to FOIA. They looked at what worked well from the declassification pilot that could apply to work in the FOIA community. They recognized that the technology is not at the point of applying redactions yet, especially since many of FOIA's nine exemptions are so nuanced, particularly Exemption 3 and Exemption 5. If State is training a model to apply redactions, especially under the foreseeable harm standard, it needs to ensure it can do so well. The technology so far only works well with clearly identifiable data points: email accounts, names, and other privacy data.

Mr. Stein stated that the current technology is successful at sorting large volumes of data, making connections, and "seeing" things that a human reviewer may miss. Their first FOIA pilot was about customer experience on public websites and how to improve the website. They wanted to try automating the process of engaging with the public by using AI-generated indexing, which would help researchers find existing records that are already available to the public. They sought to automate customer engagement early in the process by using AI indexing to help narrow FOIA requests, to point out proactively released material, and to alert requesters as new records are proactively released.

Mr. Stein reported on the second FOIA pilot: for multiple requests on a single topic or event, they wanted to try using AI for one big search to get information to requesters. This would reduce duplication of efforts while improving speed and search time. Discriminative AI could identify the electronic records that are potentially responsive, allowing State to begin review immediately.

Mr. Stein summarized lessons learned to date from the pilots on FOIA and declassification: Managing data and records is critical to success. It is important to start small, take risks, consider bias, be open to results, and share results.

Mr. Stein concluded the presentation by inviting feedback on State’s FOIA program through State's FOIA website.

Questions

Ms. Semo announced that Ms. Altamirano Rayo would have to leave soon and solicited questions for her from Committee members.

Ms. Harper asked if there had been a component of the FOIA pilot that learned from documents that have been appealed or litigated.

Mr. Stein stated that they have not done it yet, but it is a great idea.

Ms. Altamirano Rayo thanked the participants and departed.

Dr. Nichter stated that in his experience, AI surprises with what works well and what does not work well. He asked what they learned, especially what were their pleasant surprises.

Mr. Stein shared that in March they briefed the Historical Advisory Committee and went public with the pilots in the Chief FOIA Officers report. Initially the model had 50-60% accuracy, and the team learned that finding the right terms was vital. A major takeaway was to stay patient while they discovered the correct terms. They also learned that by putting out more information they got more feedback from end users, which itself was helpful data to train the AI model. The difficulties now are in training the model with emails and other record types that are not as standardized as cables, and training the model to recognize what to send to another agency for referral or consultation. The model was not always successful in determining what needs to go to other agencies, because the volume of potential training material is much smaller. The initial results sometimes had issues because of how the data was structured.

Mr. Kirby added that the cables were a good place to start because they are so structured, with uniform headers, format, tags, and captions. The data from 1997-98 did require some cleanup on the post names, embassies, consulate types, and similar things. The lack of structure in memos and email records will be a challenge, so starting small with cables was the right approach.

Mr. Baron asked how much State had engaged with the e-discovery community.

Mr. Stein responded that State created e-records as part of the OMB/NARA mandate to store all records electronically. At the beginning of the pilot, they reviewed existing technology. They found great tools that were very expensive. It became a question of affordability, as well as interoperability with department networks and infrastructure. They are not partnering with anyone in the legal services industry.  AI plus AI plus AI does not equal super AI. They must be careful because layering these tools on top of each other might get a worse result in the end. They could also have problems moving things from one system to the other. He asked Mr. Baron to submit any suggestions.

Mr. Baron suggested an offline conversation. Mr. Baron's second point was that he has done research with MITRE on Exemption 5 deliberative process material. His research shows machine learning methods are about 70% accurate in sorting and ranking documents that fall within or outside of Exemption 5. Lastly, he noted that even with 98% accuracy, at a large scale the 2% error rate could amount to tens of thousands of records. What would the response be to senior leadership if one of those improperly redacted records became a newspaper headline?

Mr. Stein acknowledged that mistakes happen but that the status quo of manual review was undesirable. The risk appetite changes. Items released 40 years ago could once again be sensitive today. Humans make mistakes, too.

Mr. Garcia-Malene noted that both humans and machines make mistakes, and that the error rates indicate the model leans toward protecting information. He asked how they thought about what AI error rates they are comfortable with.

Mr. Stein stated that the question was similar to the previous question about a 2% error rate on a large volume. He gave the example of their previous process using Boolean logic, searching for exact words or phrases. The searches were for key terms and yielded all records that included them. The new approach with AI gives a better understanding of connections: AI is able to learn the various nicknames and titles that may refer to the same person, which the old approach could not. That allows for better automated search and declassification review.
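As a rough illustration of the difference Mr. Stein described, the sketch below contrasts an exact-term search with one that also matches known aliases. The alias table and sample records are invented for the example; the minutes do not describe State's actual tooling.

```python
# Illustrative contrast between exact-term (Boolean-style) search and
# alias-aware search. Records and the alias table are hypothetical examples.

records = [
    "Meeting readout: the Secretary discussed the treaty draft.",
    "Cable: Sec State met with the foreign minister on Tuesday.",
]

aliases = {"secretary of state": {"the secretary", "sec state"}}  # hypothetical alias table

def exact_search(term, records):
    # Old approach: return only records containing the literal term.
    return [r for r in records if term.lower() in r.lower()]

def alias_search(term, records, aliases):
    # Newer approach (simplified): also match known nicknames and titles.
    variants = {term.lower()} | aliases.get(term.lower(), set())
    return [r for r in records if any(v in r.lower() for v in variants)]

print(exact_search("Secretary of State", records))           # finds neither record
print(alias_search("Secretary of State", records, aliases))  # finds both records
```

The point of the example is only that learning which nicknames and titles refer to the same person lets an automated search retrieve records a literal keyword search would miss.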

Mr. Kirby stated that they use the AI model to pre-review before having humans review, but the humans can then double-check anything if they want to do a deep dive. Early in the process, they fed to the AI model the 1997 cables that had already been 100% human reviewed. Then they met with the human reviewers to analyze conflicts between the human output and the AI output. In many cases, the reviewers subsequently agreed with the AI review over the manual review. And there are instances where two human reviewers get different results, so a 2% error rate from the model may be better than the human error rate.

Mr. Stein added that a human might deny an entire record under Exemption 1, when some of the information could be released. So humans may over-withhold, when AI could do a more granular job.

Ms. Jewett asked what State would say to smaller agencies that may have to rely on private-sector AI, and what they would say to critics who might not be comfortable with that privatization.

Mr. Stein invited those agencies to talk to the Technology Committee at the Chief FOIA Officers (CFO) Council. The Technology Committee has experts who’ve been in various situations and have seen various issues. They would advise on the requirements to include in a contract. They would find an expert who has relevant experience to advise smaller agencies.  

Ms. Weth asked the speakers to elaborate on how federal agencies could participate in and benefit from the program Mr. Stein referenced.

Mr. Stein stated that the Partnership for Public Service has a free AI course for GS-15s and members of the Senior Executive Service to socialize concepts around artificial intelligence at the executive level and get leaders to think about policy. Considerations include bias, ethics, and program development. He also recommended reaching out to the CFO Council Technology Committee and reading articles and journals.

Mr. Chalmers asked about objections they encountered from enterprise architecture and privacy and how they overcame those objections.

Mr. Stein reported that they socialized the pilot well ahead of time to understand what the concerns would be, and by doing so they avoided speed bumps. The delay in starting the pilot gave more time to socialize it with partners. People were interested in trying something different, and there was enough interest in the innovative technology that people did not raise concerns. Now, at the point of releasing information, some concerns are coming up about statutory information, privacy, and so forth. They are implementing an additional check before they release anything. Every agency might have unique concerns and sensitivities, and it is possible that as they extend the pilot to subsequent years of records, they may have to rethink the program.

Mr. Baron strongly recommended that, to the extent federal agencies buy commercially available AI tools, it should not be the vendors who train the AI but rather in-house FOIA experts. Whether an agency uses federal employees or contractors is a separate question, but people with experience should ensure that the training goes right. With that said, every agency should consider reaching out to the private sector to survey what is possible.

Mr. Stein responded that for all the outreach, he was surprised at how little feedback they got in response to the requests for information (RFIs) to see what is out there in terms of tools or technologies. The CFO Council Technology Committee reviews the feedback it gets. State's pilots show the need for better shared platforms, other than email, for agencies to process FOIA requests. The current process is wildly inefficient and takes too long. If agencies had better technology to help collaborate, in particular with referrals and consultations, or even internally at times, the FOIA process could be improved.

Mr. Garcia-Malene asked if the pilot did not immediately reveal opportunities for machine learning in FOIA document review.

Mr. Stein stated that the pilot showed AI is not ready to apply redactions. But in terms of searching and sorting, a search across classified and unclassified networks could return two million hits. The time savings from using AI to search and sort are very important.

Mr. Stein also stated that State reviewed the MITRE tool, and it has great possibilities. Although, how it works with a record set, archive, or case processing tool, is going to vary by agency.

Mr. Kirby added that the analytic team’s other projects have shown the value of incorporating metadata into an archive. For example, they have added entity extraction—identifying key people, places, organizations, etc.—to the metadata. They have also added a sentiment score to every record in the archives to tell if the tone of the record is positive or negative. Also, there are lots of subscription-type emails from newspapers, newsletters, and mailing lists, which create hits on searches. AI can automatically tag the top senders who exclusively send subscription type emails, to filter those out with a single click.
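A rough sketch of the kind of metadata enrichment Mr. Kirby described appears below. It is illustrative only: the record structure, the entity and sentiment functions, and the subscription-sender flag are assumptions for the example, not State's implementation.

```python
# Illustrative sketch of enriching archived records with searchable metadata.
# All type names and helper functions here are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class Record:
    sender: str
    text: str
    metadata: dict = field(default_factory=dict)

def enrich(record, extract_entities, score_sentiment, subscription_senders):
    """Attach entity, sentiment, and subscription-flag metadata to a record."""
    record.metadata["entities"] = extract_entities(record.text)    # key people, places, organizations
    record.metadata["sentiment"] = score_sentiment(record.text)    # e.g., -1.0 (negative) to 1.0 (positive)
    record.metadata["is_subscription"] = record.sender in subscription_senders
    return record

# A search interface could then filter on this metadata, for example excluding
# records where metadata["is_subscription"] is True with a single click.
```

The design point is that metadata added once at ingest (entities, tone, sender type) can be reused by every later search, rather than recomputed for each FOIA request.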

Ms. Pavlik-Keenan asked if the pilots were part of the project Mr. Stein started in class.

Mr. Stein confirmed. He stated that one of the slides showed his actual charter from the project. He said that it was exciting to have a vision and see it come through. There was a risk because it might not have worked. It worked in this instance, and overall it has sparked energy and excitement around the ways technology could be used for records access.

Ms. Semo, seeing no other questions, thanked the speakers, and began an 11-minute break until 11:50 a.m.

After the break, Ms. Semo turned to the subcommittee reports and introduced the Resources Subcommittee.

Subcommittee Reports

Resources Subcommittee

Dr. Johnson reported that the Resources Subcommittee had completed interviews of high-level FOIA officials and the survey of FOIA professionals. Members are planning to analyze the responses. The survey received approximately 150 complete responses. Preliminary results include:

  • 77% of respondents noted the need for more resources to properly implement FOIA.
  • When asked about the greatest need in their office, 53% stated more staff, 21% stated more technology, and 21% stated more training; the remaining responses selected "other."
  • 54% of respondents had considered leaving their positions, with the two most common reasons being higher-grade opportunities and a concern over a lack of needed resources.

Dr. Johnson reported that the Resources Subcommittee is exploring a number of recommendations for practical solutions to aid agencies in bringing on additional FOIA staff resources when needed. One recommendation being explored is that the General Services Administration (GSA) add FOIA contractor services to the GSA Schedule to expedite hiring contractors for FOIA. Another potential recommendation involves modifying the career ladder for Government Information Specialists. A third potential recommendation would be allowing the direct hiring of FOIA specialists through the excepted service rather than requiring full competitive hiring.

Mr. Chalmers noted that one of the main findings from respondents is the difficulty in hiring and retaining quality FOIA people. FOIA jobs tend to have a lower cap on the pay scale than other jobs, so raising the career ladder would promote retention; people would stay longer to get a higher grade. The CFO Council's Committee on Cross-Agency Collaboration and Innovation (COCACI) also is looking into this issue, and the Subcommittee is coordinating with COCACI. Direct hiring authority would alleviate the lengthy hiring process to fill open positions by allowing exempt hiring. FOIA is an essential function, like IT and cybersecurity, so FOIA offices should also have the flexibility to hire. The GSA Schedule is a schedule of different goods, services, and vendors that GSA has prequalified, so agencies can simply write a task order without doing a full procurement. Full procurement is a slow process with the potential for protest by vendors who were not selected.

Ms. Jewett stated that this would not replace full-time permanent employees. Full-time employees are preferable, but there should be flexibility for when an agency needs temporary help, for example a small agency that has a small FOIA staff that suddenly receives many requests. They might only need someone for a very limited time. Inclusion in a GSA Schedule could save months in hiring contractors.

Implementation Subcommittee

Dr. Cuillier reported that the Implementation Subcommittee was making progress on examining the status of the 51 recommendations passed in the four previous terms of the FOIA Advisory Committee. There is a working group reviewing Chief FOIA Officer reports and planning a survey and interviews of Chief FOIA Officers to gauge progress on another dozen recommendations. The subcommittee hopes to have preliminary conclusions to report by the December meeting.

Modernization Subcommittee

Mr. Garcia-Malene stated that the Subcommittee had collaborated with NARA and DOJ to create a Chief FOIA Officers Council memorandum to remind Chief FOIA Officers of the August 2023 deadline for interoperability with FOIA.gov; to remind Chief FOIA Officers that FOIAonline was being decommissioned;  and to share best practices of customer service.

Mr. Marshall provided an update on the Model Determination Letter and stated that the Subcommittee received more comments and more engagement than expected. A working group has been analyzing the comments and discussing them in its biweekly meetings, and the feedback has created a better work product.

Mr. Baron appreciated Mr. Howard's efforts in advocating for engagement with the wider federal community about the sunsetting of FOIAonline and thanked Ms. Semo and Mr. Talebian for issuing the CFO Council memorandum.

Mr. Baron reported that the Subcommittee is discussing whether there should be some follow-up by the Committee to see how agencies have implemented OMB's goals and the goals of the memorandum in terms of preservation of FOIA requests during the transition to new platforms. The Subcommittee is also considering how to foster a dialogue between agencies and requesters early in the process. The volume of records is tremendous, especially in light of the 2024 mandate from OMB and NARA for the entire government to transition to electronic recordkeeping.

Ms. Semo asked for the status of the Model Determination Letter and whether it could be presented at December's meeting.

Mr. Baron answered that he could not commit the Subcommittee at this time on whether the letter would be ready by December, and noted that members want to consider comments from the public and from OIP. Mr. Garcia-Malene stated that the Subcommittee is dedicating quite a bit of energy to finalizing a good product as soon as possible.

Public Comments

Ms. Semo opened the floor to public comments and stated that all oral comments are captured in the transcript of the meeting and in the recording available on the NARA YouTube channel.

Ms. Semo reminded participants that public comments are limited to three minutes per person. She asked Ms. Mitchell if there had been any relevant questions or brief comments via Webex chat.

Ms. Mitchell read two questions into the record. First, why did OGIS and OIP disable the chat function [in YouTube]?

Ms. Mitchell responded that she cannot speak for the Department of Justice, but the National Archives values citizen participation. Under FACA, any member of the public may file a written statement with the Committee. Also any member of the public may speak or otherwise address the committee if agency guidelines permit, which the National Archives’ guidelines do.

Ms. Mitchell read the second question into the record. What funding levels do OIP and OGIS need to execute their missions and develop employees?

Ms. Mitchell responded that she cannot speak for the Department of Justice. She shared with Webex attendees a link to NARA’s  congressional budget justification for Fiscal Year 2024: https://www.archives.gov/files/about/plans-reports/performance-budget/2024-nara-congressional-justification.pdf.

Ms. Semo thanked Ms. Mitchell and gave Mr. Talebian an opportunity to answer.

Mr. Talebian responded that DOJ and OIP very much value public participation and engagement. The public comment periods in the CFO Council meetings are important. As far as budget and funding, all organizations could use more resources. The Department is very invested in the mission of OIP. While Mr. Talebian did not have OIP's budget handy, he confirmed OIP is well supported by the Department.

Ms. Ridley did not see any callers in queue to make public comments.

Ms. Mitchell read into the record a comment regarding minutes of the meetings of the FOIA Advisory Committee and the Chief FOIA Officers Council. Ms. Mitchell clarified that they are two separate bodies and that FOIA Advisory Committee minutes are governed by the Federal Advisory Committee Act while Chief FOIA Officer Council minutes are governed by FOIA.

Ms. Ridley announced that there was a caller.

Mr. Robert Hammond stated that he has submitted written comments that were not posted. He finds the requirements of OGIS's public posting policy unnecessary, stating that they diminish the impact of PDF presentations. He stated that NARA and DOJ OIP have disabled the chat function in YouTube, which he said deprives citizens of the opportunity to participate contemporaneously in open meeting discussions. He asked the FOIA Advisory Committee to change the bylaws to include YouTube. He thinks that not saving comments from the chat window appears to be a violation of FACA and other laws. Finally, he stated that he has requested additional funding for OGIS and OIP for years, and that OGIS and OIP do not advocate for themselves.

Ms. Ridley did not see additional commenters in the queue.

Closing Remarks and Adjournment 

Ms. Semo reminded the Committee that the next meeting would be on Thursday, December 7, 2023 at 10 a.m. Eastern time.

Ms. Semo asked if there were any final questions or comments from the Committee members.  Hearing none, she adjourned the meeting at 12:26 p.m.

I certify that, to the best of my knowledge, the foregoing minutes are accurate and complete on November 16, 2023. 

/s/ Kirsten B. Mitchell 

Kirsten B. Mitchell

Designated Federal Officer,

2022-2024 Term

 

/s/ Alina M. Semo 

Alina M. Semo

Chairperson,

2022-2024 Term 

 
