So how about sharing some of the protocols and testing methods you used... is there anything like an ASTM standard for what you were doing?
Did you randomly select masks from the general public, market retailers, etc.?
What type of equipment did you use to emulate breathing, coughing, sneezing, etc.? What was used to monitor them?
Gladly, but this will be long.
First, as clarification, my research is not specifically on mask effectiveness. We are not looking at that direct problem, but are trying to get a more fundamental understanding of how droplets are produced and move through the environment during speech (without a mask) first. Future work will then add masks to see how that changes things. Think of it as trying to answer the question "Does the 6-foot separation rule really protect you if you aren't wearing a mask?" The answer to date is, "it probably depends". Outdoors, probably. Indoors with poor ventilation, probably not. Ventilation strategies, traffic patterns in buildings, etc. will also have an influence.
On to your question.
There are protocols a mask must meet to be considered N95 (it must filter at least 95% of test particles with a specified size distribution). However, there are currently no testing standards for evaluating how well a mask prevents transport of the SARS-CoV-2 virus. So researchers use the techniques they know/have to investigate, while acknowledging the shortcomings.
For example, the article I posted earlier uses a laser light sheet to illuminate the air in front of the mask while someone speaks/coughs/etc. If powerful enough (a key point), the laser illuminates the droplets, and a camera records them. A simple program can be written to "count" the droplets. If the resolution of the camera is sufficient, you can even get a rudimentary estimate of the distribution of sizes. The shortcoming is that the laser sheet is just that, a sheet of light, not a volume. So you aren't measuring the total number of droplets, but rather the ones that pass through the light sheet. It doesn't work for absolute counts, but for making relative comparisons of one mask to another, it's not bad. The other challenge is that the laser power has to be sufficiently high to illuminate the very, very small droplets, and the camera resolution has to be sufficiently high. This is not trivial, and likely leads to undercounting the smaller droplets. Still, this is a down and dirty, cheap, effective approach.
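To make that counting step concrete, here's a minimal sketch of what such a program might look like, assuming the camera frames are already loaded as grayscale arrays. The threshold and the synthetic test frame are illustrative assumptions, not values from the actual study:

```python
# Minimal sketch of the "count the droplets" step for one light-sheet image.
# Threshold and synthetic frame are assumptions for illustration only.
import numpy as np
from scipy import ndimage

def count_droplets(frame, threshold):
    """Count bright blobs (illuminated droplets) in one light-sheet image."""
    binary = frame > threshold                 # keep pixels lit by the laser
    labels, n_blobs = ndimage.label(binary)    # group connected bright pixels
    # Blob area in pixels is a crude proxy for droplet size, limited by
    # camera resolution, as noted above.
    areas = ndimage.sum_labels(binary, labels, index=range(1, n_blobs + 1))
    return n_blobs, areas

# Synthetic frame: dark noisy background with a few bright specks
rng = np.random.default_rng(1)
frame = rng.normal(10, 2, size=(256, 256))
for y, x in [(40, 60), (120, 200), (180, 90)]:
    frame[y:y+2, x:x+2] = 200

n, areas = count_droplets(frame, threshold=100)
print(f"{n} droplets crossed the sheet; pixel areas: {areas}")
```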
The gold standard is using an Aerodynamic Particle Sizer (APS) (not cheap). Think of it as speaking/coughing/breathing into a funnel. The funnel is connected to the APS, which draws everything expelled into the machine, which measures the size of each droplet and counts how many there are. Challenges such as calibration, accounting for droplet deposition in the funnel, evaporation of droplets before they are counted (which decreases their diameter), etc. mean this is not an accessible piece of equipment that someone is just going to pick up and take some measurements with one afternoon. There are various groups that use these machines (including at my institution) for other particle science measurements, and so have been able to pivot to COVID-19 related research quite quickly. As previously mentioned, Linsey Marr at Virginia Tech is the preeminent expert in this field. Measurements with APSs show the same results: face coverings prevent roughly 60-90% of droplets from being expelled into the surroundings.
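For a sense of how APS output turns into a "percent blocked" figure, here's a toy reduction comparing size-binned droplet counts with and without a face covering. Every number is made up for illustration; these are not measured data:

```python
# Toy reduction of APS-style size-binned counts to a blocking percentage.
# Bin edges and counts are illustrative assumptions, not measurements.
import numpy as np

bin_edges_um   = np.array([0.5, 1, 2, 5, 10, 20])      # droplet diameter bins
counts_no_mask = np.array([900, 650, 400, 150, 40])    # counts per bin (assumed)
counts_mask    = np.array([300, 180,  80,  15,  2])    # counts per bin (assumed)

blocked = 1 - counts_mask.sum() / counts_no_mask.sum()
print(f"Overall fraction blocked: {blocked:.0%}")

# Per-bin efficiency: masks typically do worst on the smallest droplets
per_bin = 1 - counts_mask / counts_no_mask
for lo, hi, eff in zip(bin_edges_um[:-1], bin_edges_um[1:], per_bin):
    print(f"{lo:>4}-{hi:<4} um: {eff:.0%} blocked")
```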
The problem with both of these approaches is that they tell you the size of the droplets and how many are produced, but not the velocity with which they exit the mouth, which influences how they mix into the environment. That is what my work is focused on. We primarily use a technique called particle image velocimetry (PIV) (e.g., LaVision's FlowMaster system). Again, not cheap, and not something you can just pick up and make measurements with. It's actually a very simple principle, though. We purposely put very small particles in the flow, so it can be assumed the particles "follow the flow". If you create a light sheet using a VERY powerful laser, the particles show up as speckles in the image. If you take two successive images, you get two images with speckle patterns. You can then use algorithms to track how far the particles moved from one image to the next. If you know the time between when the two images were taken, you can calculate the velocity as V_x = dx/dt (horizontal component of the velocity vector) and V_y = dy/dt (vertical component), and you have a snapshot of the quantifiable velocity field. The problem is you still only have a plane of data. I received a grant last year for a tomographic PIV system, which uses the same principle but measures the velocity in a volume rather than a plane (even more expensive). The problem is it was delivered one week after our University shut down, so we are still waiting to get it up and running, but should in the next month or so.
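If you want to see that principle in code, here's a stripped-down sketch of the cross-correlation step at the heart of PIV. The window size, pulse separation, and pixel calibration are assumed values, and real PIV software does far more (sub-pixel peak fitting, window deformation, outlier rejection), but the core idea fits in a few lines:

```python
# Bare-bones PIV principle: cross-correlate two interrogation windows,
# find the correlation peak, convert pixel displacement to velocity.
# dt, calibration, and the synthetic images are illustrative assumptions.
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate pixel displacement between two windows via FFT-based
    cross-correlation; the peak gives the most probable particle shift."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)               # put zero displacement at center
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    dy, dx = np.array(peak) - center           # displacement in pixels
    return dx, dy

# Synthetic example: a speckle pattern shifted 3 px right, 1 px down
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(1, 3), axis=(0, 1))

dt = 1e-3        # time between laser pulses, s (assumed)
scale = 50e-6    # image calibration, m per pixel (assumed)
dx, dy = piv_displacement(frame1, frame2)
vx, vy = dx * scale / dt, dy * scale / dt
print(f"V_x = {vx:.3f} m/s, V_y = {vy:.3f} m/s")
```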
We make these measurements using both human volunteers and physical models that replicate the human airway. Again, there are pluses/minuses to both approaches. The models provide repeatability, which allows large sample sizes to be assembled, which statistically reduces the uncertainty/error of the measurement system, which should be around 2-3% for a well designed experiment. Human data gives the "real" scenario, but the lack of repeatability means you can't assemble large samples, so the uncertainty in the measurement technique may be higher in parts of the flow. Again, it's a trade-off, so you use both approaches.
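The statistical benefit of the repeatable model runs is just the usual 1/sqrt(N) behavior of the standard error of the mean; a quick toy example, with an assumed true velocity and scatter:

```python
# Why repeatable model runs shrink uncertainty: the standard error of a
# mean velocity estimate falls like 1/sqrt(N). Values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_v, sigma = 2.0, 0.5          # m/s; assumed for illustration
for n in (5, 50, 500):            # number of repeated trials
    samples = true_v + sigma * rng.standard_normal(n)
    se = samples.std(ddof=1) / np.sqrt(n)
    print(f"N={n:>3}: mean = {samples.mean():.3f} m/s, std. error = {se:.3f}")
```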
The last challenge is that there is no known way to measure the droplet sizes and distribution (e.g., using the APS) and the velocity field (using PIV) at the same time. So we address this problem using Computational Fluid Dynamics (CFD), which solves the governing equations of fluid motion numerically on a computer. Again, way more complicated than just pushing a button to run it (it's an entirely different field of study/research within fluid dynamics, which is itself one very narrow field within the broader field of mechanical engineering). The catch is you have to "validate" the CFD code. We do this by first comparing its results with the experimental velocity results from the PIV measurements to ensure it is giving accurate data. Then, numerically, it is easy to put in ad-hoc particle sizes and distributions and track how they move in the flow field of speech/coughing/sneezing/etc., and how they are distributed throughout enclosures. We use the particle size distributions from the APS measurements to ensure we input correct particle sizes and distributions into the CFD simulations, again ensuring the particle dynamics are accurate.
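As a rough illustration of that particle-tracking step (not our actual CFD solver), here's a sketch that advances droplets of different sizes through a prescribed, made-up flow field using Stokes drag. It shows the basic physics: large droplets settle out quickly while small ones ride the flow. The flow field, droplet sizes, and time step are all assumptions; a real simulation solves the full flow equations instead of prescribing them:

```python
# Lagrangian droplet tracking through a prescribed (made-up) flow field
# with Stokes drag and gravity. Illustrative only; not a CFD solver.
import numpy as np

rho_p, mu = 1000.0, 1.8e-5        # droplet density (kg/m^3), air viscosity (Pa s)

def fluid_velocity(pos, t):
    """Stand-in for the CFD flow field: a decaying horizontal 'speech jet'."""
    x, y = pos
    return np.array([1.5 * np.exp(-t / 0.5) * np.exp(-(y / 0.05)**2), 0.0])

def track(diameter_m, t_end=1.0, dt=1e-4):
    """Semi-implicit integration of droplet motion (implicit drag keeps the
    scheme stable even for very small droplets with tiny response times)."""
    tau = rho_p * diameter_m**2 / (18 * mu)   # droplet response time
    g = np.array([0.0, -9.81])
    pos = np.zeros(2)
    vel = fluid_velocity(pos, 0.0)
    for step in range(int(t_end / dt)):
        u = fluid_velocity(pos, step * dt)
        vel = (vel + dt * (u / tau + g)) / (1 + dt / tau)
        pos = pos + dt * vel
    return pos

for d in (1e-6, 10e-6, 100e-6):   # 1, 10, 100 micron droplets
    x, y = track(d)
    print(f"{d*1e6:>5.0f} um droplet after 1 s: x = {x:.3f} m, y = {y:.4f} m")
```

Running it, the micron-scale droplets travel with the jet and barely fall, while the 100-micron droplet drops out of the flow within a fraction of a second, which is exactly the behavior that makes droplet size distributions matter so much.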
Probably more information than you wanted to know.