
A new grant awarded to a researcher from Virginia Tech illuminates an important but poorly understood fact about science and engineering – it’s all done by humans, and humans are not objective creatures.

In this case, the research will be done over the next five years by Alejandro Salado, assistant professor in the Grado Department of Industrial and Systems Engineering in the College of Engineering at Virginia Tech. He’s been awarded a $500,000 National Science Foundation Faculty Early Career Development award to study “how engineers form subjective opinions from objective data.”

It’s a question long addressed by historians and philosophers of science who have been pointing out for decades that scientists do more than simply crunch numbers in a vacuum. In fact, coming up with a question to research, choosing a method, and interpreting data are all matters of judgment and skill.

The same goes for engineers.

With news stories about algorithmic bias, for example, becoming a weekly occurrence, more people recognize that tools once seen as objective were in fact built by humans with their own social and cultural values. If we don’t recognize those values and attempt to correct for them, we end up with flawed products that can do just as much harm as good – if not more.

Any system made by a human involves a series of decisions, and unconscious biases can influence these at every level. Even artificial intelligence – which is, by definition, designed to be better than humans at interpreting data – has been found to be flawed on many levels. Why? Because humans built it.

In creating a deep learning AI algorithm, an engineer first has to consider what they want to achieve – the question itself (judging anything from attractiveness to creditworthiness to criminal behavior) is already a value-laden judgment. Who stands to benefit from this data? Why would we try to identify these elements in the first place?

Algorithms then need to be populated with data so they can be “trained” to recognize patterns. As we’ve seen from racist and sexist algorithms, data – even when it’s intended to be objective – can be collected in a subjective way. Facial recognition systems, for example, were trained on too few Black and Asian faces and are therefore now much more skilled at identifying Caucasian features.
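The effect of skewed training data can be seen in miniature. The sketch below is purely hypothetical – the groups, labels, and “model” are invented for illustration – but it shows the core failure mode: a system judged by overall accuracy on an imbalanced dataset can look excellent in aggregate while failing the underrepresented group entirely.

```python
# Hypothetical, deliberately tiny example: 95% of the examples come
# from group "A" and only 5% from group "B". A model that only ever
# learned group A's patterns still scores 95% overall accuracy.
data = [("A", "match")] * 95 + [("B", "match")] * 5

def predict(group):
    # The model recognizes group A correctly but fails on group B,
    # which it barely saw during training.
    return "match" if group == "A" else "no_match"

overall = sum(predict(g) == label for g, label in data) / len(data)

group_b = [(g, label) for g, label in data if g == "B"]
b_accuracy = sum(predict(g) == label for g, label in group_b) / len(group_b)

print(overall)     # 0.95 — looks good in aggregate
print(b_accuracy)  # 0.0  — the minority group is never identified
```

A vendor reporting only the first number would call this system a success; only the per-group breakdown reveals the bias.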

Bad algorithms – and bad products in general – are simply a huge waste of money. Take Amazon’s recently scrapped recruiting tool. Insiders told Reuters that they were building a program that would be able to scan applicant resumes to identify the best talent. The idea was that a computer program would be a lot more objective than a human. Except it’s not. The anonymous sources said that when the system scored candidates on a scale of 1-5 stars, it did so based on training data from resumes submitted to the company over a 10-year period, most of which came from men. The system then taught itself that the language used in men’s resumes was preferable and therefore so were male candidates. It got to the point where the algorithm penalized resumes that included the word “women’s.”
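A drastically simplified sketch of the mechanism Reuters described might look like the following. Everything here is invented for illustration – the resumes, the scoring rule – and Amazon’s actual system was far more sophisticated, but the failure mode is the same: when the historical “hired” examples skew male, words associated with women pick up negative weight.

```python
from collections import Counter

# Invented historical data: the resumes that led to hires skew male.
hired = ["captain chess club", "led men's soccer team", "executed project plan"]
rejected = ["captain women's chess club", "women's debate team lead"]

def word_weights(hired, rejected):
    pos = Counter(w for doc in hired for w in doc.split())
    neg = Counter(w for doc in rejected for w in doc.split())
    # A word's weight is how much more often it appears in hired
    # resumes than in rejected ones.
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

weights = word_weights(hired, rejected)

def score(resume):
    return sum(weights.get(w, 0) for w in resume.split())

# "women's" never appears in a hired resume, so it ends up with a
# negative weight and drags down any resume containing it.
print(weights["women's"])                   # negative
print(score("captain chess club"))          # 0
print(score("captain women's chess club"))  # -2
```

No one told this scorer to penalize women; it simply reproduced the imbalance already present in its training data.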

Any system created by engineers is therefore subject to human values in some way. Even the interpretation of data is subject to bias. In a press release announcing the new grant, Virginia Tech acknowledged this:

Given the same data, different engineers will make different decisions, but there are associated risks since the quality and safety of an engineered system or product is driven by the subjective mental processes of the engineer.

Salado’s research will hopefully help illuminate “how those subjective mental processes form and how they affect engineers’ decisions precisely.” He plans to run experiments on groups of middle school students, college undergraduates, and professional engineers to study the formation of their beliefs at each stage, specifically when they’re performing systems checks and verifying whether a system is working properly.

During this verification stage, patterns of thinking will emerge that Salado hopes will help engineers recognize when subjective and biased thinking is taking place.

Virginia Tech said:

Salado’s research can enhance the scientific understanding of verification in systems engineering, which will benefit society by enabling efficient system development with a more robust verification coverage and could lead to increased safety and efficacy of commercial products and services.

In the middle school group, Salado will study how students’ ideas about math problems evolve as they’re exposed to other ways of thinking about what is largely considered to be objective material.

“Math gives the wrong impression that engineering problems have a unique solution,” said Salado. “This is far from the truth. Instead, there are several solutions that may be correct, depending upon the mental model or problem definition that the engineer comes up with first.”

Salado will study undergraduate students throughout their college careers by exposing them to problems that can be solved in different ways by people with different mindsets, showing them that there is no one right answer to many real-world problems. He said the way we currently teach engineering only reinforces the idea that engineering is objective.

“I believe that the way in which we expose students to engineering problems in college is very misleading from the real practice of engineering,” he said. “In college, we give a problem to a student and we expect an answer that is as close as possible to a solution key.”

In his research with the group of professional engineers, Salado plans to examine the technical documents used to make predictions about how products should work before being implemented. The experiments will focus on how an engineer’s predictions might change if they are exposed to new information “which will allow Salado to understand how an engineer forms a subjective guess from objective data.”

This is by no means the only project with these goals going on right now. Hundreds of academics all over the world – including humanists and social scientists – are working on ways to illuminate the human element in science and engineering.

But the key to change lies in incorporating this information into engineering education (and the continuing education of established engineers). While professional associations and tech companies alike are calling for more transparency in engineering decision-making, the complex workings of the human mind have proven to be the ultimate opaque system, making it difficult to implement real change.

In the past, we’ve had to do damage control to correct for the assumption that engineers and their systems are objective. We can only hope that teaching engineers to recognize and anticipate bias at the beginning of their training is the way of the future.
