Which of the statements below correctly summarizes the discussion of the risks of using robots in the role of clergy, as presented in the text?
Robot priests can bless you, advise you, and even perform your funeral
By Sigal Samuel Updated Jan 13, 2020, 11:25am EST
A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. Mindar is a robot, designed to look like Kannon, the Buddhist deity of mercy, and cost $1 million.
As more religious communities begin to incorporate robotics — in some cases, AI-powered — questions arise about how technology could change our religious experiences. Traditionally, those experiences are valuable in part because they leave room for the spontaneous and surprising, the emotional and even the mystical. That could be lost if we mechanize them.
Another risk has to do with how an AI priest would handle ethical queries. Robots whose algorithms learn from previous data may nudge us toward decisions based on what people have done in the past, incrementally homogenizing answers and narrowing the scope of our spiritual imagination. One could argue, however, that risk also exists with human clergy, since the clergy is bounded too — there’s already a built-in nudging or limiting factor.
AI systems can be particularly problematic in that they often function as black boxes. We typically don’t know what sorts of biases are coded into them or what sorts of human nuance and context they’re failing to understand. A human priest who knows your broader context as a whole person may gather this and give you the right recommendation.
Human clergy members serve as the anchor for a community, bringing people together. They provide human contact, which is in danger of becoming a luxury good as we create robots to more cheaply do the work of people. Robots, notwithstanding, might be able to transcend some social divides, such as race and gender, to enhance community in a way that’s more liberating.
Ultimately, in religion as in other domains, robots and humans are perhaps best understood not as competitors but as collaborators. Each offers something the other lacks.
(S. Samuel, Robot priests can bless you, advise you, and even perform your funeral. Vox, 9/9/2019. Available at https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity. Accessed on 05/08/2020.)
Answer key commentary
Correct answer: D
Central theme: the risks and advantages of using (AI-powered) robots in the role of clergy. The task is to interpret how the author distinguishes technological limitations (bias, the "black box" problem, mechanization) from human functions (contact, context, community role) and proposes a complementary relationship between the two.
Theory recap and reading strategy: when solving synthesis-type interpretation questions:
- Look for concluding signals in the text (expressions such as "ultimately" and "perhaps best understood"); that is usually where the author sums up the position taken.
- Identify the arguments for and against: risks (mechanization, homogenization, black boxes, loss of nuance) versus potential benefits (transcending social divides, collaboration).
- Avoid options that distort or generalize beyond what the text states (e.g. "equal in every respect" or "we know exactly how the algorithm reasons").
Justification for option D (correct): the text concludes that robots and humans are best understood not as competitors but as collaborators, because each offers something the other lacks. Robots may transcend certain social divides, while humans provide context, nuance, and a community role. Option D captures exactly this idea of mutual complementarity.
Analysis of the incorrect options:
- A: states that robots and humans have the same capabilities and limitations. Wrong: the text highlights clear differences and proposes complementarity, not sameness.
- B: claims that robots are worse at fostering social relationships but give better advice. Wrong: the text points out that algorithms may homogenize answers and operate as "black boxes," potentially failing to grasp nuance; it never says robots give better advice.
- C: states that robots allow us to know exactly how their code and reasoning work. Wrong and contradicted by the text, which stresses the black-box problem and the uncertainty about embedded biases.
Sources and further reading: the cited article (Vox) and documents on AI ethics (e.g. UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2021) discuss bias, transparency, and social impact, all relevant concepts for interpreting the text.
Final exam tip: always read the author's concluding paragraph; the option that best synthesizes the text often reproduces the closing idea of balance, contrast, or recommendation.