It would be good to also take into account the types of dreams and how advertising content can manifest depending on the sleep stage, etc.
Hey Adam - in re-reading this I’m wondering whether the points we outline would benefit from being developed more, so that we support each proposed guideline with empirical and historical research.
That also opens up more avenues for publishing, e.g. see a couple on neuroethics in Trends:
Hi Michelle, agreed! I think this needs a rewrite based on all the comments. There seem to be clear issues of a) being specific about dream engineering ethical issues versus issues for all of science and b) sticking to empirically supported guidelines. I’d love to publish it, get it out there somehow. Yup.
These two are not ethics for dream engineering; they are ethics for all scientists already.
That said, I think #4 could include some specific reference to social/cultural variations in dream practices, i.e. our points should speak specifically to how this is an ethical consideration for dream engineering, not just general science.
This is a bit problematic in that many dream research experiments do seek to influence dreams in ways that may be unknown to participants.
Not sure what this refers to exactly; meaning in dreams is addressed (or at times even imposed) with non-tech methods as is. Also implies that each dream contains unique truths instead of multiple ways of being appreciated or understood. We could, however, develop techs that help people better understand their dreams.
You may want to add a line (or rephrase slightly) to cite an example of impact of olfactory stimulation on emotions in dreams, like the study by Schredl et al:
“Manipulation” may be more accurate than “control.” It also avoids the double use of “control” and “controlled” within the same sentence.
#1) I'm skeptical that dreams have a deep, hidden meaning in the first place; however, if they did, I don't see why technologies helping to decrypt them would be ethically problematic. Naturally, privacy issues would have to be considered, etc., but that's true already for encrypted versions.
#2) This seems somewhat vague to me, but maybe this is on purpose.
#3, #4) To me these don't seem very dream-engineering specific, but apply to other research fields similarly. I very much hope that dream engineers do not strive for world domination or genocide, and will not eat children; however, these wouldn't need to be an explicit part of any dream engineering ethics, even though they are ethically much more problematic than a WEIRD focus or risks of cultural appropriation (the latter being a very US-centric debate in my experience).
#5) Somewhat the same issue as #3/#4, even though here I see slightly more reason to believe that dream engineering results might have a higher chance of being oversold than some other fields of research. Would still doubt that it's so particularly special that it needs to be stated as an explicit dream engineering guideline.
#7) This seems to focus very much on consumer technology. In lab settings, where participants will spend a single night or two of their lives, I don't see why particularly strong efforts (and thus opportunity costs) should be spent on minimizing impact on sleep quality.
[add to paragraph]: These signals could have powerful effects on the content and functions of dreaming.
Although I still need to read the whole document, to me including ‘sleep’ this way dilutes the focus of the document. Presumably, influences on sleep are being included here because some cognitive sleep processes might underlie or modify dreaming? (that is only implied…)
How about keeping the focus on dreaming at first, e.g., ‘…developments that enable the intentional modification of dream mechanisms.’? Then later describe how sleep processes, too, might be implicated…
Another way to do this would be to cite more dream-related research in this intro (e.g., work by Carr, et al., Konkoly, et al., Picard-Deland, et al., Stumbrys et al, Appel, et al., Mota-Rolim et al., etc), rather than (or in addition to) the work on strictly sleep-related effects.
Is there an order in these points? E.g. is point 2) more important than point 7)? Should we re-order them?
change to “negative impact on sleep quality and sleep functions”?
I love that sentence!
First of all, thank you a lot for this text! It’s good to see that we share the same thoughts (and worries) about the implications of our research, and I think it is a good idea to publicly state that we keep the ethical aspects of our work in our minds.
I think we could guide the readers a bit better through the text, so that non-dream-engineers, non-native-speakers, non-academics can grasp our message more easily and don’t get lost. For example, we could insert something like this at the beginning of the text:
”This manifesto is written in response to new scientific developments that enable the direction of dreams -- dream engineering. In this text, we briefly describe what dream engineering is, why today’s technologies enable a quantum leap in dream engineering, what positive and negative implications this could have, and what the social consequences are if dream engineering is conducted without any ethical guiding principles. At the end of this text, we list a set of ethical principles that we as dream engineers agree on to follow. “
Just a first draft - not sure if I understood the meaning of each paragraph (or the whole text) right. And I guess this could be phrased better. But I think such an intro helps us to align internally and later to get our message across to all kinds of readers.
Might be redundant to say “absolutely clearly”; clear is clear already.
If you could rewrite this more simply it might be useful
The main point: brain science has informed the sale and manipulation of the waking attentional self.
Raising concerns of privacy and targeted advertising in dream engineering?
On top of what Paul and Ryan have said (which are all great points), I have the following general comments
There is a lot of overlap in this document between augmenting/manipulating dreams and the benefits of manipulations to improve memory etc. These benefits do not necessarily come about through altering dream content. Most TMR benefits seem to happen via SWS, when dreaming is least often reported (not entirely absent, granted). I guess what I am getting at is that some of these points go beyond dream engineering, and into more broadly sleep engineering. Imagine, in some dystopian hellscape, your phone played pro-Apple sounds at you during SWS. It could arguably be more insidious if these messages could be sent without any conscious awareness at all than if they had to infiltrate your dreams, where you would at least know your phone is playing propaganda at you…
In the age of big data storage, I think it would be worth adding a point to your constitution about data storage. In point 1 you say “your dream is your own, always”. Along those lines, where would people’s dream reports be stored? Who would be able to read/listen to them? How can participants get their records removed from the system if requested?
We might want to acknowledge that we do not yet have a grasp on what physiological processes we are inducing when we manipulate dreams, and that we will be cautious/mindful of this. For instance, what we do know is that the state of a lucid dream is somewhere in between REM and wake, and it is not a physiological state that occurs naturally very often. We obviously don’t know if there are long-term consequences to fundamentally altering the physiology of REM via lucid dreaming. It has also been suggested that lucid dreaming disrupts/diminishes sleep quality, and we know that poor sleep quality has a lot of negative side effects. Bjorn Rasch has also shown that TMR delivery at home impairs memory if sleep quality is disturbed. None of this is meant to be fear-mongering, and my personal hunch is that these new techniques can be administered without side effects (plus the potential upsides are huge, as we all know). Still, I think in an ethic like this, it should be acknowledged what we don’t know, and that we will be mindful of, and vigilant about, how our research impacts the quality of participants’ nocturnal sleep. I drafted a point to this effect (please change as desired).
I think the approach is admirable, but I do worry that it’s perhaps too vague to achieve its goal.
It sounds like we’re saying “We don’t want to use this technology to result in harm.” Again, that’s an admirable stance, although aside from explicitly saying “I have no intention to cause you, the participant, any harm” (and somehow having them take our word for it), I don’t know what the intended message is.
Also, as I noted earlier, even if the intention is to avoid harm, that doesn’t necessitate that it will never follow (so that’s not something that can be promised): Our intentions are often at odds with the outcomes of our research (put differently, sometimes shit doesn’t pan out as we expect, and that could include cases of inducing negative dreams/feelings, despite our best intentions to avoid doing so).
So I return to what appears to be the message here: “We promise that we won’t intentionally try to fuck with you.” It’s a good message, but I don’t know that we can go beyond it in any meaningful way (that is, we can promise to use our best judgement, but oftentimes our best judgement doesn’t yield the results that we want, and the results that are being promised here).
Happy to chat further about this. I do think it’s really important – I just think that we’ll want to try to add some more substance beyond “I promise not to attempt to hurt you with my manipulation.”
I agree with Paul that this is a good start to outlining an ethic. There is always risk of misuse of the techniques that could potentially be out of our control. Like all psychological interventions, however, we can emphasize empiricism and make explicit the potential benefits and costs, hopefully concluding, on balance, that progress in this area is good and not bad.
Here are some that come to mind:
Basic sleep science will benefit from a clear way to quantify conscious states in sleep.
Dreams can be of a positive or negative nature and can influence our waking decisions, perceptions and emotions. We aim toward the good – though we may consider inducing bad dreams or nightmares, the benefits of such experiments will always be weighed against potential harms.
We will aim for multidisciplinary collaborations including the psychological sciences, engineering, medicine, philosophy, etc. I think this will ensure we are considering all potential benefits and costs.
Induced dream phenomenology may be fundamentally different from spontaneous dream activity. Knowing this, we should aim not to overgeneralize findings that may pertain to human wellbeing or disease unless such findings can be determined empirically.
Commercial interests won’t supersede commitment to the scientific method (I think you say this, but it’s probably good to emphasize given the huge market for consumer devices these days).
#6 is understood, embraced, and powerful, especially as an ender, but if I took a stab, it’d be in the same voice as above.
This is tricky – we can’t know exactly how a dream will unfold after a manipulation. We may, for instance, aim to induce fanciful dreams, but accidentally induce traumatic dreams.
I agree with Paul here. While we might be arriving at a point where we can initiate a dream, and even guide its content, we still do not have any control over how the dream will unfold. This is always going to be a risk that will need to be disclosed during consent.
Rod Mullen: Sentence 2 of Statement #1’s “incubation of any dreams” may need a bit of lawyerly-like clarification, given our window into your mind.
Something about the storage/access to people’s dream reports. See my general comment at the end for more detail…
it seems to me the main body up top is mostly there to contextualize the need for the stated Ethics goal here in terms of, “If you thought privacy issues were sketchy with browsers, this tech is a whole OTHER level into your interior world, thus we’re assuring you these self-imposed limits [1-6].”
these are two major concerns - do the below guidelines adequately confront these? are there other guidelines people could suggest on this point?
I agree—we’re pilot testing a dream app now and we got a LOT of sensitive information in dream reports. I would suggest a couple of points:
1. We recognize that dreams are often personal and private. We avoid collecting identifying information whenever possible, and always disclose what information is collected and how it is used. We protect information using standard practices for secure data storage.
2. We do not sell or otherwise commercialize data from users
I think I would place this after the concrete example given about the smart speaker, and edit the language to fit the example. Sensors are not necessarily feeding back images of ourselves; there is a feedback loop, but it’s more an attempt to influence subsequent subjective experience.
I might edit to - “In addition to the sleep and dream engineering techniques above, parallel developments in sensor technology enable …”
This would include things like auto-suggestion techniques etc., which are part of dream engineering, too.
I wonder how inclusive you want to be here with technologies. For example, TMR/auditory stimulation of oscillations can be used to strengthen and weaken memories, but they are not necessarily related to dreams (though they could be). They feel like they fall outside the realm of specifically dream engineering and are more broadly sleep engineering. Not sure if you want to make that distinction, but I think it is important.