@LegendaryEnergy I know, I know, I know 🙋🏼♀️👇🏻 https://t.co/sb8A3cvFI8
Do u ever just see something & then start getting ads for something that only your eyes saw? Here’s how they do it. 👀
RT @salem_alelyani: Researchers at Japan's Osaka University have managed to reconstruct a high-resolution image of what a person is seeing by reading fMRI brain signals…
RT @ateoyagnostico: Did you know it's now possible to take images of what we're thinking? Recently, Mati presented this 2022 paper: High-res…
I'm sharing it again, because many people don't know about this.
Brain decoding is wild. Once the accuracy improves, you'll be able to peek into other people's dreams. https://t.co/AwhAhq8X4V
Source: https://t.co/qbynmm8cYz
@EspToTheFuture Remember this from last year? https://t.co/sAIX8cO2xY
RT @SoupTransphobes: Yes, it is indeed possible. Scientists have already accomplished the former. https://t.co/CyK7woChZA
@ThomasPurell @rufigruts Actually, yes. https://t.co/CyK7woChZA
For a long time, seeing dreams or memories on a screen, as in La Familia del Futuro, was thought unimaginable. Neuroscience fascinates me because, combined with so many other disciplines, it shuts us up and shows it really is possible: ht
@maer @1900HO But there's more, e.g.: https://t.co/eH0zTqzPzK
source: https://t.co/ScEapam9AT
By the way, English is not my first language and I didn't understand much of the paper. Here's the paper: https://t.co/bmLAPTT22N 3/3
@Eyaaaad Strange that it's only surfacing now, even though the paper was published back in late 2022 https://t.co/FaTxT7y0ZR
@Eyaaaad The research paper. The crazy part is that the paper already counts as old, imagine that https://t.co/g9SvJzbdH5 An interview with Yu Takagi on Al Jazeera https://t.co/nMQOYcySEW https://t.co/zyZWEXcLw2
@namamonowakaran Maybe the arXiv version got published? https://t.co/t3PzVfXobx
A paper published this past March https://t.co/IEEhG50uTR
Link to that study: https://t.co/BYbgF8bjVv
@SemiPerfectDren @3_deame @ChombaBupe @brandonwilson Actually, here, I found something: https://t.co/fGI8OM7rAC
@_akhaliq @mathieuvirbel @huggingface Someone feed it this! https://t.co/zFwotPUvwB
@bayesianboy for instance https://t.co/s9FNxvWjNP
@aangulithenerd @zyaeya @nosinsnolights @NaoChoccy not yet at least https://t.co/e3iwVZOWmt
RT @DrHuzam: Sources, for further reading 👇 ⭕️ https://t.co/WwC47zgSRi ⭕️ https://t.co/CMO0F2I3I7 ⭕️ the scientific paper https://t.co/xDd1XiBc5J 5️⃣ .
….and in 2022 we have latent diffusion AI models *generating* images from fMRI scans of the visual cortex https://t.co/sFZJXszlkl
@dmdebruijn Referencing this kind of study https://t.co/KK3dpzGvGh
RT @TsonevRostislav: AI reads minds from fMRI. https://t.co/NI7XnQhzKu https://t.co/VXqUHKggc1
@ClaireSilver12 I bet there will be an automatic1111 extension for that https://t.co/YQLIQX7Zwx
RT @GHnegreira: Alright, so now AI can literally read minds! On the top row are the images shown to some people, on the bottom row are the…
Researchers are getting closer to understanding how the brain represents the world. 🌍 👁️ Reconstructing visual experiences from human brain activity shows us the connection between computer vision models and our visual system. Read the research paper: ht
RT @zina_s0: This paper was sooo interesting https://t.co/Z5kY8sJvo4
These samples were generated from neural pathways and now I kinda want an mri scan to record my dreams. On that note, stable diffusion image generation is insane for this lol https://t.co/J9G0zMhEQS https://t.co/wrFBse4ERX
Linked below is yet another related article, High-resolution image reconstruction with latent diffusion models from human brain activity: https://t.co/gHtMC4ZIV7 The technologies described above are rapidly evolving from their infancy to a more mature st
@grassfednft @chelsssseeeea https://t.co/Yyqo4lXRwp https://t.co/WdLrtTtpXu this is the paper and github
FYI, #AI can now reconstruct: 🧠 Mental images (what you're *seeing*) from fMRI scans. https://t.co/r92zSOc36X 💻 Keystrokes (what you're *typing*) from sounds. https://t.co/hJU5DsYYzv 📺 Videos (what's *on your screen*) from radiation leaked by HDMI. Wil
RT @yu_takagi: Follow-up technical paper to our #CVPR2023 paper (https://t.co/Kz0SLoX1zd). Investigated how different methods affect the pe…
@DotCSV @ClaudioSensasao Text-to-Audio is honestly very good, but a Thought-to-Image paper came out recently that hasn't gotten much attention, and I think it's amazing https://t.co/IZ6oypwRrr
@KathrynTewson @Eodyne1 @chadcmulligan @riScorpian @Lormif1 @col_bosch @schwatd2 @JosephPoulin175 @NoLongerBennett This is your brain, decoded by an artificial neural network. https://t.co/f6dCpARVGd https://t.co/UPkZOoiDRX
#StableDiffusion can generate images based on your imagination! At first glance I thought it was impossible, but a second later, it turns out it's doable! https://t.co/inVwsKDfB2 Crazy
@tinyklaus Very relevant! Earlier this year, Japanese scientists were able to use AI to re-create images that subjects were visualizing in their heads. And the theoretical framework that made that possible was the holographic universe!!!! https://t.co/
RT @kencan7749: These studies provided codes/data in public, which are a great help in this reproduction Allen et al., 2022 https://t.co/S…
mind reading with stable diffusion https://t.co/7bu3fRch8Z
@Adobe @republic It's much the same thing: researchers have found a way to use AI to see exactly what humans are thinking. Full paper: https://t.co/ZYIAweueo8 https://t.co/CXJzUcqS06
These studies provided code/data publicly, which was a great help in this reproduction. Allen et al., 2022 https://t.co/SVE0bJSajs Takagi and Nishimoto, 2023: https://t.co/MdGMo5UkYo Ozcelik and VanRullen, 2023: https://t.co/GyfpK30qUp Shen et al., 2019:
@leavittron It's not bullshit, here is the study - https://t.co/APOgG7lC3f This is all very real and we should be very concerned
Feeding MRI results as prompts to a diffusion model to draw what the eye is seeing: that's a fun idea. https://t.co/UR8XI9T1dN There was a similar earlier study that fed MRI results as prompts to an adversarial model to draw the face the eye was seeing. https://t.co/1ZJ7uU6Naa
Mary's room, but with the plot twist that Mary is actually an AI all along.
RT @NishimotoShinji: At #CVPR2023 we presented research decoding the content of visual experience by showing the relationship between brain activity and image-generating AI. We also released a technical paper and code covering quantitative evaluations of combining caption generation, GANs, and depth estimation for decoding, and the effect of using foundation mod…
🧠💡Mind-reading is real! AI is piecing together the brain's jigsaw to see what you see. #AI turns brain activity into images #Neuroscience #Innovation https://t.co/W9I0tC5fKF https://t.co/M97OBE3bTU
At #CVPR2023 we presented research decoding the content of visual experience by showing the relationship between brain activity and image-generating AI. We have also released a technical paper and code with quantitative evaluations of combining caption generation, GANs, and depth estimation for decoding, and an analysis of the impact of using foundation models.
We (w/ @NishimotoShinji) have uploaded a technical paper on our #CVPR2023 paper (https://t.co/Kz0SLoX1zd). It investigates how various techniques affect the performance of visual experience reconstruction. The figure shows three randomly selected reconstructions from each method. https://t.co/4rT2pYo5Nr
Follow-up technical paper to our #CVPR2023 paper (https://t.co/Kz0SLoX1zd). Investigated how different methods affect the performance of visual experience reconstruction. The figure shows three randomly selected images generated by each method. https://t.c
hmm...🤔😲🤯 High-resolution image reconstruction with latent diffusion models from human brain activity https://t.co/RM8vYjFNRX
@drapr007 Baba jee, this is important, https://t.co/thN3STNmJw
@narendramodi Modi jee, this is important https://t.co/thN3STNmJw
@DrMohanBhagwat Respected Sir, this should get your attention. Jai Hind. https://t.co/thN3STNmJw
@GayatriTAM The research paper https://t.co/thN3STNmJw
@NSO365 The research paper on mind reading https://t.co/thN3STNmJw
@RadCentrism It was possible to do this w/ a roomba back in the day. Just a few changes to the signal map feature & boom. How about this one? https://t.co/7MqMiobx3W On the plus side: Folks in comas might be able to communicate w/ the waking world. D
Whoa
RT @OxlerCeo: Check out this study, the first to decode visual content from brain activity using latent stable diffusion mod…
Check out this study, the first to decode visual content from brain activity using latent stable diffusion models and MRI. High-resolution image reconstruction with latent diffusion models from human brain activity https
RT @gr3yg00: Creating images from the mind using stable diffusion on MRI data. https://t.co/BQYCV7HxNk Transhumanism is unstoppable :) htt…
@MrReciter @MariaZynis It was stable diffusion https://t.co/inldCyRTl3
RT @_akhaliq: High-resolution image reconstruction with latent diffusion models from human brain activity abs: https://t.co/FLZJvTjQai pro…
Here's the paper; it's free to read. The top images are what the subject was actually looking at; the bottom images were produced by a generative AI from the viewer's brain activity as measured with fMRI. This is amazing. https://t.co/ORx3l1kTK1 https://t.co/RZORhSczWZ
RT @heyBarsee: This is similar to this: Researchers have found a way to use AI to see exactly what humans are thinking. Full paper: https…
Insaneeeee