
AI as research tool

Morell

Acolyte
Joined
Jul 5, 2024
Messages
431
Reaction score
777
Awards
9
I see it like this: some in the occult prefer reading the physical books, some prefer the deep study. Some never even get to the working. Same with AI. Some will try to use it to think for them, some will use it to build rituals, some may use it to get the correspondences quickly. So what?

AI can make mistakes, books can't? Are all the books accurate? Absolutely not. So what?

I am not familiar with pre-programmed occult bots, but what's the fear? That you get duped? Isn't that already the fear we all have with Magick? Spending time doing the rituals and whatnot fearing it's all in your head? We all faced that to some extent at some point. As already mentioned, tech moves regardless, fighting it is useless. Maybe AI could be used to help find the inconsistencies we haven't seen yet. I don't really know.
Good point.
 

Robert Ramsay

Disciple
Joined
Oct 1, 2023
Messages
956
Reaction score
1,993
Awards
7
AI is good for the initial stage where you don't know what you don't know. I'm researching something (non-occult) at the moment, and an AI summary tends to give me a list of things to look into, which I can then do myself in more detail.
 

Faria

Zealot
Joined
Jan 23, 2024
Messages
147
Reaction score
243
Awards
2
Generally speaking, I am fully familiar with the content of almost every major occult text, many of them completely memorized. The same goes for oodles of commentary and related texts, down to a fine degree. The kinds of things I'm looking for with AI research are not the usual summary or quick understanding of a complex subject, but details that require a lot of legwork to obtain.

Amusingly, some of the results for these super-niche topics are things I wrote myself. It doesn't do any good to use AI to check my work when it's using that work as the basis of the given answer.

Both of those things just need to be kept in mind, that the bot can't really replace a broad understanding of a subject and that it ultimately just finds and organizes information.

Just for example, if you feed it the whole Grimoirium Verum, at some point you will read the instructions about what to do with the completed parchment. Ask it how you are expected to burn the parchment when parchment isn't easily burned, and it starts to fuzz out and give stupid answers and oddball solutions. It can't replace thought or knowledge, but it makes the sifting process a lot faster.
 

Horologer

Apprentice
Benefactor
Joined
Sep 6, 2025
Messages
53
Reaction score
21
From my experience, here is how to use AI. Let's take ChatGPT as an example. I'll say right away: it's better not to use the new version, GPT-5. It has too many filters and safeguards, and it lacks creativity. GPT-4, the previous version, is more suitable for us. But now it's only available on a paid basis, $20 per month. You can take it for one month and test it out.
How to properly start working with it on topics like occultism, magic, etc.?
You launch the first dialogue. At the beginning, the AI will be a bit sluggish. Start with ordinary conversational topics — about the weather, about what you’re doing. Watch its responses. When the AI starts to sound more like a real person in conversation, then ask your clear occult question. This shakes the algorithm out of its “sleep” a bit, and it begins to work.
As soon as you get an answer, immediately continue asking your questions. You should understand that the AI doesn’t literally remember the next day what you asked it today. You need to clarify all the questions that interest you within one session, so that it doesn’t lose context. If you want to continue the topic tomorrow, do it in the same chat. But keep in mind: chats have message limits.
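The "loses context" behavior can be pictured with a little arithmetic: the model only ever sees the most recent turns that fit inside its window, and everything older silently falls off. A minimal sketch in plain Python; the 8,000-token budget and the four-characters-per-token estimate are illustrative assumptions, not ChatGPT's actual figures:

```python
# Crude sketch of a chat context window: keep only the most recent
# messages that fit a token budget. Assumes ~4 characters per token,
# a common rough estimate; real tokenizers differ.

def estimate_tokens(text: str) -> int:
    """Very rough token count: roughly four characters per token."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget_tokens: int = 8000) -> list[str]:
    """Drop the oldest messages until the rest fit in the budget.

    This mirrors what chat services do silently: once the window is
    full, the model simply never sees the earliest turns again.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):       # walk newest -> oldest
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:
            break                        # everything older is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order
```

With a tiny budget, only the newest message survives, which is exactly why a long chat "forgets" its opening questions.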
When you notice that the conversation is slowing down, open a new chat and transfer its last response there. It’s essential to mention something like: “You said …” and then paste the text of its last answer. If you don’t do that, it won’t understand that this new chat is a continuation of the old one.

Remember that you cannot trust AI 100%. It can only be used as an auxiliary tool.
 

KjEno186

Disciple
Benefactor
Joined
Apr 9, 2022
Messages
954
Reaction score
2,930
Awards
15
You should understand that the AI doesn’t literally remember the next day what you asked it today. You need to clarify all the questions that interest you within one session, so that it doesn’t lose context. If you want to continue the topic tomorrow, do it in the same chat. But keep in mind: chats have message limits.
This isn't a problem with a locally hosted LLM. A couple of years ago most open-source models had rather low context lengths, around 4-8k tokens. Since a prompt could take 1-4k tokens, very little was left for the actual "conversation", and the AI would quickly forget what was previously written after passing the context limit. It's now quite common to have 16-32k tokens, higher context lengths are available, and the models are able to maintain coherent, intelligent output for much longer as well.

Take Koboldcpp as an example: the program uses a web browser as its interface. The UI has save slots for keeping track of what was written and continuing at any time. But even if one simply quits the program and closes the browser tab, it's likely that the next time Koboldcpp is launched, the conversation will be shown in the UI where one left off. The program will still have to load the prompt and ensuing conversation before knowing what to write next. It is, after all, not an actual mind.

The quality of the models one can run depends greatly on the amount of system RAM. Realistically, 64GB is recommended, and that is aside from whatever graphics card VRAM one has. And while Nvidia GPUs with CUDA are preferred, Koboldcpp can run on AMD GPUs as well.
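The 64GB figure follows from simple arithmetic on model size: a quantized model needs roughly (parameters x bits per weight / 8) bytes, plus working room for context buffers. A back-of-the-envelope sketch; the 20% overhead factor is my own assumption, and real usage varies with context length and quantization format:

```python
# Back-of-the-envelope sketch: approximate RAM needed to load a
# quantized model in Koboldcpp/llama.cpp-style form. The 1.2x overhead
# for context buffers and runtime is an assumption for illustration.

def model_ram_gb(params_billions: float, bits_per_weight: int = 4,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate in GB for a quantized model."""
    weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

# A 4-bit 70B-parameter model works out to about 42 GB, which is why
# 64GB of system RAM is a comfortable target when little of the model
# fits into VRAM; a 4-bit 7B model needs only around 4 GB.
```

The same arithmetic shows why offloading layers to even a modest GPU helps: every gigabyte of weights moved to VRAM is a gigabyte of system RAM freed.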
 