http://dbd.game/killswitch
ChatGPT says survivors bring map offerings more often than killers.
Comments
-
why the ######### would i care what an ai says about dbd, respectfully.
17 -
Because it’s true statistically. And BHVR doesn’t play their own game, and asking AI about it proves that
-17 -
how tf is it true statistically?
How would the AI have any access to that sort of data?
10 -
I asked ChatGPT what the strongest chase build was, and it said:
"Balanced Landing, Sprint Burst, Quick and Quiet and Dance with Me"
Additionally, it told me the best genrushing build:
Hyperfocus
Stake Out
Prove Thyself
Fast Track (Yoichi Asakawa)
yeah this thing aint stealin our jobs any time soon
11 -
the rising number of people who take whatever AI shits out as fact is deeply, deeply concerning
do yourself a favor and please get off the internet. and furthermore educate yourself and practice critical thinking skills.
10 -
It is super depressing that somebody actually believes this
7 -
Because it's true statistically.
ChatGPT is a Generative AI Language Model.
If you don't know what that means: ChatGPT only predicts which word is most likely to come next and inserts the most likely result, often relying on resources and information found on the internet as training data.
If the topic is Dead By Daylight, it will use Dead By Daylight content as a source and use it to compute the likelihood of each word to generate a response. To put it bluntly, it does not understand what DBD is, it's not trained to, it's just calculating which word fits best next, which can be subject to bias based on what it uses as a source.
So no… it's not statistically right. It does not even use statistics to come to a solid conclusion; it just uses statistics to predict which words look the nicest on a screen so it can get virtual points that internally reward the AI for doing a thing "correctly".
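To make the "just predicting the next word" point concrete, here's a toy sketch. This is nothing like ChatGPT's actual internals (which use neural networks, not raw word counts); it's a made-up bigram model on a made-up sentence, purely to show that picking the statistically most frequent next word involves no understanding of the content:

```python
# Toy illustration only: a bigram "next word predictor" built from counts.
# It has no idea what a killer, survivor, or pallet is.
from collections import Counter, defaultdict

training_text = "the killer chases the survivor the survivor drops a pallet"

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent follower; pure statistics, zero meaning.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "survivor" (follows "the" 2 times out of 3)
```

Scale that basic idea up by a few billion parameters and a smarter model architecture and you get something that sounds fluent, but the fluency still comes from pattern statistics, not from playing the game.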
And BHVR doesn't play their own game and asking AI about it proves it.
- BHVR does play their own game. They played against many different content creators to showcase 2v8 and have many times played their game in front of other people. As to whether they are skillful or not is subjective, but they definitely do play their own game.
- As mentioned before, asking ChatGPT is nothing to go by.
Dear god, is this really what the internet is becoming? Istg if I see another "I used AI to determine (thing)" I'm going to lose my mind… "Critical thinking is dead." "@Grok what does this mean?" "Context?" AAAAAAAAHHHHHHHHHHH.
7 -
If BHVR so much as plays their own game, then why did Chucky get nerfed to the ground?? I'm sure if I asked ChatGPT to rework Chucky the right way that he should be, ChatGPT would rework him better than what BHVR could do
0 -
lmao.
That's really not a very well-reasoned answer. Maybe you should ask ChatGPT for a better one.
0 -
Because Killers can be strong or weak but still have issues in design that can be addressed.
Original Legion was AWFUL, but they had an uncounterable 1v1 that resulted in them getting nerfed and eventually reworked. Chucky is a far cry from Legion's balancing issues, but he brought several issues in core design, and the developers felt the need to adjust him because of said issues. Now, you can agree/disagree with the changes, but BHVR does play their own game and does make difficult decisions with the intent of trying to keep the game intact.
0 -
I agree that current AI models can't provide authoritative information about nearly any domain of knowledge. ChatGPT definitely makes mistakes when it's prompted with queries from my field of expertise. And it doesn't seem to have enough clear information from its training data to provide expert knowledge about DBD.
But isn't this a bit of an oversimplification of how current AI models approach responses to prompts?
My general understanding is that current generation LLMs are presented with training data which they classify by detecting properties of different entries in the training data. That's done by application of a complex set of filters which are used to produce a set of values in an 'n-element' vector for each entry in the training data. The filters use a variety of processes to probe different types of patterns in their training set. After the training process, the resulting vector representation of the training data tells how the different elements of the training data are correlated. I imagine the training process is also designed to categorize individual elements in parts of its training data (i.e. words and letters), but I haven't looked under the hood so to speak.
Nevertheless, when a user provides a prompt, the model then uses a similar process to classify ('understand') the prompt in the context of the information it has been trained on. Then it structures a reply based on how the prompt correlates with its classification of the training data. If I understand correctly, there are some papers published in recent months that show current models are designed to generate a context for each response prior to performing the generation of each word in its reply. That is, the AI determines a set of parameter values to define a broad structure of the theme it will respond with. Then it begins the process of determining which words lead to a representation of the parameters it has calculated that represent the broad structure it will reply with. In that process, certain predefined parameters are optimized which instruct the AI to perform in a certain way. Those parameters are used to inform the type of reply the AI provides in the context of its classification of the prompt with respect to the training data.
That suggests that such models are doing more than determining the statistically likely next word. Instead, they create a large database of relationships between different entries in the training data to determine what things are related to other things in different ways. Then they're designed to place each prompt into the context of the training data and structure a reply that most likely represents the optimal response to the prompt, given the correlation of the training data to the prompt and the way that text is statistically ordered. In that sense, we can think of it like autocomplete, but the processes used to do that are much more complex than just determining the next word based on statistics.
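The "vector representation tells how entries are correlated" part above can be sketched with a toy example. The vectors and words below are entirely made up for illustration; real models learn high-dimensional embeddings during training rather than having them written by hand. A common way to read relatedness off such vectors is cosine similarity:

```python
# Hedged sketch: reading "relatedness" off toy vector representations.
# These 3-element vectors are invented; real embeddings have thousands
# of dimensions and are learned, not hand-assigned.
import math

embeddings = {
    "killer":   [0.9, 0.1, 0.3],
    "survivor": [0.8, 0.2, 0.4],
    "pallet":   [0.1, 0.9, 0.2],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With these made-up vectors, "killer" lands closer to "survivor"
# than to "pallet", i.e. the model would treat them as more related.
print(cosine(embeddings["killer"], embeddings["survivor"]))
print(cosine(embeddings["killer"], embeddings["pallet"]))
```

So "correlation in the training data" really is just geometry over learned vectors like these, which is why the model can appear to "know" that two things go together without understanding either one.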
Correct me if I have that wrong; I'm not an expert in the field :D
Either way, I agree with you. ChatGPT is not an authority on DBD...
2 -
AI doesn't have access to that data. Its information is based on what it gathers from internet users, not from the game itself.
0 -
But isn't this a bit of an oversimplification of how current AI models approach responses to prompts?
Yes, yes it is.
But keep in mind that if I provided an insanely technical explanation most people would either fall asleep or just not read it, OR in the case of OP, probably shove it into an AI and ask it to explain it in simpler terms. I kind of just have to dumb things down a bit as a result, as I mentioned before, critical thinking is dead… actually thinking as a whole is dead, just shove it into an AI and let the computer think for you. :(
1 -
lol That's fair. "ChatGPT! Please tell me what I think!"
1 -
AI will be the downfall of humanity. We are approaching WALL-E society at an alarming rate.
0 -
An AI known for making up random crap whenever it feels like it, with zero access to DBD’s in-match data, should not be how you form your opinion on this game lol
0 -
Chucky didn't get nerfed to the ground; he got balanced after releasing overtuned
1
