Adding Word Frequencies Across Multiple Text Files

I have four text files and have already calculated the individual word frequencies for each of them. Now I want to combine those frequencies: if "kill" appears 10 times in File1, I will look up "kill" in the other files too and add those counts to its running total. So, comparing all four files and producing one single file in the end.
The complication is that matching words appear at different indexes in each frequency list: "kill" might be the 10th entry in one file's list and the 100th in another, yet the total frequency of "kill" still has to be summed across files. In short: if "kill" appears 10 times in File1 and 4 times in File2, I need 14 against "kill", and the same for every other word.
Right now, this is how I calculate the word frequencies of a single text file:
fid = fopen('File.txt');
words = textscan(fid, '%s');
status = fclose(fid);
Then the unique words and their frequencies are calculated:
unique_words = unique(words{1,1});
frequencies = zeros(numel(unique_words), 1);
for i = 1:numel(unique_words)
    if max(unique_words{i} ~= ' ')              % skip entries that are all spaces
        for j = 1:numel(words{1,1})
            if strcmp(words{1,1}(j), unique_words{i})
                frequencies(i) = frequencies(i) + 1;
            end
        end
    end
end
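(For reference, a sketch of how the same per-file counts can be computed without the nested loop, using the index output of `unique` together with `accumarray`; 'File.txt' stands in for any of the four files:)

```matlab
fid = fopen('File.txt');
words = textscan(fid, '%s');
fclose(fid);

% The third output of unique maps every word to the row of its unique entry,
% so accumarray can tally the occurrences of each index in one pass.
[unique_words, ~, idx] = unique(words{1,1});
frequencies = accumarray(idx, 1);
```

This gives the same `unique_words` / `frequencies` pair, but in roughly linear time instead of quadratic.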
Can anyone guide me?
Thanks a lot.

6 Comments

Why don't you simply accumulate the total while you compute the frequencies from the individual files?
Or, if the files are processed at different times and you need to retain values across sessions, how about keeping another file that holds the running totals, so you can simply query and update it instead of re-reading all the files?
No, actually I need to add the frequencies of matching words even though they appear at different indexes: "kill" might be at index 10 in one file's frequency list and at index 100 in another, and the total frequency of "kill" still has to be calculated. That is how I need it to work.
Can anyone guide me?
Thanks a lot.
What format are you storing the information in?
I am using MATLAB, where I calculate the unique words as well as their frequencies; the results are in matrix form.
The frequencies are calculated like this:
for i = 1:numel(unique_words)
    if max(unique_words{i} ~= ' ')              % skip entries that are all spaces
        for j = 1:numel(words{1,1})
            if strcmp(words{1,1}(j), unique_words{i})
                frequencies(i) = frequencies(i) + 1;
            end
        end
    end
end
This is for one file, right? Now suppose a word appears 10 times in one file and 4 times in another; its total frequency should be 14. I want to calculate this for every word.
I still don't see any argument against the secondary file: create it and store each word (or, better yet, a hash of the word for quicker lookup) together with the accumulated frequency over the file(s) processed so far.
If you have the Statistics Toolbox, there's the dataset object, which has a lot of this functionality built in; it's basically a structure with named fields plus some higher-level conveniences, so that's one fairly simple route within MATLAB.
As Walter asks, knowing specifically how the data is currently stored, plus a better view of how you're doing the processing, would help: not the (rather trivial) counting within a given file, but the larger picture of how you get the files, which ones are processed at any given time, how you know when another update is needed, and so on.
To reiterate: why not a central database for the results? It seems as though it would simplify life significantly.
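(A sketch of this "central accumulator" idea using a `containers.Map` keyed by word, updated as each file is processed; the file names are placeholders:)

```matlab
files = {'File1.txt', 'File2.txt', 'File3.txt', 'File4.txt'};
total = containers.Map('KeyType', 'char', 'ValueType', 'double');

for k = 1:numel(files)
    fid = fopen(files{k});
    words = textscan(fid, '%s');
    fclose(fid);
    for w = words{1,1}.'            % iterate over the words in this file
        key = w{1};
        if isKey(total, key)
            total(key) = total(key) + 1;   % word seen before: bump its count
        else
            total(key) = 1;                % first sighting of this word
        end
    end
end

% total('kill') would now hold the combined count across all four files,
% regardless of where 'kill' sits in any per-file list.
```

Because the map is keyed by the word itself rather than by position, the index-mismatch problem disappears entirely.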


Answers (2)

You cannot calculate the joint frequencies with the information you are currently storing in the files.
In order to calculate the joint frequencies, you also need to store which word each count corresponds to.
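(A sketch of this point: once each file's results keep the words alongside the counts, the per-file tables can be merged by word. The variable names and sample data here are illustrative:)

```matlab
% Per-file results: a cell array of words and a matching count vector.
words1 = {'kill'; 'run'};    freq1 = [10; 3];   % results from file 1
words2 = {'hide'; 'kill'};   freq2 = [7; 4];    % results from file 2

% Build the union of all words, then fold each file's counts into it.
all_words = unique([words1; words2]);
total = zeros(numel(all_words), 1);

[tf, loc] = ismember(words1, all_words);
total(loc(tf)) = total(loc(tf)) + freq1(tf);

[tf, loc] = ismember(words2, all_words);
total(loc(tf)) = total(loc(tf)) + freq2(tf);

% 'kill' now carries a combined count of 14 (10 from file 1 + 4 from file 2).
```

Without the word labels, two count vectors cannot be aligned, since the same word generally sits at different rows in each file's output.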

Asked on 31 Dec 2013
Answered on 30 Apr 2018
