OK, so I have looked at a couple of different questions on here, but I haven't been able to find anything that solves this problem. I split up 303 lines with 13 rows in them between healthy patients and sick patients. I was able to get the averages of both, but now I need to get the median of those two averages (to make things clear, this is what the output should look like):
Averages of Healthy Patients:
[52.59, 0.56, 2.79, 129.25, 242.64, 0.14, 0.84, 158.38, 0.14, 0.59, 1.41, 0.27, 3.77, 0.00]
Averages of Ill Patients:
[56.63, 0.82, 3.59, 134.57, 251.47, 0.16, 1.17, 139.26, 0.55, 1.57, 1.83, 1.13, 5.80, 2.04]
Separation Values are:
[54.61, 0.69, 3.19, 131.91, 247.06, 0.15, 1.00, 148.82, 0.34, 1.08, 1.62, 0.70, 4.79, 1.02]
I have tried different methods to get the median, but I have failed on all my attempts, so I've officially run out of ideas. If you could look and see whether I was on the right track and just missed something small, or am completely way off, I would appreciate any insight on this problem.
ill_avg = [ill / len(iList) for ill in iList_sum]
hlt_avg = [ hlt / len(hList) for hlt in hList_sum]
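# note: the next line is my failing attempt; it floor-divides a generator
# expression by 2, which raises a TypeError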
median = [(b / len(bList) for b in bList_sum) //2 ]
print('Total of lines Processed: ' + str(numline))
print("Total Healthy Count: " + str(HPcounter))
print("Total Ill Count: " + str(IPcounter))
print("Averages of Healthy Patients:")
print(str(hlt_avg))
print("Averages of Ill Patients ")
print('[' + ', '.join(['{:.2f}'.format(number) for number in ill_avg]) + ']')
print("Seperation Values are:")
print(median)
I tried to get the median by adding both averages, but I couldn't get it to work; my latest try was to make a single average (bList, which is all patients) and get the median of that. If I can make the first way work without bList, I would prefer it, since that would make the code less redundant and hopefully smaller.
I apologize, I forgot to mention that I am not supposed to use numpy or pandas, since we have not gone over those two in class yet.
Use numpy:
import numpy
a = numpy.array([[52.59, 0.56, 2.79, 129.25, 242.64, 0.14, 0.84, 158.38, 0.14, 0.59, 1.41, 0.27, 3.77, 0.00],
[56.63, 0.82, 3.59, 134.57, 251.47, 0.16, 1.17, 139.26, 0.55, 1.57, 1.83, 1.13, 5.80, 2.04]])
print(numpy.mean(a, axis=0))
or use pure Python if you have to avoid numpy:
def mean(a):
    return sum(a) / len(a)
a = [[52.59, 0.56, 2.79, 129.25, 242.64, 0.14, 0.84, 158.38, 0.14, 0.59, 1.41, 0.27, 3.77, 0.00],
[56.63, 0.82, 3.59, 134.57, 251.47, 0.16, 1.17, 139.26, 0.55, 1.57, 1.83, 1.13, 5.80, 2.04]]
print(list(map(mean, zip(*a))))
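For the exact case above (no numpy allowed), the separation values are just the element-wise midpoint of the two average lists, which zip makes straightforward. A minimal sketch, assuming hlt_avg and ill_avg have been computed as in the question:
# element-wise midpoint of the two average lists
sep_values = [(h + i) / 2 for h, i in zip(hlt_avg, ill_avg)]
print('[' + ', '.join('{:.2f}'.format(v) for v in sep_values) + ']')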
I have data as the real and imaginary parts of complex numbers, and I want to fit them with a complex function. More in detail, they are data from electrochemical impedance spectroscopy (EIS) experiments, and the function comes from an equivalent circuit.
I am using Octave 7.2.0 in a Windows 10 computer. I need to use the leasqr algorithm, present in the Optim package. The leasqr used the Levenberg-Marquardt nonlinear regression, typical in EIS data fitting.
Regarding the data, xdata is the linear frequency and ydata is ReZ + j*ImZ.
If I try to fit the complex data with the complex fitting function, I get the following error:
error: weighted residuals are not real
error: called from
__lm_svd__ at line 147 column 20
leasqr at line 662 column 25
Code_for_StackOverflow at line 47 column 73
I tried to fit the real part of the data with the real part of the fitting function, and the imaginary parts of the data, with the imaginary part of the function. The fits are successfully performed, but I have two sets of fitted parameters, while I need only one set.
Here is the code I wrote.
clear -a;
clf;
clc;
pkg load optim;
pkg load symbolic;
Linear_freq = [1051.432, 394.2871, 112.6535, 39.42871, 11.59668, 3.458659, 1.065641, 0.3258571, 0.1000221];
ReZ = [84.10412, 102.0962, 178.8031, 283.0663, 366.7088, 431.3653, 514.4105, 650.5853, 895.9588];
MinusImZ = [27.84804, 59.56786, 116.5972, 123.2293, 102.6806, 117.4836, 178.1147, 306.256, 551.2337];
Z = [88.5946744, 118.2030626, 213.4606653, 308.7264008, 380.8131426, 447.0776424, 544.3739605, 719.0646495, 1051.950932];
MinusPhase = [18.32042302, 30.26135402, 33.1083029, 23.52528583, 15.64255593, 15.23515301, 19.09841797, 25.2082044, 31.60167787];
ImZ = -MinusImZ;
Angular_freq = 2*pi*Linear_freq;
xdata = Angular_freq;
ydata = ReZ + j*ImZ;
Fitting_Function = @(xdata, p) (p(1) + ((p(2) + (1./(p(3)*(j*xdata).^0.5))).^-1 + (1./(p(4)*(j*xdata).^p(5))).^-1).^-1);
p = [80, 300, 6.63E-3, 5E-5, 0.8]; # Initial guess; true parameter values, obtained with dedicated software: 76, 283, 1.63E-3, 1.5E-5, 0.876
options.fract_prec = [0.0005, 0.0005, 0.0005, 0.0005, 0.0005].';
niter=400;
tol=1E-12;
dFdp="dfdp";
dp=1E-9*ones(size(p));
wt = abs(sqrt(ydata).^-1);
#[Fitted_Parameters_ReZ pfit_real cvg_real iter_real corp_real covp_real covr_real stdresid_real z_real r2_real] = leasqr(xdata, ReZ, p, Fitting_Function_ReZ, tol, niter, wt, dp, dFdp, options);
#[Fitted_Parameters_ImZ pfit_imag cvg_imag iter_imag corp_imag covp_imag covr_imag stdresid_imag z_imag r2_imag] = leasqr(xdata, ImZ, p, Fitting_Function_ImZ, tol, niter, wt, dp, dFdp, options);
[Fitted_Parameters pfit cvg iter corp covp covr stdresid z r2] = leasqr(xdata, ydata, p, Fitting_Function, tol, niter, wt, dp, dFdp, options);
#########################################################################
# Calculate the fitted functions, with the fitted parameters array
#########################################################################
Fitted_Function_Real = real(pfit_real(1) + ((pfit_real(2) + (1./(pfit_real(3)*(j*xdata).^0.5))).^-1 + (1./(pfit_real(4)*(j*xdata).^pfit_real(5))).^-1).^-1);
Fitted_Function_Imag = imag(pfit_imag(1) + ((pfit_imag(2) + (1./(pfit_imag(3)*(j*xdata).^0.5))).^-1 + (1./(pfit_imag(4)*(j*xdata).^pfit_imag(5))).^-1).^-1);
Fitted_Function = Fitted_Function_Real + j.*Fitted_Function_Imag;
Fitted_Function_Mod = abs(Fitted_Function);
Fitted_Function_Phase = (-(angle(Fitted_Function))*(180./pi));
################################################################################
# Calculate the residuals, from https://iopscience.iop.org/article/10.1149/1.2044210
# An optimum fit is obtained when the residuals are spread randomly around the log Omega axis.
# When the residuals show a systematic deviation from the horizontal axis, e.g., by forming
# a "trace" around, above, or below the log co axis, the complex nonlinear least squares (CNLS) fit is not adequate.
################################################################################
Residuals_Real = (ReZ-Fitted_Function_Real)./Fitted_Function_Mod;
Residuals_Imag = (ImZ-Fitted_Function_Imag)./Fitted_Function_Mod;
################################################################################
# Calculate the reduced chi-squared value, with the fitted parameters array (NOVA manual page 452)
################################################################################
chi_squared_ReZ = sum(((ReZ-Fitted_Function_Real).^2)./Z.^2)
chi_squared_ImZ = sum(((ImZ-Fitted_Function_Imag).^2)./Z.^2)
Pseudo_chi_squared = sum((((ReZ-Fitted_Function_Real).^2)+((ImZ-Fitted_Function_Imag).^2))./Z.^2)
disp('The values of the parameters after the fit of the real function are '), disp(pfit_real);
disp('The values of the parameters after the fit of the imaginary function are '), disp(pfit_imag);
disp("R^2, the coefficient of multiple determination, intercept form (not suitable for non-real residuals) is "), disp(r2_real), disp(r2_imag);
###################################################
## PLOT Data and the Function
###################################################
#Set plot parameters
set(0, "defaultlinelinewidth", 1);
set(0, "defaulttextfontname", "Verdana");
set(0, "defaulttextfontsize", 20);
set(0, "DefaultAxesFontName", "Verdana");
set(0, 'DefaultAxesFontSize', 12);
figure(1);
## Nyquist plot (Argand diagram)
subplot(1,2,1, "align");
plot((ReZ), (MinusImZ), "o", "markersize", 2, (Fitted_Function_Real), -(Fitted_Function_Imag), "-k");
axis ("square");
grid on;
daspect([1 1 2]);
title ('Nyquist Plot - Argand Diagram');
xlabel ('Z'' / \Omega' , 'interpreter', 'tex');
ylabel ('-Z'''' / \Omega', 'interpreter', 'tex');
## Bode Modulus
subplot (2, 2, 2);
loglog((Linear_freq), (Z), "o", "markersize", 2, (Linear_freq), (Fitted_Function_Mod), "-k");
grid on;
title ('Bode Plot - Modulus');
xlabel ('\nu (Hz)' , 'interpreter', 'tex');
ylabel ('|Z| / \Omega', 'interpreter', 'tex');
## Bode Phase
subplot (2, 2, 4);
semilogx((Linear_freq), (MinusPhase), "o", "markersize", 2, (Linear_freq), (Fitted_Function_Phase), "-k");
set(gca,'YTick',0:10:90);
grid on;
title ('Bode Plot - Phase');
xlabel ('\nu (Hz)' , 'interpreter', 'tex');
ylabel ('-\theta (°)', 'interpreter', 'tex');
figure(2)
## Bode Z'
subplot (2, 1, 1);
semilogx((Linear_freq), (ReZ), "o", "markersize", 2, (Linear_freq), (Fitted_Function_Real), "-k");
grid on;
title ('Bode Plot Z''');
xlabel ('\nu (Hz)' , 'interpreter', 'tex');
ylabel ('Z'' / \Omega', 'interpreter', 'tex');
## Bode -Z''
subplot (2, 1, 2);
#subplot (2, 2, 4);
semilogx((Linear_freq), (MinusImZ), "o", "markersize", 2, (Linear_freq), -(Fitted_Function_Imag), "-k");
grid on;
title ('Bode Plot -Z''''');
xlabel ('\nu (Hz)' , 'interpreter', 'tex');
ylabel ('-Z'''' / \Omega', 'interpreter', 'tex');
figure(3)
## Residuals Real
subplot (2, 1, 1);
semilogx((Angular_freq), (Residuals_Real), "-o", "markersize", 2);
grid on;
title ('Residuals Real');
xlabel ('\omega (rad/s)' , 'interpreter', 'tex');
ylabel ('\Delta_{re} / \Omega', 'interpreter', 'tex');
## Residuals Imaginary
subplot (2, 1, 2);
#subplot (2, 2, 4);
semilogx((Angular_freq), (Residuals_Imag), "-o", "markersize", 2);
grid on;
title ('Residuals Imaginary');
xlabel ('\omega (rad/s)' , 'interpreter', 'tex');
ylabel ('\Delta_{im} / \Omega', 'interpreter', 'tex');
Octave should be able to handle complex numbers. What am I doing wrong?
I was thinking of fitting the real part of the data with the real part of the fitting function, and then using the Kramers-Kronig relations to get the imaginary part of the fitted function, but I would like to avoid this method if possible.
Any help would be greatly appreciated, thanks in advance.
From your data, drawing the complex impedance diagram reveals a rather common shape that can be modeled with a lot of equivalent circuits:
Reference : https://fr.scribd.com/doc/71923015/The-Phasance-Concept
You chose model n°2, probably according to some physical considerations. That is not the subject to be discussed here.
Also, according to physical considerations and/or graphical inspection, you correctly assumed that one phasance is of the Warburg kind (Phi = -pi/4; nu = -1/2).
The problem is to fit an equation with five adjustable parameters. This is a difficult problem of nonlinear regression with a complex equation. The usual method consists of an iterative process starting from "guessed values" of the five parameters.
The guessed values have to be not far from the unknown correct values. One can find approximations from graphical inspection of the impedance diagram. Often this is a cause of failure of convergence of the iterative process.
A more reliable method consists of combining linear regression with respect to most of the parameters and nonlinear regression with respect to only a few of them.
In the present case, it is shown below that the nonlinear regression can be reduced to only one parameter, while the other parameters can be handled by simple linear regression. This is a big simplification.
A software package for mixed linear and nonlinear regression (in cases involving several phasors) was developed in the years 1980-1990. Unfortunately, I have no access to it at present.
Nevertheless, in the present case of one phasor only, we don't need a sledgehammer to crack a nut: the Newton-Raphson method is sufficient. Graphical inspection gives a rough approximation of nu between -0.7 and -0.8. The chosen initial value is nu = -0.75, giving the first run:
Since all calculations are carried out in complex numbers, the resulting values are complex instead of real as expected. They are noted ZR1, ZR2, ZP1, ZP2 to distinguish them from the real R1, R2, P1, P2. This is because the value of nu isn't yet optimal.
The more nu converges to the final value, the more the imaginary parts vanish. After a few runs of the Newton-Raphson process the imaginary parts become quite negligible. The final result is shown below.
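For completeness, here is a minimal sketch (in Python/SciPy rather than Octave, and using the standard residual-stacking trick rather than the mixed regression described above) of how a single parameter set can be fitted to complex data: returning the real and imaginary residuals as one real vector avoids the "weighted residuals are not real" error.
# Sketch only: fit the question's equivalent circuit with one parameter set
# by stacking real and imaginary residuals into a single real vector.
import numpy as np
from scipy.optimize import least_squares

def z_model(w, p):
    # Same equivalent circuit as the question's Fitting_Function (5 parameters).
    jw = 1j * w
    branch = p[1] + 1.0 / (p[2] * jw**0.5)
    return p[0] + 1.0 / (1.0 / branch + p[3] * jw**p[4])

def residuals(p, w, z):
    r = z_model(w, p) - z
    return np.concatenate([r.real, r.imag])  # real-valued, as the solver requires

# w = 2*pi*Linear_freq and z = ReZ + 1j*ImZ, built from the arrays in the question:
# fit = least_squares(residuals, [80, 300, 6.63e-3, 5e-5, 0.8], args=(w, z))
# print(fit.x)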
Publications:
"Contribution à l'interprétation de certaines mesures d'impédances". 2-ième Forum sur les Impédances Electrochimiques, 28-29 octobre 1987.
"Calcul de réseaux électriques équivalents à partir de mesures d'impédances". 3-ième Forum sur les Impédances Electrochimiques, 24 novembre 1988.
"Synthèse de circuits électriques équivalents à partir de mesures d'impédances complexes". 5-ième Forum sur les Impédances Electrochimiques, 28 novembre 1991.
I am trying to train a model using YOLOv5.
I have the issue of the dataset not being found.
I have train, test, and valid folders that contain all the image and label files.
I have tested the files on Google Colab and it does work. However, on my local machine it shows the issue Exception: Dataset not found.
(Yolo_5) D:\\YOLO_V_5\Yolo_V5\yolov5>python train.py --img 416 --batch 8 --epochs 100 --data /data.yaml --cfg models/yolov5s.yaml --weights '' --name yolov5s_results --cache
Using torch 1.7.0 CUDA:0 (GeForce GTX 1080, 8192MB)
Namespace(adam=False, batch_size=8, bucket='', cache_images=True, cfg='models/yolov5s.yaml', data='.\\data.yaml', device='', epochs=100, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[416, 416], local_rank=-1, log_imgs=16, multi_scale=False, name='yolov5s_results', noautoanchor=False, nosave=False, notest=False, project='runs/train', rect=False, resume=False, save_dir='runs\\train\\yolov5s_results55', single_cls=False, sync_bn=False, total_batch_size=8, weights="''", workers=16, world_size=1)
Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/
Hyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'anchors': 3, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0}
WARNING: Dataset not found, nonexistent paths: ['D:\\me1eye\\Yolo_V5\\valid\\images']
Traceback (most recent call last):
File "train.py", line 501, in <module>
train(hyp, opt, device, tb_writer, wandb)
File "train.py", line 78, in train
check_dataset(data_dict) # check
File "D:\me1eye\YOLO_V_5\Yolo_V5\yolov5\utils\general.py", line 92, in check_dataset
raise Exception('Dataset not found.')
Exception: Dataset not found.
Internal process exited
(Olive_Yolo_5) D:\me1eye\YOLO_V_5\Yolo_V5\yolov5>
There is a much simpler solution. Just go into data.yaml wherever you saved it and change the relative paths to absolute - i.e. just write the whole path! e.g.
train: C:\hazlab\BCCD\train\images
val: C:\hazlab\BCCD\valid\images
nc: 3
names: ['Platelets', 'RBC', 'WBC']
Job done - note, as you are on Windows, there is a known issue in the invocation of train.py - do not use quotes on the file names in the CLI, e.g.
!python train.py --img 416 --batch 16 --epochs 100 --data C:\hazlab\BCCD\data.yaml --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache
Well! I have also encountered this problem and now I have fixed it.
All you have to do is keep train, test, and validation (the three folders containing images and labels) together with the yolov5 folder (cloned from GitHub) in the same directory. Also, the data.yaml file has to be inside the yolov5 folder.
The command to train the model would then be:
!python train.py --img 416 --batch 16 --epochs 10 --data ./data.yaml --cfg ./models/yolov5m.yaml --weights '' --name yolov5m_results
The issue is due to the actual dataset path not being found. I faced the same issue when I trained a YOLOv5 model on a custom dataset using Google Colab, and I did the following to resolve it.
Make sure you provide the correct path to the dataset's data.yaml.
Make sure the dataset paths inside data.yaml are correct.
The train, test, and valid keys should contain paths relative to the main path of the dataset.
An example data.yaml file is given below.
path: /content/drive/MyDrive/car-detection-dataset
train: train/images
val: valid/images
test: test/images
nc: 1
names: ['car']
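To catch this before launching training, a quick sanity check along these lines (a sketch; assumes PyYAML is installed and data.yaml is in the current directory) prints which of the configured paths actually exist:
# Sketch: verify the paths in a YOLOv5 data.yaml before training.
import os
import yaml  # PyYAML

with open("data.yaml") as f:
    cfg = yaml.safe_load(f)

root = cfg.get("path", "")  # optional dataset root key
for key in ("train", "val", "test"):
    if key in cfg:
        p = os.path.join(root, cfg[key])
        print(key, p, "OK" if os.path.isdir(p) else "MISSING")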
I am trying to fit a curve to a set of data points but would like to preserve certain characteristics.
Like in this graph, I have curves that almost end up being linear and some that do not. I need a functional form to interpolate between the given data points and to extrapolate past the last given point.
The curves have been created using a simple regression with
import numpy as np

def func(x, d, b, c):
    return c + b * np.sqrt(x) + d * x
My question now is: what is the best approach to ensure a positive slope past the last data point(s)? In my application, a decrease in costs while increasing the volume doesn't make sense, even if the data says so.
I would like to keep the order as low as possible; maybe ^3 would still be fine.
The data used to create the curve with the negative slope is
x_data = [ 100, 560, 791, 1117, 1576, 2225,
3141, 4434, 6258, 8834, 12470, 17603,
24848, 35075, 49511, 69889, 98654, 139258,
196573, 277479, 391684, 552893, 780453, 1101672,
1555099, 2195148, 3098628, 4373963, 6174201, 8715381,
12302462, 17365915]
y_data = [ 7, 8, 9, 10, 11, 12, 14, 16, 21, 27, 32, 30, 31,
38, 49, 65, 86, 108, 130, 156, 183, 211, 240, 272, 307, 346,
389, 436, 490, 549, 473, 536]
And for the positive one
x_data = [ 100, 653, 950, 1383, 2013, 2930,
4265, 6207, 9034, 13148, 19136, 27851,
40535, 58996, 85865, 124969, 181884, 264718,
385277, 560741, 816117, 1187796, 1728748, 2516062,
3661939, 5329675, 7756940, 11289641, 16431220, 23914400,
34805603, 50656927]
y_data = [ 6, 6, 7, 7, 8, 8, 9, 10, 11, 12, 14, 16, 18,
21, 25, 29, 35, 42, 50, 60, 72, 87, 105, 128, 156, 190,
232, 284, 347, 426, 522, 640]
The curve fitting is simply done using
from scipy.optimize import curve_fit

popt, pcov = curve_fit(func, x_data, y_data)
For the plot
import matplotlib.pyplot as plt

x_arr = np.asarray(x_data, dtype=float)
plt.plot(x_arr, func(x_arr, *popt), 'g--', label='fit: d=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
plt.plot(x_data, y_data, 'ro')
plt.xlabel('Volume')
plt.ylabel('Costs')
plt.show()
A simple solution might just look like this:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import least_squares
def fit_function(x, a, b, c, d):
    return a**2 + b**2 * x + c**2 * abs(x)**d

def residuals(params, xData, yData):
    diff = [fit_function(x, *params) - y for x, y in zip(xData, yData)]
    return diff
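# x1Data, y1Data and x2Data, y2Data are the negative- and positive-slope data sets from the question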
fit1 = least_squares(residuals, [0.1, 0.1, 0.1, 0.5], loss='soft_l1', args=(x1Data, y1Data))
print(fit1.x)
fit2 = least_squares(residuals, [0.1, 0.1, 0.1, 0.5], loss='soft_l1', args=(x2Data, y2Data))
print(fit2.x)
testX1 = np.linspace(0, 1.1 * max( x1Data ), 100 )
testX2 = np.linspace(0, 1.1 * max( x2Data ), 100 )
testY1 = [ fit_function( x, *( fit1.x ) ) for x in testX1 ]
testY2 = [ fit_function( x, *( fit2.x ) ) for x in testX2 ]
fig = plt.figure()
ax = fig.add_subplot( 1, 1, 1 )
ax.scatter( x1Data, y1Data )
ax.scatter( x2Data, y2Data )
ax.plot( testX1, testY1 )
ax.plot( testX2, testY2 )
plt.show()
providing
>>[ 1.00232004e-01 -1.10838455e-04 2.50434266e-01 5.73214256e-01]
>>[ 1.00104293e-01 -2.57749592e-05 1.83726191e-01 5.55926678e-01]
and the corresponding fit plot.
It simply squares the parameters, thereby ensuring a positive slope. Naturally, the fit becomes worse if following the decreasing points at the end of data set 1 is forbidden. Concerning this, I'd say those are just statistical outliers. That is why I used least_squares, which can deal with them via a soft loss - see the scipy documentation for details. Depending on what the real data set looks like, I'd think about removing them. Finally, I'd expect zero volume to produce zero costs, so the constant term in the fit function doesn't seem to make sense.
So if the function is only of the type a**2 * x + b**2 * sqrt(x), it looks like this:
where the green graph is the result of leastsq, i.e. without the f_scale option of least_squares.
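An alternative sketch, if you'd rather keep the question's original form c + b*sqrt(x) + d*x: constrain the coefficients directly with the bounds argument of scipy's curve_fit, so the slope terms stay non-negative (a different technique from the squared-parameter trick above).
# Sketch: force non-negative slope coefficients d and b via curve_fit's
# bounds argument (lower, upper) instead of squaring the parameters.
import numpy as np
from scipy.optimize import curve_fit

def func(x, d, b, c):
    return c + b * np.sqrt(x) + d * x

# x_data, y_data: either data set from the question
popt, pcov = curve_fit(func, np.asarray(x_data, float), np.asarray(y_data, float),
                       bounds=([0, 0, -np.inf], [np.inf, np.inf, np.inf]))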
I wasn't entirely sure what to search for, so if this question has been asked before, I apologize in advance:
I have two functions
R := Rref*mx(mx^(4/3) - C0)^(-1)
M := Mref*(mx + C1*mx^(-1)*((1 - C0*mx^(-4/3))^(-3) - 1))
where Rref, Mref, C0 and C1 are constants and mx is the variable. I need to plot R as a function of M. Surely there must be something available in Mathematica to do such a plot - I just can't seem to find it.
The comment is correct, in that what you have is a set of two "parametric equations". You would use the ParametricPlot command. However, the syntax of functions with parameters is sometimes tricky, so let me give you my best recommendation:
R[Rref_, C0_, C1_][mx_] = Rref*mx (mx^(4/3) - C0)^(-1);
M[Mref_, C0_, C1_][mx_] = Mref*(mx + C1*mx^(-1)*((1 - C0*mx^(-4/3))^(-3) - 1));
I like that notation better because you can do things like derivatives:
R[Rref,C0,C1]'[mx]
(* Output: -((4 mx^(4/3) Rref)/(3 (-C0 + mx^(4/3))^2)) + Rref/(-C0 + mx^(4/3)) *)
Then you just plot the functions parametrically:
ParametricPlot[
{R[0.6, 0.3, 0.25][mx], M[0.2, 0.3, 0.25][mx]},
{mx, -10, 10},
PlotRange -> {{-10, 10}, {-10, 10}}
]
You can box this up in a Manipulate command to play with the parameters:
Manipulate[
ParametricPlot[
{R[Rref, C0, C1][mx], M[Mref, C0, C1][mx]},
{mx, -mmax, mmax},
PlotRange -> {{-10, 10}, {-10, 10}}
],
{Rref, 0, 1},
{Mref, 0, 1},
{C0, 0, 1},
{C1, 0, 1},
{mmax, 1, 10}
]
That should do it, I think.
I used the fastgreedy algorithm in igraph for community detection in a weighted, undirected graph. Afterwards I wanted to have a look at the modularity, and I got different values from different methods, and I am wondering why. I have included a short example which demonstrates my problem:
library(igraph)
d<-matrix(c(1, 0.2, 0.3, 0.9, 0.9,
0.2, 1, 0.6, 0.4, 0.5,
0.3, 0.6, 1, 0.1, 0.8,
0.9, 0.4, 0.1, 1, 0.5,
0.9, 0.5, 0.8, 0.5, 1), byrow=T, nrow=5)
g<-graph.adjacency(d, weighted=T, mode="lower",diag=FALSE, add.colnames=NA)
fc<-fastgreedy.community(g)
fc$modularity[3]
#[1] -0.05011095
modularity(g,membership=cutat(fc,steps=2),weights=get.adjacency(g,attr="weight"))
#[1] 0.07193047
I would expect both of the values to be identical. If I try the same with an unweighted graph, I do get the same values:
d2<-round(d,digits=0)
g2<- graph.adjacency(d2, weighted=NULL, mode="lower",diag=FALSE, add.colnames=NA)
fc2<-fastgreedy.community(g2)
plot(fc2,g2)
fc2$modularity[3]
#[1] 0.15625
modularity(g2,membership=cutat(fc2,steps=2))
#[1] 0.15625
Another user had a similar problem, but I have the current version of igraph, so that should not be the problem. Can someone explain to me why there is a difference, or is there a problem with my code that I don't see?
The line
modularity(g,membership=cutat(fc,steps=2),weights=get.adjacency(g,attr="weight"))
is wrong: get.adjacency() returns the full adjacency matrix, whereas modularity() expects a vector with one weight per edge. If you want to pass the edge weights to modularity(), do it with E(g)$weight:
modularity(g, membership = cutat(fc, steps = 2), weights = E(g)$weight)
# [1] -0.05011095