Manim: diagonal bar_names in a bar chart

I want to animate a bar chart in manim and it works just fine. However, the bar_names are long and have to be displayed rather small. Is there a way to rotate them so they can be displayed bigger?
CONFIG = {
    "max_value": 100,
    "bar_names": ["Fleisch von Wiederkäuern", "Anderes Fleisch, Fisch", "Milchprodukte", "Früchte", "Snacks, etc.", "Gemüse", "Pflanzliche Öle", "Getreideprodukte", "Pflanzliche Proteine"],
    "bar_label_scale_val": 0.2,
    "bar_stroke_width": 0,
    "width": 10,
    "height": 6,
    "label_y_axis": False,
}

def construct(self):
    composition = [96.350861, 18.5706488, 14.7071608, 8.25588773, 7.33856028, 4.24083463, 1.65574964, 1.36437485, 1]
    chart = BarChart(values=composition, **self.CONFIG)
    self.play(Write(chart), run_time=2)

Maybe something like this?
(I just made new labels, so delete the old ones or scale them to zero.)
(I also modified some of the names, because my LaTeX crashed on some of the special characters.)
CONFIG = {
    "height": 4,
    "width": 10,
    "n_ticks": 4,
    "tick_width": 0.2,
    "label_y_axis": False,
    "y_axis_label_height": 0.25,
    "max_value": 100,
    "bar_colors": [BLUE, YELLOW],
    "bar_fill_opacity": 0.8,
    "bar_stroke_width": 0,
    "bar_names": ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte", "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"],
    "bar_label_scale_val": 0
}

def construct(self):
    bar_names = ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte", "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"]
    Lsize = 0.55        # label scale
    Lseparation = 1.1   # horizontal spacing between labels
    Lpositionx = -5.4   # x position of the first label
    Lpositiony = 2      # distance below the origin
    bar_labels = VGroup()
    for i in range(len(bar_names)):
        label = TexMobject(bar_names[i])
        label.scale(Lsize)
        label.move_to(DOWN * Lpositiony + (i * Lseparation + Lpositionx) * RIGHT)
        label.rotate(np.pi * (1.5 / 6))  # 45 degrees
        bar_labels.add(label)
    composition = [96.350861, 18.5706488, 14.7071608, 8.25588773, 7.33856028, 4.24083463, 1.65574964, 1.36437485, 1]
    chart = BarChart(values=composition, **self.CONFIG)
    chart.shift(UP)
    self.play(Write(chart), Write(bar_labels), run_time=2)
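For reference, the placement numbers used above can be checked outside Manim; this is a plain NumPy sketch of the same arithmetic (the RIGHT/DOWN constants are reproduced here so it runs standalone):

```python
import numpy as np

# Manim's direction constants, reproduced so the sketch runs without Manim.
RIGHT = np.array([1.0, 0.0, 0.0])
DOWN = np.array([0.0, -1.0, 0.0])

Lseparation, Lpositionx, Lpositiony = 1.1, -5.4, 2

# Label i is centered at x = i*1.1 - 5.4, y = -2: the nine labels are
# evenly spread from x = -5.4 to x = 3.4, below the (shifted-up) chart.
positions = [DOWN * Lpositiony + (i * Lseparation + Lpositionx) * RIGHT
             for i in range(9)]

# The rotation angle pi*(1.5/6) is just 45 degrees.
angle_deg = np.degrees(np.pi * (1.5 / 6))
print(positions[0][:2], positions[-1][:2], angle_deg)
```

Tweaking Lseparation and Lpositionx this way is how you line the rotated labels up with the bars.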

# Manim Community Version 0.7.0 in Google Colab
%%manim -qm -v WARNING BarChartExample2
from manim import *
import numpy as np
mobject.probability.np = np

class BarChartExample2(Scene):
    CONFIG = {
        "height": 4,
        "width": 10,
        "n_ticks": 4,
        "tick_width": 0.2,
        "label_y_axis": False,
        "y_axis_label_height": 0.25,
        "max_value": 100,
        "bar_colors": [BLUE, YELLOW],
        "bar_fill_opacity": 0.8,
        "bar_stroke_width": 0,
        "bar_names": ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte",
                      "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"],
        "bar_label_scale_val": 0
    }

    def construct(self):
        bar_names = ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte",
                     "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"]
        Lsize = 0.55
        Lseparation = 1.1
        Lpositionx = -5.4
        Lpositiony = 2
        bar_labels = VGroup()
        for i in range(len(bar_names)):
            # label = TexMobject(bar_names[i])  # old manim name
            label = MathTex(bar_names[i])
            label.scale(Lsize)
            label.move_to(DOWN * Lpositiony + (i * Lseparation + Lpositionx) * RIGHT)
            label.rotate(np.pi * (1.5 / 6))
            bar_labels.add(label)
        composition = [96.350861, 18.5706488, 14.7071608, 8.25588773, 7.33856028, 4.24083463, 1.65574964, 1.36437485, 1]
        chart = BarChart(values=composition, **self.CONFIG)
        chart.shift(UP)
        self.play(Write(chart), Write(bar_labels), run_time=2)


Chart.js graphs rendered incorrectly on Chrome and Edge

Something I just noticed is that graphs in Chart.js are rendered with double X and Y axes on Chrome and Edge. They look the way I want them in Firefox, and I'm sure they were fine on Chrome a few weeks ago.
Is this a bug in Chart.js, or do I have my code wrong?
This is how it looks in Firefox:
And this is in Edge and Chrome:
There is a second Y axis ranging from 0 to 1, and the X axis has the days in what looks like Unix time and a secondary axis with times per 4 hours.
The website is http://www.maasluip.nl/energyprice/SPOT.html
The script for the first graph is this:
var ctx = document.getElementById('SpotWeek');
var PVDay = new Chart(ctx, {
  type: 'line',
  data: {
    labels: [
'2022-11-12 00:00:00','2022-11-12 01:00:00','2022-11-12 02:00:00','2022-11-12 03:00:00','2022-11-12 04:00:00','2022-11-12 05:00:00','2022-11-12 06:00:00','2022-11-12 07:00:00','2022-11-12 08:00:00','2022-11-12 09:00:00','2022-11-12 10:00:00','2022-11-12 11:00:00','2022-11-12 12:00:00','2022-11-12 13:00:00','2022-11-12 14:00:00','2022-11-12 15:00:00','2022-11-12 16:00:00','2022-11-12 17:00:00','2022-11-12 18:00:00','2022-11-12 19:00:00','2022-11-12 20:00:00','2022-11-12 21:00:00','2022-11-12 22:00:00','2022-11-12 23:00:00','2022-11-13 00:00:00','2022-11-13 01:00:00','2022-11-13 02:00:00','2022-11-13 03:00:00','2022-11-13 04:00:00','2022-11-13 05:00:00','2022-11-13 06:00:00','2022-11-13 07:00:00','2022-11-13 08:00:00','2022-11-13 09:00:00','2022-11-13 10:00:00','2022-11-13 11:00:00','2022-11-13 12:00:00','2022-11-13 13:00:00','2022-11-13 14:00:00','2022-11-13 15:00:00','2022-11-13 16:00:00','2022-11-13 17:00:00','2022-11-13 18:00:00','2022-11-13 19:00:00','2022-11-13 20:00:00','2022-11-13 21:00:00','2022-11-13 22:00:00','2022-11-13 23:00:00','2022-11-14 00:00:00','2022-11-14 01:00:00','2022-11-14 02:00:00','2022-11-14 03:00:00','2022-11-14 04:00:00','2022-11-14 05:00:00','2022-11-14 06:00:00','2022-11-14 07:00:00','2022-11-14 08:00:00','2022-11-14 09:00:00','2022-11-14 10:00:00','2022-11-14 11:00:00','2022-11-14 12:00:00','2022-11-14 13:00:00','2022-11-14 14:00:00','2022-11-14 15:00:00','2022-11-14 16:00:00','2022-11-14 17:00:00','2022-11-14 18:00:00','2022-11-14 19:00:00','2022-11-14 20:00:00','2022-11-14 21:00:00','2022-11-14 22:00:00','2022-11-14 23:00:00','2022-11-15 00:00:00','2022-11-15 01:00:00','2022-11-15 02:00:00','2022-11-15 03:00:00','2022-11-15 04:00:00','2022-11-15 05:00:00','2022-11-15 06:00:00','2022-11-15 07:00:00','2022-11-15 08:00:00','2022-11-15 09:00:00','2022-11-15 10:00:00','2022-11-15 11:00:00','2022-11-15 12:00:00','2022-11-15 13:00:00','2022-11-15 14:00:00','2022-11-15 15:00:00','2022-11-15 16:00:00','2022-11-15 17:00:00','2022-11-15 
18:00:00','2022-11-15 19:00:00','2022-11-15 20:00:00','2022-11-15 21:00:00','2022-11-15 22:00:00','2022-11-15 23:00:00','2022-11-16 00:00:00','2022-11-16 01:00:00','2022-11-16 02:00:00','2022-11-16 03:00:00',
'2022-11-16 04:00:00','2022-11-16 05:00:00','2022-11-16 06:00:00','2022-11-16 07:00:00','2022-11-16 08:00:00','2022-11-16 09:00:00','2022-11-16 10:00:00','2022-11-16 11:00:00','2022-11-16 12:00:00','2022-11-16 13:00:00','2022-11-16 14:00:00','2022-11-16 15:00:00','2022-11-16 16:00:00','2022-11-16 17:00:00','2022-11-16 18:00:00','2022-11-16 19:00:00','2022-11-16 20:00:00','2022-11-16 21:00:00','2022-11-16 22:00:00','2022-11-16 23:00:00','2022-11-17 00:00:00','2022-11-17 01:00:00','2022-11-17 02:00:00','2022-11-17 03:00:00','2022-11-17 04:00:00','2022-11-17 05:00:00','2022-11-17 06:00:00','2022-11-17 07:00:00','2022-11-17 08:00:00','2022-11-17 09:00:00','2022-11-17 10:00:00','2022-11-17 11:00:00','2022-11-17 12:00:00','2022-11-17 13:00:00','2022-11-17 14:00:00','2022-11-17 15:00:00','2022-11-17 16:00:00','2022-11-17 17:00:00','2022-11-17 18:00:00','2022-11-17 19:00:00','2022-11-17 20:00:00','2022-11-17 21:00:00','2022-11-17 22:00:00','2022-11-17 23:00:00'],
    datasets: [{
      label: 'SPOT prijs ex BTW',
      data: [
'0.13780','0.13421','0.13001','0.12680','0.12944','0.12968','0.15800','0.17250','0.17410','0.16693','0.14185','0.13030','0.13378','0.13782','0.14988','0.20001','0.18990','0.24070','0.23800','0.19971','0.17807','0.16760','0.16127','0.15870','0.16880','0.15221','0.14300','0.12700','0.13407','0.13529','0.13529','0.15075','0.15596','0.13157','0.12031','0.13300','0.13013','0.12500','0.13746','0.14145','0.14913','0.19890','0.19797','0.17310','0.15872','0.15000','0.14200','0.13300','0.10454','0.10411','0.11910','0.10800','0.11955','0.11999','0.16720','0.21661','0.22470','0.19257','0.17470','0.16812','0.15866','0.17510','0.20732','0.21285','0.21315','0.25418','0.27039','0.19794','0.17530','0.16800','0.15791','0.15000','0.13565','0.12818','0.12899','0.10828','0.10863','0.12594','0.16405','0.21193','0.22140','0.21000','0.20010','0.19868','0.18983','0.20565','0.22216','0.22299','0.21735','0.22167','0.21501','0.21493','0.17498','0.15000','0.13876','0.13600','0.10180','0.10040','0.14255','0.09686','0.07959','0.09110','0.12074','0.22990','0.28794','0.21540','0.17442','0.16378','0.16682','0.17400','0.20107','0.22058','0.23610','0.27746','0.26486','0.20207','0.15420','0.11410','0.09948','0.06491','0.05116','0.02777','0.01633','0.00510','0.00588','0.01106','0.09491','0.18000','0.22300','0.26180','0.20550','0.17990','0.17050','0.16500','0.18000','0.17950','0.17990','0.29473','0.30000','0.23200','0.19500','0.14684','0.15150','0.13242'],
      backgroundColor: ['rgba(54, 162, 235, 1)'],
      borderColor: ['rgba(54, 162, 235, 1)'],
      borderWidth: 1,
      pointRadius: 1,
    }, {
      label: 'Consumentenprijs',
      data: [
'0.24816','0.24382','0.23873','0.23485','0.23804','0.23833','0.27260','0.29015','0.29208','0.28341','0.25306','0.23908','0.24329','0.24818','0.26278','0.32343','0.31120','0.37267','0.36940','0.32307','0.29689','0.28422','0.27656','0.27345','0.28567','0.26560','0.25445','0.23509','0.24365','0.24512','0.24512','0.26383','0.27013','0.24062','0.22700','0.24235','0.23888','0.23267','0.24775','0.25258','0.26187','0.32209','0.32096','0.29087','0.27347','0.26292','0.25324','0.24235','0.20791','0.20739','0.22553','0.21210','0.22608','0.22661','0.28373','0.34352','0.35331','0.31443','0.29281','0.28485','0.27340','0.29329','0.33228','0.33897','0.33933','0.38898','0.40859','0.32093','0.29353','0.28470','0.27249','0.26292','0.24556','0.23652','0.23750','0.21244','0.21286','0.23381','0.27992','0.33786','0.34931','0.33552','0.32354','0.32182','0.31112','0.33026','0.35023','0.35124','0.34441','0.34964','0.34158','0.34149','0.29315','0.26292','0.24932','0.24598','0.20460','0.20290','0.25391','0.19862','0.17772','0.19165','0.22752','0.35960','0.42983','0.34205','0.29247','0.27959','0.28327','0.29196','0.32472','0.34832','0.36710','0.41715','0.40190','0.32593','0.26800','0.21948','0.20179','0.15996','0.14332','0.11502','0.10118','0.08759','0.08854','0.09480','0.19626','0.29922','0.35125','0.39820','0.33008','0.29910','0.28773','0.28107','0.29922','0.29862','0.29910','0.43804','0.44442','0.36214','0.31737','0.25910','0.26474','0.24165'],
      backgroundColor: ['rgba(255, 73, 17, 1)'],
      borderColor: ['rgba(255, 73, 17, 1)'],
      borderWidth: 2,
      pointRadius: 1,
    }]
  },
  options: {
    animation: false,
    scales: {
      yAxis: {
        position: 'left',
        beginAtZero: true,
        title: {
          text: 'Euro',
          display: true
        },
      },
      xAxis: {
        type: 'time',
        ticks: {
          align: "center",
          source: "data",
          callback: function(value, index, values) {
            return ((index % 24) == 0) ? value : null;
          }
        },
        offset: false,
        padding: 1,
        time: {
          tooltipFormat: 'DD-MM-YYYY HH:mm',
          displayFormats: {
            hour: 'DD MMM'
          }
        }
      }
    }
  }
});
I found out one thing: I had my x- and y-axes named xAxis and yAxis, and apparently that does not work anymore. Renaming them to x and y fixed a lot of errors, but I still have some graphs that behave differently. Maybe they changed the logic of their processing; I can't find anything about it. (In Chart.js v3 the entries in scales are keyed by axis ID, so x and y configure the default axes, while unrecognized keys like xAxis simply create additional axes, which would explain the doubled axes.)

I am experiencing an error while tuning catboost with optuna

This is my code.
import catboost as cb

def cb_objective(trial):
    cb_param = {
        'grow_policy': trial.suggest_categorical('grow_policy', [
            # 'SymmetricTree',
            # 'Depthwise',
            'Lossguide']),
        'learning_rate': trial.suggest_loguniform('learning_rate', 0.01, 0.8),
        'n_estimators': trial.suggest_int("n_estimators", 300, 3000),
        'max_depth': trial.suggest_int("max_depth", 3, 16),
        'random_strength': trial.suggest_int('random_strength', 0, 100),
        'l2_leaf_reg': trial.suggest_loguniform("l2_leaf_reg", 1e-6, 3.0),
        'max_bin': trial.suggest_int("max_bin", 25, 300),
        'od_type': trial.suggest_categorical('od_type', ['IncToDec', 'Iter']),
        'bootstrap_type': trial.suggest_categorical("bootstrap_type", ["Bayesian", "Bernoulli", "Poisson"])}
    if cb_param['grow_policy'] == 'Lossguide' or cb_param['grow_policy'] == 'Depthwise':
        cb_param['min_child_samples'] = trial.suggest_int('min_child_samples', 1, 100)
    if cb_param['grow_policy'] == 'Lossguide':
        cb_param['num_leaves'] = trial.suggest_int('num_leaves', 20, 50)
    if cb_param['bootstrap_type'] == 'Bayesian':
        cb_param['bagging_temperature'] = trial.suggest_loguniform('bagging_temperature', 0.01, 100.00)
    elif cb_param['bootstrap_type'] == 'Bernoulli' or cb_param['bootstrap_type'] == 'Poisson':
        cb_param['subsample'] = trial.suggest_discrete_uniform('subsample', 0.6, 1.0, 0.1)
    _fit_params = {'early_stopping_rounds': 100,
                   'eval_set': [(X, y)],
                   'verbose': 0}
    cbr = cb.CatBoostRegressor(
        random_state=42,
        task_type='GPU',
        **cb_param
    )
I'm sorry the code is so long. When grow_policy is 'Lossguide' it runs without any problems, but with 'SymmetricTree' or 'Depthwise' the Colab kernel dies without printing an error message. (What's interesting is that the kernel only goes down after a few trials; I checked that there is no memory problem.)
I think there is either a problematic parameter combination or the GPU is running out of memory. Does anyone know CatBoost well?
No matter how many times I read the official documentation, this seems to be beyond my ability.
picture 1: grow_policy = SymmetricTree or Depthwise, stops at step 6
picture 2: grow_policy = Lossguide, works fine (same parameters)
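As a side note, the conditional search-space logic can be exercised without CatBoost or a GPU at all. Below is a minimal sketch using a hypothetical `FakeTrial` stand-in (not part of Optuna's API, invented here for illustration) that returns fixed choices, so you can check exactly which parameter combinations the objective builds for each grow_policy:

```python
class FakeTrial:
    """Minimal stand-in for an Optuna Trial: returns fixed or lower-bound choices."""
    def __init__(self, choices):
        self.choices = choices
    def suggest_categorical(self, name, options):
        return self.choices.get(name, options[0])
    def suggest_int(self, name, lo, hi):
        return lo
    def suggest_loguniform(self, name, lo, hi):
        return lo
    def suggest_discrete_uniform(self, name, lo, hi, step):
        return lo

def build_params(trial):
    # Same conditional structure as the objective in the question.
    p = {'grow_policy': trial.suggest_categorical('grow_policy',
             ['SymmetricTree', 'Depthwise', 'Lossguide']),
         'bootstrap_type': trial.suggest_categorical('bootstrap_type',
             ['Bayesian', 'Bernoulli', 'Poisson'])}
    if p['grow_policy'] in ('Lossguide', 'Depthwise'):
        p['min_child_samples'] = trial.suggest_int('min_child_samples', 1, 100)
    if p['grow_policy'] == 'Lossguide':
        p['num_leaves'] = trial.suggest_int('num_leaves', 20, 50)
    if p['bootstrap_type'] == 'Bayesian':
        p['bagging_temperature'] = trial.suggest_loguniform('bagging_temperature', 0.01, 100.0)
    else:
        p['subsample'] = trial.suggest_discrete_uniform('subsample', 0.6, 1.0, 0.1)
    return p

# SymmetricTree gets no num_leaves / min_child_samples: only valid combos are built.
print(build_params(FakeTrial({'grow_policy': 'SymmetricTree'})))
```

Checking the built dictionaries this way helps rule out an invalid parameter combination before blaming GPU memory.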

python : Parsing json file into list of dictionaries

I have the following JSON file: annotations. Here is a screenshot of the tree structure of the JSON file.
I want to parse it and extract the following info.
Here is a link to the page the screenshot is taken from: Standard Dataset Dicts.
I tried to use this code, which is not working as expected.
def get_buildings_dicts(img_dir):
    json_file = os.path.join(img_dir, "annotations.json")
    with open(json_file) as f:
        imgs_anns = json.load(f)
    dataset_dicts = []
    for idx, v in enumerate(imgs_anns):
        record = {}
        filename = os.path.join(img_dir, v["imagePath"])
        height, width = cv2.imread(filename).shape[:2]
        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width
        annos = v["shapes"][idx]
        objs = []
        for anno in annos:
            # assert not anno["region_attributes"]
            anno = anno["shape_type"]
            px = anno["points"][0]
            py = anno["points"][1]
            poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
            poly = [p for x in poly for p in x]
            obj = {
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": 0,
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts
Here is the expected output for the final dict items:
{
    "file_name": "balloon/train/34020010494_e5cb88e1c4_k.jpg",
    "image_id": 0,
    "height": 1536,
    "width": 2048,
    "annotations": [
        {
            "bbox": [994, 619, 1445, 1166],
            "bbox_mode": <BoxMode.XYXY_ABS: 0>,
"segmentation": [[1020.5, 963.5, 1000.5, 899.5, 994.5, 841.5, 1003.5, 787.5, 1023.5, 738.5, 1050.5, 700.5, 1089.5, 663.5, 1134.5, 638.5, 1190.5, 621.5, 1265.5, 619.5, 1321.5, 643.5, 1361.5, 672.5, 1403.5, 720.5, 1428.5, 765.5, 1442.5, 800.5, 1445.5, 860.5, 1441.5, 896.5, 1427.5, 942.5, 1400.5, 990.5, 1361.5, 1035.5, 1316.5, 1079.5, 1269.5, 1112.5, 1228.5, 1129.5, 1198.5, 1134.5, 1207.5, 1144.5, 1210.5, 1153.5, 1190.5, 1166.5, 1177.5, 1166.5, 1172.5, 1150.5, 1174.5, 1136.5, 1170.5, 1129.5, 1153.5, 1122.5, 1127.5, 1112.5, 1104.5, 1084.5, 1061.5, 1037.5, 1032.5, 989.5, 1020.5, 963.5]],
"category_id": 0
}
]
}
I think the only tricky part is dealing with the nested lists, but a handful of comprehensions can probably make life easier for us.
Try:
import json

new_images = []
with open("merged_file.json", "r") as file_in:
    for index, image in enumerate(json.load(file_in)):
        # height, width = cv2.imread(filename).shape[:2]
        height, width = 100, 100
        new_images.append({
            "image_id": index,
            "filename": image["imagePath"],
            "height": height,
            "width": width,
            "annotations": [
                {
                    "category_id": 0,
                    # "bbox_mode": BoxMode.XYXY_ABS,
                    "bbox_mode": 0,
                    "bbox": [
                        min(x for x, y in shape["points"]),
                        min(y for x, y in shape["points"]),
                        max(x for x, y in shape["points"]),
                        max(y for x, y in shape["points"])
                    ],
                    "segmentation": [coord for point in shape["points"] for coord in point]
                }
                for shape in image["shapes"]
            ],
        })
print(json.dumps(new_images, indent=2))
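To see what the `bbox` and `segmentation` comprehensions produce, here is the same logic run on a made-up three-point shape (the coordinates are invented, purely for illustration):

```python
# A hypothetical shape in the same format as the "points" lists above.
shape = {"points": [[1020.5, 963.5], [1000.5, 899.5], [994.5, 841.5]]}

# bbox = [min_x, min_y, max_x, max_y] over all points of the polygon.
bbox = [min(x for x, y in shape["points"]),
        min(y for x, y in shape["points"]),
        max(x for x, y in shape["points"]),
        max(y for x, y in shape["points"])]

# segmentation flattens [[x1, y1], [x2, y2], ...] into [x1, y1, x2, y2, ...].
segmentation = [coord for point in shape["points"] for coord in point]

print(bbox)          # [994.5, 841.5, 1020.5, 963.5]
print(segmentation)  # [1020.5, 963.5, 1000.5, 899.5, 994.5, 841.5]
```

This matches the shape of the expected output: the bbox is axis-aligned extrema of the polygon, and the segmentation is the flattened point list.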

Curve Fitting past last data point(s)

I am trying to fit a curve to a set of data points but would like to preserve certain characteristics.
Like in this graph, I have curves that almost end up being linear and some that do not. I need a functional form to interpolate between the given data points and to extrapolate past the last given point.
The curves have been created using a simple regression:
def func(x, d, b, c):
    return c + b * np.sqrt(x) + d * x
My question now is: what is the best approach to ensure a positive slope past the last data point(s)? In my application a decrease in costs while increasing the volume doesn't make sense, even if the data says so.
I would like to keep the order as low as possible; maybe ^3 would still be fine.
The data used to create the curve with the negative slope is
x_data = [ 100, 560, 791, 1117, 1576, 2225,
3141, 4434, 6258, 8834, 12470, 17603,
24848, 35075, 49511, 69889, 98654, 139258,
196573, 277479, 391684, 552893, 780453, 1101672,
1555099, 2195148, 3098628, 4373963, 6174201, 8715381,
12302462, 17365915]
y_data = [ 7, 8, 9, 10, 11, 12, 14, 16, 21, 27, 32, 30, 31,
38, 49, 65, 86, 108, 130, 156, 183, 211, 240, 272, 307, 346,
389, 436, 490, 549, 473, 536]
And for the positive one
x_data = [ 100, 653, 950, 1383, 2013, 2930,
4265, 6207, 9034, 13148, 19136, 27851,
40535, 58996, 85865, 124969, 181884, 264718,
385277, 560741, 816117, 1187796, 1728748, 2516062,
3661939, 5329675, 7756940, 11289641, 16431220, 23914400,
34805603, 50656927]
y_data = [ 6, 6, 7, 7, 8, 8, 9, 10, 11, 12, 14, 16, 18,
21, 25, 29, 35, 42, 50, 60, 72, 87, 105, 128, 156, 190,
232, 284, 347, 426, 522, 640]
The curve fitting is simply done using
popt, pcov = curve_fit(func, x_data, y_data)
For the plot:
x_arr = np.array(x_data)
plt.plot(x_arr, func(x_arr, *popt), 'g--', label='fit: d=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
plt.plot(x_data, y_data, 'ro')
plt.xlabel('Volume')
plt.ylabel('Costs')
plt.show()
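One thing worth noting: func is linear in its parameters (c, b, d), so the same fit can also be obtained with plain linear least squares, without `curve_fit`. A small self-contained sketch on synthetic data (the x values are the first few points from the question; the coefficients are made up so recovery can be checked):

```python
import numpy as np

# Synthetic data generated from known coefficients, so we can verify recovery.
x = np.array([100.0, 560.0, 791.0, 1117.0, 1576.0, 2225.0])
true_c, true_b, true_d = 2.0, 0.5, 0.01
y = true_c + true_b * np.sqrt(x) + true_d * x

# Design matrix with one column per basis function of c + b*sqrt(x) + d*x.
A = np.column_stack([np.ones_like(x), np.sqrt(x), x])
c, b, d = np.linalg.lstsq(A, y, rcond=None)[0]
print(c, b, d)  # recovers roughly 2.0, 0.5, 0.01
```

Seeing the model as linear in its coefficients also makes clear why the extrapolated slope is simply b/(2*sqrt(x)) + d, which goes negative whenever the fitted d is negative.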
A simple solution might just look like this:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import least_squares

# x1Data/y1Data and x2Data/y2Data are the two data sets from the question.

def fit_function(x, a, b, c, d):
    return a**2 + b**2 * x + c**2 * abs(x)**d

def residuals(params, xData, yData):
    diff = [fit_function(x, *params) - y for x, y in zip(xData, yData)]
    return diff

fit1 = least_squares(residuals, [.1, .1, .1, .5], loss='soft_l1', args=(x1Data, y1Data))
print(fit1.x)
fit2 = least_squares(residuals, [.1, .1, .1, .5], loss='soft_l1', args=(x2Data, y2Data))
print(fit2.x)

testX1 = np.linspace(0, 1.1 * max(x1Data), 100)
testX2 = np.linspace(0, 1.1 * max(x2Data), 100)
testY1 = [fit_function(x, *(fit1.x)) for x in testX1]
testY2 = [fit_function(x, *(fit2.x)) for x in testX2]

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x1Data, y1Data)
ax.scatter(x2Data, y2Data)
ax.plot(testX1, testY1)
ax.plot(testX2, testY2)
plt.show()
providing
>>[ 1.00232004e-01 -1.10838455e-04 2.50434266e-01 5.73214256e-01]
>>[ 1.00104293e-01 -2.57749592e-05 1.83726191e-01 5.55926678e-01]
and
It simply takes the parameters as squares, therefore ensuring a positive slope. Naturally, the fit becomes worse if following the decreasing points at the end of data set 1 is forbidden. Concerning this, I'd say those are just statistical outliers. Therefore, I used least_squares, which can deal with them via a soft loss; see the scipy documentation for details. Depending on what the real data set looks like, I'd think about removing them. Finally, I'd expect zero volume to produce zero costs, so the constant term in the fit function doesn't seem to make sense.
So if the function is only of the type a**2 * x + b**2 * sqrt(x), it looks like:
where the green graph is the result of leastsq, i.e. without the f_scale option of least_squares.
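You can verify numerically that squaring the parameters keeps the fitted curve non-decreasing even when a fitted coefficient comes out negative. A small sketch, using values close to the first fit printed above (rounded here for readability):

```python
def fit_function(x, a, b, c, d):
    return a**2 + b**2 * x + c**2 * abs(x)**d

# Parameters roughly matching the first fit; note b is negative, but only
# b**2 enters the model, so the slope b**2 + c**2*d*x**(d-1) stays
# non-negative for x > 0 as long as d >= 0.
params = (1.002e-01, -1.108e-04, 2.504e-01, 5.732e-01)

xs = [float(v) for v in range(1, 20000, 500)]
ys = [fit_function(x, *params) for x in xs]
assert all(y2 >= y1 for y1, y2 in zip(ys, ys[1:]))  # monotone non-decreasing
```

The same check applied to the question's original func would fail for data set 1, since its fitted linear coefficient is negative.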

How do I plot a function and data in Mathematica?

Simple question but I can't find the answer.
I want to combine a ListLinePlot and a regular Plot (of a function) onto one plot. How do I do this?
Thanks.
Use Show, e.g.
Show[Plot[x^2, {x, 0, 3.5}], ListPlot[{1, 4, 9}]]
Note: if plot options conflict, Show uses the first plot's option, unless the option is specified in Show itself. I.e.
Show[Plot[x^2, {x, 0, 3.5}, ImageSize -> 100],
ListPlot[{1, 4, 9}, ImageSize -> 400]]
shows a combined plot of size 100.
Show[Plot[x^2, {x, 0, 3.5}, ImageSize -> 100],
ListPlot[{1, 4, 9}, ImageSize -> 400], ImageSize -> 300]
shows a combined plot of size 300.
An alternative to using Show to combine two separate plots is to use Epilog to add the data points to the main plot. For example:
data = Table[{i, Sin[i] + .1 RandomReal[]}, {i, 0, 10, .5}];
Plot[Sin[x], {x, 0, 10}, Epilog -> Point[data], PlotRange -> All]
or
Plot[Sin[x], {x, 0, 10}, Epilog -> Line[data], PlotRange -> All]