According to pandoc(1), pandoc supports internal links in HTML slides. But nothing happens for me when I click one.
A minimal example:
% A minimal example
% moi
% 2015-04-04
# Section 1
la la la
# Section 2
cf. [Section 1](#section-1)
I save the foregoing as example.md. Then in bash I run
file=example && \
pandoc -fmarkdown -tslidy --standalone --self-contained -o$file.html $file.md
Having opened the resulting HTML slides in a web browser, I click "Section 1" on slide "Section 2", but nothing happens. I have tried this in multiple browsers on multiple devices: xombrero on a MacBook running Arch Linux, Chrome on a Moto X running Android, and Chrome on a Sony laptop running Windows 8.1. The result is the same everywhere. I am using pandoc version 1.13.2.
The link produced by pandoc for the internal reference is different from the link of the relevant slide: in the present example, the former ends in #section-1 and the latter in #(2). I suppose that this is why clicking the internal link does not return to the relevant slide. Is there some way to make internal links go to their relevant slides?
Here's the relevant HTML:
<body>
<div class="slide titlepage">
<h1 class="title">A minimal example</h1>
<p class="author">
moi
</p>
<p class="date">2015-04-04</p>
</div>
<div id="section-1" class="slide section level1">
<h1>Section 1</h1>
<p>la la la</p>
</div>
<div id="section-2" class="slide section level1">
<h1>Section 2</h1>
<p>cf. Section 1</p>
</div>
</body>
Thanks for any help!
Your problem is not with Pandoc but with Slidy. Pandoc is creating the right HTML for an ordinary HTML page but the Slidy slide software does not support going to a <div> - only going to a slide number.
If you change your link to cf. [Section 1](#(2)) ('2' being the number of the slide with 'Section 1') then it will work fine.
BTW - It works perfectly in a reveal.js slideshow created by Pandoc.
Although the question was asked more than five years ago, I recently had the same problem and created a postprocessing script in Python, which works for me. Essentially it reads the Pandoc -> Slidy HTML output, scans for internal links, and replaces each link target with the number of the slide on which the target id is defined.
def Fix_Internal_Slidy_Links(infilename, outfilename):
    """Replace all internal link targets with the number of the Slidy page
    on which the target id is defined.
    """
    page_pattern = ' class="slide'
    id_pattern = ' id="'
    internal_link_pattern = '<a href="#'
    id_dict = dict()
    whole_text = []
    cur_page = 0

    # First read all ids and associate them with the current page in id_dict
    with open(infilename, 'r', encoding='utf-8') as filecontent:
        for cur_line in filecontent:
            whole_text.append(cur_line)
            if page_pattern in cur_line:
                cur_page += 1
            while id_pattern in cur_line:
                startidx = cur_line.index(id_pattern)
                cur_line = cur_line[startidx + len(id_pattern):]
                lineparts = cur_line.split('"')
                # Only record the id if it is properly terminated
                if len(lineparts) > 1:
                    id_dict[lineparts[0]] = cur_page

    # Then process the text again and replace all internal links known in id_dict
    with open(outfilename, 'w', encoding='utf-8') as filecontent:
        for cur_line in whole_text:
            if internal_link_pattern in cur_line:
                temp_line = ''
                while internal_link_pattern in cur_line:
                    startidx = cur_line.index(internal_link_pattern)
                    # Copy everything up to and including '<a href="#'
                    temp_line += cur_line[:startidx + len(internal_link_pattern)]
                    cur_line = cur_line[startidx + len(internal_link_pattern):]
                    lineparts = cur_line.split('"')
                    if len(lineparts) < 2:
                        # The href is not properly terminated, so leave the rest untouched
                        break
                    link = lineparts[0]
                    if link in id_dict:
                        # Point the link at the Slidy page number assigned to this id
                        replacement_link = '(' + str(id_dict[link]) + ')"'
                    else:
                        # The link target is not known in id_dict, so do not change it
                        replacement_link = lineparts[0] + '"'
                    temp_line += replacement_link
                    cur_line = cur_line[len(lineparts[0]) + 1:]
                cur_line = temp_line + cur_line
            filecontent.write(cur_line)
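For reference, here is a minimal sketch of how the script above might be invoked; the file names are only placeholders, not anything prescribed by the original post:

if __name__ == '__main__':
    # Rewrite the Slidy HTML produced by pandoc so that internal links use slide numbers
    Fix_Internal_Slidy_Links('example.html', 'example_fixed.html')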
I am new to Doxygen and its config!
I have created a new alias within my external.cfg config file.
ALIASES += frmnam{1}="\#frmnam=\1"
Within the header of all our markdown pages we have a copyright notice; as you can see, I have added the alias tag to the top of the paragraph.
Markdown
# Client Details
<p style="display:none">
@frmnam{clientAttributes}
#version 1
#copyright The company.com
#date 1996-2022
#brief Client Attributes
</p>
As you can see below, in the generated output we get an additional </p> at the end of line 1 and a <p > at the start of line 2.
HTML Output
<p style="display:none"></p>
<p >#frmnam=clientAttributes
#version 1
#copyright The company.com
#date 1996-2022
</p>
Required Output
<p style="display:none">
#frmnam=clientAttributes
#version 1
#copyright The company.com
#date 1996-2022
</p>
Does anyone know how to stop this from being generated?
I have tried multiple variations, including putting the HTML in the alias itself, but no joy.
Any help would be gratefully received
thanks
Edit: doxygen -x output (I had to remove some path names)
C:\Program Files\doxygen\bin>doxygen -x "C:\Dev\########\########\WMS\########\WMSDev\Core\Product\doxyman.man.external.eng.cfg"
warning: Tag 'CLASS_DIAGRAMS' at line 2238 of file 'C:\Dev\########\########\WMS\########\WMSDev\Core\Product\doxyman.man.external.eng.cfg' has become obsolete.
To avoid this warning please remove this line from your configuration file or upgrade it using "doxygen -u"
# Difference with default Doxyfile 1.9.3 (c0b9eafbfb53286ce31e75e2b6c976ee4d345473)
PROJECT_NAME = "######## External Guide"
PROJECT_BRIEF = "######## External Guide"
OUTPUT_DIRECTORY = C:\Dev\########\V60\########\doc\eng\external
STRIP_FROM_PATH = C:\Dev\########\########\WMS\########\WMSDev \
C:\Dev\########\V60\########\doc\src \
C:\Dev\########
ALIASES = "intransaction=\n<b>Transaction control is managed by calling function.</b>" \
"outtransaction=\n<b>Transaction control is managed within function. Commit and Rollback internally.</b>" \
frmnam{1}=\#frmnam=\1
EXTENSION_MAPPING = pc=C \
s=C \
msg=md \
txt=md
EXTRACT_STATIC = YES
SHOW_FILES = NO
SHOW_NAMESPACES = NO
WARN_LOGFILE = C:\Dev\########\V60\########\build\doxygen.external.eng..txt
INPUT = C:\Dev\########\########\WMS\########\WMSDev\Core\Product \
C:\Dev\########\########\WMS\########\WMSDev\WMSRep\Database\01_Tables
FILE_PATTERNS = *ClientAttributes*.md
RECURSIVE = YES
IMAGE_PATH = C:\Dev\########\########\WMS\########\WMSDev\Core\Product
FILTER_PATTERNS = *.md=C:\Dev\########\########\WMS\########\WMSDev\WMS\Utilities\python\doxy.md.imagemanual.py
USE_MDFILE_AS_MAINPAGE = Config.Main.md
MATHJAX_RELPATH = https://cdn.jsdelivr.net/npm/mathjax@2
LATEX_BATCHMODE = YES
INCLUDE_PATH = C:\Dev\########\V60\########\include
I have a folder of xx .csv timeseries that I want to graph and knit into a clean HTML document. I have ggplot code that produces the plot I want using a single timeseries.csv. However, when I try to put the bones of that ggplot code in a function inside a for loop to run each of the timeseries.csv files through the function, I get some plots with pretty different formatting.
Plot generated with my test ggplot code:
Plot generated with function and for loop:
Changes I'm trying to make to the ugly Rmd plot:
Nicely space the x-axis tick marks to whole mins (i.e. "11:14:00", "11:15:00")
Connect the data points (solved with subbing geom_line() with geom_path())
Example Rmd code below. Please note that the graphs produced still have nice formatting; I'm not sure how to reproduce this problem short of posting a 500-row dataframe. I also don't know how to post my Rmd code without SO using the formatting commands in this post, so I threw in 3 " characters around my header formatting and at the end of the code to disable it.
Edits and Updates
I am getting a persistent error: geom_path: Each group consists of only one observation. Do you need to adjust the group aesthetic?
As suggested by the commenters, I tried removing plot() and using createChlDiffPlot() directly, and also replacing plot() with print(). Both produce the same ugly plots as before.
Replaced geom_line() with geom_path(). The points are now connected! x-axis cluttering is still there.
Time variable is reading as hms num
Many thanks for any help on this!
```
---
title: "Chl Filtration"
output:
  flexdashboard::flex_dashboard:
    theme: yeti
    orientation: rows
editor_options:
  chunk_output_type: console
---
```{r setup}
library(flexdashboard)
library(dplyr)
library(ggplot2)
library(hms)
library(ggthemes)
library(readr)
library(data.table)
#### Example Data
df1 <- data.frame(Time = as_hms(c("11:22:33","11:22:34","11:22:35","11:22:38","11:23:00","11:23:01","11:23:02")),
Chl_ug_L_Up = c(0.2,0.1,0.25,-0.2,-0.3,-0.15,0.1),
Chl_ug_L_Down = c(0.5,0.4,0.3,0.2,0.1,0,-0.1))
df2 <- data.frame(Time = as_hms(c("08:02:33","08:02:34","08:02:35","08:02:40","08:02:42","08:02:43","08:02:49")),
Chl_ug_L_Up = c(-0.2,-0.1,-0.25,0.2,0.3,0.15,-0.1),
Chl_ug_L_Down = c(-0.1,0,0.1,0.2,0.3,0.4,0.1))
data_directory = "./" # data folder in R project folder in the real deal
output_directory = "./" # output graph directory in R project folder
write_csv(df1, file.path(data_directory, "SO_example_df1.csv"))
write_csv(df2, file.path(data_directory, "SO_example_df2.csv"))
#### Function to create graphs
createChlDiffPlot = function(aTimeSeriesFile, aFileName, aGraphOutputDirectory, aType)
{
  aFile_Mod = aTimeSeriesFile %<>%
    select(Time, Chl_ug_L_Up, Chl_ug_L_Down) %>%
    mutate(Chl_diff = Chl_ug_L_Up - Chl_ug_L_Down)
  one_plot = ggplot(data = aFile_Mod, aes(x = Time, y = Chl_diff)) + # tried adding 'group = 1' in aes to connect points
    geom_path(size = 1, color = "green") +
    geom_point(color = "green") +
    theme_gdocs() +
    theme(axis.text.x = element_text(angle = 45, hjust = 1),
          legend.title = element_blank()) +
    labs(x = "", y = "Chl Difference", title = paste0(aFileName, " - ", "Filtration"))
  one_graph_name = paste0(gsub(".csv", "", aFileName), "_", aType, ".pdf")
  ggsave(one_graph_name, one_plot, dpi = 600, width = 7, height = 5, units = "in", device = "pdf", aGraphOutputDirectory)
  return(one_plot)
}
"``` ### remove the quotes when running example
Plots - After Velocity Adjustment
=====================================" ### remove quotes when running example
```{r, fig.width=13.5, fig.height=5}
all_files_Filtration = list.files(data_directory, pattern = ".csv")
# Loop to plot function
for(file in 1 : length(all_files_Filtration))
{
  file_name = all_files_Filtration[file]
  one_file = fread(file.path(data_directory, file_name))
  # plot the time series again
  plot(createChlDiffPlot(one_file, file_name, output_directory, "Velocity_Paired"))
}
"``` #remove quotes when running example
```
I finally figured it out.
1) Replacing geom_line() with geom_path() connected the data points when rendered in Rmd.
2) df1$Time was formatted as a difftime object. When I looked at the dataframe in the global environment, it showed Time : hms num 11:11:09 ...., which made me think my format was OK, but when I ran class(df1$Time) I got [1] "hms" "difftime". With a quick google I found out difftime objects are not quite the same as hms, and my original Time was generated by subtracting times. I added a conversion to my mutate function:
  select(Time, Chl_ug_L_Up, Chl_ug_L_Down) %>%
  mutate(Chl_diff = Chl_ug_L_Up - Chl_ug_L_Down,
         Time = as_hms(Time)) # convert difftime object to hms
ggplot, I think, has some auto-formatting for hms variables, which is why the difftime variable was producing ugly, crowded x-axes.
I am trying to webscrape a website for information. I have saved the page I want to scrape as a .html file and opened it with Sublime Text, but there are some parts that cannot be displayed in a prettified way; I have the same problem when trying to use BeautifulSoup; see the picture below (I cannot really share the full code since it discloses private info).
Just feed the HTML as a multiline string to a BeautifulSoup object and use soup.prettify(). That should work. However, BeautifulSoup's default indentation is 2 spaces, so if you want a custom indent you can write up a little wrapper like this:
def indentPrettify(soup, indent=4):
    # where indent is the desired number of spaces as an int()
    pretty_soup = str()
    previous_indent = 0
    # iterate over each line of a prettified soup
    for line in soup.prettify().split("\n"):
        # returns the index of the opening html tag '<',
        # which also represents the number of spaces in the line's indentation
        current_indent = str(line).find("<")
        # str.find() will equal -1 when no '<' is found. This means the line is some kind
        # of text or script instead of an HTML element and should be treated as a child
        # of the previous line. Also, current_indent should never be more than previous + 1.
        if current_indent == -1 or current_indent > previous_indent + 2:
            current_indent = previous_indent + 1
        previous_indent = current_indent
        pretty_soup += writeOut(line, current_indent, indent)
    return pretty_soup

def writeOut(line, current_indent, desired_indent):
    new_line = ""
    spaces_to_add = (current_indent * desired_indent) - current_indent
    if spaces_to_add > 0:
        for i in range(spaces_to_add):
            new_line += " "
    new_line += str(line) + "\n"
    return new_line
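A quick sketch of how the wrapper might be used; the URL is only a placeholder for whatever page you have fetched or saved:

import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com").text
soup = BeautifulSoup(html, "html.parser")
# pretty-print with 4-space indentation instead of BeautifulSoup's default 2
print(indentPrettify(soup, indent=4))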
I wanted to extract an email message's content. It is HTML content, and I used BeautifulSoup to fetch the From, To and Subject. When fetching the body content, it fetches only the first line and leaves out the remaining lines and paragraphs.
What am I missing here? How do I read all the lines/paragraphs?
CODE:
email_message = mail.getEmail(unreadId)
print (email_message['From'])
print (email_message['Subject'])
if email_message.is_multipart():
    for payload in email_message.get_payload():
        bodytext = email_message.get_payload()[0].get_payload()
        if type(bodytext) is list:
            bodytext = ','.join(str(v) for v in bodytext)
else:
    bodytext = email_message.get_payload()[0].get_payload()
    if type(bodytext) is list:
        bodytext = ','.join(str(v) for v in bodytext)
print (bodytext)
parsedContent = BeautifulSoup(bodytext)
body = parsedContent.findAll('p').getText()
print body
Console:
body = parsedContent.findAll('p').getText()
AttributeError: 'list' object has no attribute 'getText'
When I use
body = parsedContent.find('p').getText()
It fetches the first line of the content and it is not printing the remaining lines.
Added
After getting all the lines from the HTML tag, I get a = symbol at the end of each line, and characters such as &nbsp; and < are also displayed. How can I overcome those?
Extracted text:
Dear first,All of us at GenWatt are glad to have xyz as a
customer. I would like to introduce myself as your Account
Manager. Should you = have any questions, please feel free to
call me at or email me at ash= wis#xyz.com. You
can also contact GenWatt on the following numbers: Main:
810-543-1100Sales: 810-545-1222Customer Service & Support:
810-542-1233Fax: 810-545-1001I am confident GenWatt will serve you
well and hope to see our relationship=
Let's inspect the result of soup.findAll('p')
python -i test.py
----------
import requests
from bs4 import BeautifulSoup
bodytext = requests.get("https://en.wikipedia.org/wiki/Earth").text
parsedContent = BeautifulSoup(bodytext, 'html.parser')
paragraphs = parsedContent.findAll('p')
----------
>> type(paragraphs)
<class 'bs4.element.ResultSet'>
>> issubclass(type(paragraphs), list)
True # It's a list
Can you see? It's a list of all paragraphs. If you want to access their content, you will need to iterate over the list or access an element by index, like a normal list.
>> # You can print all content with a for-loop
>> for p in paragraphs:
>> print p.getText()
Earth (otherwise known as the world (...)
According to radiometric dating and other sources of evidence (...)
...
>> # Or you can join all content
>> content = []
>> for p in paragraphs:
>> content.append(p.getText())
>>
>> all_content = "\n".join(content)
>>
>> print(all_content)
Earth (otherwise known as the world (...) According to radiometric dating and other sources of evidence (...)
Using a list comprehension, your code will look like:
parsedContent = BeautifulSoup(bodytext)
body = '\n'.join([p.getText() for p in parsedContent.findAll('p')])
When I use
body = parsedContent.find('p').getText()
It fetches the first line of the content and it is not printing the
remaining lines.
Doing parsedContent.find('p') is exactly the same as doing parsedContent.findAll('p')[0]:
>> parsedContent.findAll('p')[0].getText() == parsedContent.find('p').getText()
True
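As for the = at the end of each line mentioned in the edit: that looks like quoted-printable encoding of the email body. A hedged sketch of decoding it before parsing, assuming bodytext is the payload string from the question:

import quopri
from bs4 import BeautifulSoup

# decode quoted-printable artifacts such as trailing '=' soft line breaks and '=20' spaces
decoded = quopri.decodestring(bodytext.encode()).decode('utf-8', errors='replace')
parsedContent = BeautifulSoup(decoded, 'html.parser')
body = '\n'.join(p.getText() for p in parsedContent.findAll('p'))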
I have an HTML file that has the general design (some divs), and I need to fill these divs with some HTML code using a Ruby script.
Any suggestions?
Example:
I have page.html
<html>
<title>html Page</title>
<body>
<div id="main">
</div>
<div id="side">
</div>
</body>
</html>
and a Ruby script in which I collect some data, do some kind of processing on it, and want to present it in a nice format.**
So I want to fill the div whose id is main with some HTML code, so that it looks like this:
<html>
<title>html Page</title>
<body>
<div id="main">
<h1>you have 30 files in games folder</h1>
</div>
<div id="side">
</div>
</body>
</html>
** Why don't I use RoR? Because I don't want to build a web site; I just need to build a desktop tool whose presentation layer is HTML code interpreted by a browser, to avoid working with graphics libraries.
My problem isn't "how can I write to this HTML file"; I can handle that.
My problem is that if I want to create a table inside the main div, I would have to write the whole HTML code inside the Ruby script and print it to the HTML file. Is there any lib or gem I can tell that I want a table with 3 rows and 2 columns, and it generates the HTML code?
I historically have used ERB and REXML for things like this, since they both ship with Ruby (removing gem dependencies). You can combine one XML file (content) with one .erb file (for layout) and get simple merging. Here's a script I wrote for this (most of which is argument handling and extending REXML with some convenience methods):
USAGE = <<ENDUSAGE
Usage:
rubygen source_xml [-t template_file] [-o output_file]
-t,--template The ERB template file to merge (default: xml_name.erb)
-o,--output The output file name to write (default: template.txt)
If the template_file is named "somefile_XXX.yyy",
the output_file will default instead to "somefile.XXX"
ENDUSAGE
ARGS = {}
UNFLAGGED_ARGS = [ :source_xml ]
next_arg = UNFLAGGED_ARGS.first
ARGV.each{ |arg|
case arg
when '-t','--template'
next_arg = :template_file
when '-o','--output'
next_arg = :output_file
else
if next_arg
ARGS[next_arg] = arg
UNFLAGGED_ARGS.delete( next_arg )
end
next_arg = UNFLAGGED_ARGS.first
end
}
if !ARGS[:source_xml]
puts USAGE
exit
end
extension_match = /\.[^.]+$/
template_match = /_([^._]+)\.[^.]+$/
xml_file = ARGS[ :source_xml ]
template_file = ARGS[ :template_file] || xml_file.sub( extension_match, '.erb' )
output_file = ARGS[ :output_file ] || ( ( template_file =~ template_match ) ? template_file.sub( template_match, '.\\1' ) : template_file.sub( extension_match, '.txt' ) )
require 'rexml/document'
include REXML
class REXML::Element
# Find all descendant nodes with a specified tag name and/or attributes
def find_all( tag_name='*', attributes_to_match={} )
self.each_element( ".//#{REXML::Element.xpathfor(tag_name,attributes_to_match)}" ){}
end
# Find all child nodes with a specified tag name and/or attributes
def kids( tag_name='*', attributes_to_match={} )
self.each_element( "./#{REXML::Element.xpathfor(tag_name,attributes_to_match)}" ){}
end
def self.xpathfor( tag_name='*', attributes_to_match={} )
out = "#{tag_name}"
unless attributes_to_match.empty?
out << "["
out << attributes_to_match.map{ |key,val|
if val == :not_empty
"##{key}"
else
"##{key}='#{val}'"
end
}.join( ' and ' )
out << "]"
end
out
end
# A hash to tag extra data onto a node during processing
def _mydata
@_mydata ||= {}
end
end
start_time = Time.new
@xmldoc = Document.new( IO.read( xml_file ), :ignore_whitespace_nodes => :all )
@root = @xmldoc.root
@root = @root.first if @root.is_a?( Array )
end_time = Time.new
puts "%.2fs to parse XML file (#{xml_file})" % ( end_time - start_time )
require 'erb'
File.open( output_file, 'w' ){ |o|
start_time = Time.new
output_code = ERB.new( IO.read( template_file ), nil, '>', 'output' ).result( binding )
end_time = Time.new
puts "%.2fs to run template (#{template_file})" % ( end_time - start_time )
start_time = Time.new
o << output_code
}
end_time = Time.new
puts "%.2fs to write output (#{output_file})" % ( end_time - start_time )
puts " "
This can be used for HTML or automated source code generation alike.
However, these days I would advocate using Haml and Nokogiri (if you want structured XML markup) or YAML (if you want simple-to-edit content), as these will make your markup cleaner and your template logic simpler.
Edit: Here's a simpler file that merges YAML with Haml. The last four lines do all the work:
#!/usr/bin/env ruby
require 'yaml'; require 'haml'; require 'trollop'
EXTENSION = /\.[^.]+$/
opts = Trollop.options do
banner "Usage:\nyamlhaml [opts] <sourcefile.yaml>"
opt :haml, "The Haml file to use (default: sourcefile.haml)", type:String
opt :output, "The file to create (default: sourcefile.html)", type:String
end
opts[:source] = ARGV.shift
Trollop.die "Please specify an input Yaml file" unless opts[:source]
Trollop.die "Could not find #{opts[:source]}" unless File.exist?(opts[:source])
opts[:haml] ||= opts[:source].sub( EXTENSION, '.haml' )
opts[:output] ||= opts[:source].sub( EXTENSION, '.html' )
Trollop.die "Could not find #{opts[:haml]}" unless File.exist?(opts[:haml])
@data = YAML.load(IO.read(opts[:source]))
File.open( opts[:output], 'w' ) do |output|
output << Haml::Engine.new(IO.read(opts[:haml])).render(self)
end
Here's a sample YAML file:
title: Hello World
main: "<h1>you have 30 files in games folder</h1>"
side: "I dunno, something goes here."
...and a sample Haml file:
!!! 5
%html
  %head
    %title= @data['title']
  %body
    #main= @data['main']
    #side= @data['side']
...and finally the HTML they produce:
<!DOCTYPE html>
<html>
<head>
<title>Hello World</title>
</head>
<body>
<div id='main'><h1>you have 30 files in games folder</h1></div>
<div id='side'>I dunno, something goes here.</div>
</body>
</html>
Are you trying to create a dynamic website? For that use Rails.
Are you trying to create a static website? Something like Jekyll is probably best.
Are you trying to just create some simple .html files you can FTP up somewhere? Jekyll might be a good option, or even hand-coding a quick little HTML generator might be better.
UPDATE:
Is this what you are looking for?
hash = {
:games => "you have 30 files in games folder",
:puppies => "you have 12 puppies in your pocket",
:pictures => "You have 9 files in pictures folder",
}
array = [
['run','x','y'],
[1,10,3],
[2,12,9],
[3,14,7],
]
hash.each do |key, value|
  myfile = File.new("#{key}.html", "w+")
  myfile.puts "<html>"
  myfile.puts "<title>html Page</title>"
  myfile.puts "<body>"
  myfile.puts "<div id=\"main\">"
  myfile.puts "<h1>#{value}</h1>"
  myfile.puts "<table border=\"1\">"
  array.each do |row|
    myfile.puts "<tr>"
    row.each do |cell|
      myfile.puts "<td> #{cell} </td>"
    end
    myfile.puts "</tr>"
  end
  myfile.puts "</table>"
  myfile.puts "</div>"
  myfile.puts "<div id=\"side\">"
  myfile.puts "</div>"
  myfile.puts "</body>"
  myfile.puts "</html>"
  myfile.close
end
Continuing from @Phrogz's work, the ERB idea is a great one. I was able to use it to build a simple Rake script that does the work for me. I find this approach to be a little easier.
rakefile.rb
task :default => :generate

task :generate do
  require 'erb'
  template_file = "page.erb"
  output_file = "page.html"
  File.open(output_file, 'w') do |o|
    puts "Processing file: #{template_file}"
    o << ERB.new( IO.read( template_file ), nil, '>', 'output' ).result( binding )
  end
end

def render(file)
  puts "Rendering file: #{file}"
  IO.read(file)
end

$game_count = 30

def game_count
  puts "Rendering game count: #{$game_count}"
  $game_count
end
page.erb
<html>
<title>html Page</title>
<body>
<div id="main">
<h1>you have <%= game_count %> files in games folder</h1>
</div>
<div id="side">
<%= render "side.html" %>
</div>
</body>
</html>
side.html
<ul class="side">
<li>Side item 1</li>
<li>Side item 2</li>
</ul>
Running it
$ rake
Processing file: page.erb
Rendering game count: 30
Rendering file: side.html
Newly created file page.html
<html>
<title>html Page</title>
<body>
<div id="main">
<h1>you have 30 files in games folder</h1>
</div>
<div id="side">
<ul class="side">
<li>Side item 1</li>
<li>Side item 2</li>
</ul>
</div>
</body>
</html>