We are trying to build dashboard reports with Superset, which we have installed. We want to build the reports on top of Impala tables. When configuring the datasource, I used the SQLAlchemy URI below:
impala://host:port/dbname
I get the error below when testing the connection (I have masked the hostname for security reasons).
We have already installed impyla:
pip install impyla
Collecting impyla
Downloading impyla-0.14.0.tar.gz (151kB)
100% |████████████████████████████████| 153kB 4.7MB/s
Requirement already satisfied: six in ./venv/lib/python2.7/site-packages (from impyla)
Collecting bitarray (from impyla)
Downloading bitarray-0.8.1.tar.gz (46kB)
100% |████████████████████████████████| 51kB 5.8MB/s
Requirement already satisfied: thrift in ./venv/lib/python2.7/site-packages (from impyla)
Building wheels for collected packages: impyla, bitarray
Running setup.py bdist_wheel for impyla ... done
Stored in directory: /root/.cache/pip/wheels/96/fa/d8/40e676f3cead7ec45f20ac43eb373edc471348ac5cb485d6f5
Running setup.py bdist_wheel for bitarray ... done
Stored in directory: /root/.cache/pip/wheels/46/63/90/821699390044b2d0c5f2a01f275115e240bd06f0edc6c6a19b
Successfully built impyla bitarray
Installing collected packages: bitarray, impyla
Successfully installed bitarray-0.8.1 impyla-0.14.0
Please let me know if I am missing anything else needed to configure Impala for use with Superset.
The easiest way to get everything working is to use the docker image (https://hub.docker.com/r/amancevice/superset/).
If you want to install everything yourself, you can check the list of requirements directly in the official Dockerfile (https://hub.docker.com/r/amancevice/superset/~/dockerfile/).
In particular, see:
apt-get install -y \
build-essential \
curl \
default-libmysqlclient-dev \
libffi-dev \
libldap2-dev \
libpq-dev \
libsasl2-dev \
libssl-dev \
openjdk-8-jdk \
python3-dev \
python3-pip && \
apt-get clean && \
rm -r /var/lib/apt/lists/* && \
pip3 install --no-cache-dir \
flask-cors==3.0.3 \
flask-mail==0.9.1 \
flask-oauth==0.12 \
flask_oauthlib==0.9.3 \
gevent==1.2.2 \
impyla==0.14.0 \
mysqlclient==1.3.7 \
psycopg2==2.6.1 \
pyathenajdbc==1.2.0 \
pyhive==0.5.0 \
pyldap==2.4.28 \
redis==2.10.5 \
sqlalchemy-redshift==0.5.0 \
sqlalchemy-clickhouse==0.1.1.post3 \
Werkzeug==0.12.1 \
superset==${SUPERSET_VERSION}
Also, in the Database field write my_db.my_table, and in the SQLAlchemy URI field write impala://host:port.
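As a sanity check before configuring Superset, it can help to confirm the URI string is well-formed. A minimal sketch, using hypothetical placeholder values (21050 is Impala's default HiveServer2-compatible port):

```shell
# Build the SQLAlchemy URI Superset expects for Impala.
# HOST, PORT and DB below are hypothetical placeholders.
HOST="impala-host.example.com"
PORT=21050
DB="default"
URI="impala://${HOST}:${PORT}/${DB}"
echo "$URI"
```

Pasting the resulting string into the SQLAlchemy URI field should then be enough, provided impyla is installed in the same environment as Superset.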
I would like some notifications from Dependabot when new versions are available. These are my two scenarios:
#1 I'm starting a docker container in a bash script like this.
local DOCKER_IMAGE="hashicorp/terraform:1.3.6"
docker run --rm \
--volume /etc/passwd:/etc/passwd:ro \
--volume /etc/group:/etc/group:ro \
--user "$(id -u):$(id -g)" \
--volume /etc/timezone:/etc/timezone:ro \
--volume /etc/localtime:/etc/localtime:ro \
--volume "$(pwd):$(pwd)" \
--workdir "$(pwd)" \
--env "GITHUB_TOKEN=$TOKEN" \
--env "TF_VAR_bw_client_id=$BW_CLIENT_ID" \
--env "TF_VAR_bw_client_secret=$BW_CLIENT_SECRET" \
--env "TF_VAR_bw_password=$BW_MASTER_PASS" \
"$DOCKER_IMAGE" "$@"
For other dependencies, like Docker images, I use Dependabot to raise a PR when I should update my base image. Is there a way to have Dependabot inspect bash scripts like the snippet above as well? I'd like a notification when there is a new Terraform version.
#2 I'm installing the Bitwarden CLI into a custom Terraform image (needed for the Bitwarden provider). This works fine, but since I pin the Bitwarden CLI version in my Dockerfile, I would like Dependabot to raise PRs for the Bitwarden CLI as well.
FROM hashicorp/terraform:1.3.6
LABEL maintainer="sebastian@sommerfeld.io"
# Install basics
RUN apk update \
&& apk --no-cache add curl=7.86.0-r1 \
&& apk --no-cache add unzip=6.0-r13
# Install Bitwarden CLI + dependencies
ARG BW_VERSION="2022.11.0"
RUN apk --no-cache add libc6-compat=1.2.3-r4 \
&& apk --no-cache add gcompat=1.1.0-r0 \
&& apk --no-cache add libgcc=12.2.1_git20220924-r4 \
&& apk --no-cache add libstdc++=12.2.1_git20220924-r4 \
&& rm -rf /var/cache/apk/* \
&& curl -sL https://github.com/bitwarden/clients/releases/download/cli-v${BW_VERSION}/bw-linux-${BW_VERSION}.zip -o bw.zip \
&& unzip bw.zip \
&& chmod +rx bw \
&& rm bw.zip \
&& mv bw /usr/local/bin
Is it possible to use Dependabot for these scenarios as well?
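As far as I know, Dependabot has no native updater for version strings embedded in arbitrary bash scripts, so one stopgap is a scheduled CI job that extracts the pins itself and compares them against upstream releases. A minimal extraction sketch (the upstream comparison is omitted, and the sample lines are copied from the snippets above):

```shell
# Extract the pinned versions from the wrapper script and the Dockerfile
# so a scheduled job could diff them against the latest releases.
script_line='local DOCKER_IMAGE="hashicorp/terraform:1.3.6"'
dockerfile_line='ARG BW_VERSION="2022.11.0"'
tf_version=$(echo "$script_line" | sed -E 's/.*:([0-9.]+)".*/\1/')
bw_version=$(echo "$dockerfile_line" | sed -E 's/.*"([^"]+)".*/\1/')
echo "terraform=$tf_version bitwarden=$bw_version"
```

In practice the two `*_line` variables would come from grepping the real files rather than being hard-coded.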
We have a Dockerfile that worked as late as 22 December 2020, but all of a sudden it crashes at runtime if we build the same Dockerfile again, and the exception is:
PuppeteerSharp.ProcessException: Failed to launch Base! /app/.local-chromium/Linux-706915/chrome-linux/chrome: error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory
This is the relevant part of the Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
#Excluded since it is not relevant
#####################
#PUPPETEER RECIPE
#####################
RUN apt-get update && apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
--no-install-recommends \
&& curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update && apt-get install -y \
google-chrome-beta \
fontconfig \
fonts-ipafont-gothic \
fonts-wqy-zenhei \
fonts-thai-tlwg \
fonts-kacst \
fonts-symbola \
fonts-noto \
fonts-freefont-ttf \
--no-install-recommends \
&& apt-get purge --auto-remove -y curl gnupg \
&& rm -rf /var/lib/apt/lists/*
#####################
#END PUPPETEER RECIPE
#####################
ENV PUPPETEER_EXECUTABLE_PATH "/usr/bin/google-chrome-beta"
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Our.File.dll"]
I'm by no means an experienced Docker/Linux developer, but we have had this in production, working well, for almost a year now.
When searching for the problem we found many things to try. Among the things I tried, all of which failed, are these:
https://stackoverflow.com/a/64293743/6743788
Manually adding dependencies (tried it before and after our RUN apt-get above):
RUN apt-get update && apt-get install -y \
gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 \
libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 \
libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 \
libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 \
ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget
This suggestion was first found here: https://medium.com/@ssmak/how-to-fix-puppetteer-error-while-loading-shared-libraries-libx11-xcb-so-1-c1918b75acc3
When watching the build output, we noticed that most of the dependencies already existed with the latest version.
Tried to specify an older version of Chrome (we tried a couple of different versions):
#####################
#PUPPETEER RECIPE
#####################
ARG CHROME_VERSION="81.0.4044.138-1"
RUN apt-get update && apt-get -f install && apt-get -y install wget gnupg2 apt-utils
RUN wget --no-verbose -O /tmp/chrome.deb http://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_${CHROME_VERSION}_amd64.deb \
&& apt-get update \
&& apt-get install -y /tmp/chrome.deb --no-install-recommends --allow-downgrades fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
&& rm /tmp/chrome.deb
#####################
#END PUPPETEER RECIPE
#####################
Tried the previous two approaches together.
Also tried adding libgbm-dev to the dependency list, because we found that suggested somewhere.
I have tried to verify that the files exist by running these commands (output included) inside the container:
root@5c47052da1d8:/app# dpkg-query -L libx11-6
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
/usr/share
/usr/share/doc
/usr/share/doc/libx11-6
/usr/share/doc/libx11-6/NEWS.Debian.gz
/usr/share/doc/libx11-6/NEWS.gz
/usr/share/doc/libx11-6/changelog.Debian.gz
/usr/share/doc/libx11-6/changelog.gz
/usr/share/doc/libx11-6/copyright
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/libx11-6
/usr/lib/x86_64-linux-gnu/libX11.so.6
root@5c47052da1d8:/app# ls -la /usr/lib/x86_64-linux-gnu/libX11.so.6
lrwxrwxrwx 1 root root 15 Sep 11 16:16 /usr/lib/x86_64-linux-gnu/libX11.so.6 -> libX11.so.6.3.0
root@5c47052da1d8:/app# ldd libX11.so.6
ldd: ./libX11.so.6: No such file or directory
root@5c47052da1d8:/app# ldd /usr/lib/x86_64-linux-gnu/libX11.so.6
linux-vdso.so.1 (0x00007ffc432b3000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007fe8b0ad2000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe8b0acd000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe8b090c000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007fe8b0708000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007fe8b0502000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe8b0c45000)
libbsd.so.0 => /usr/lib/x86_64-linux-gnu/libbsd.so.0 (0x00007fe8b04e8000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe8b04dc000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe8b04bb000)
I have read https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md
Any help would be greatly appreciated, because I've got no clue how to solve this or what to try next.
So, I found the problem and am documenting it here in case it happens to someone else. It turned out to be down to how Visual Studio does the building now.
If I right-click and build the Dockerfile in Visual Studio, the result is correct, just as my investigation above showed. The problem is that when I wanted to test the image, I ran it with F5 (or Ctrl+F5) in VS, and in that case Visual Studio does not build my Dockerfile by default. I thought it used my recently built (cached) result, but it actually uses a different cached result. For performance reasons it builds the project locally, takes that result, and adds it to the aspnet:3.1-buster-slim image, which means my custom dependencies are not added.
This behaviour can be controlled by setting ContainerDevelopmentMode in the project file. The default value is Fast, which does not use my Dockerfile; setting it to Regular does, at the cost of a slower start-up.
Documentation of this and other settings can be found here: https://learn.microsoft.com/en-us/visualstudio/containers/container-msbuild-properties?view=vs-2019
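For reference, a sketch of what that setting looks like in the .csproj, based on the linked Microsoft documentation:

```xml
<PropertyGroup>
  <!-- Fast (the default) skips the full Dockerfile build on F5; Regular runs it. -->
  <ContainerDevelopmentMode>Regular</ContainerDevelopmentMode>
</PropertyGroup>
```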
I need to test AngularDart components in Chrome. The tests should be executed in a GitLab CI job. How can I achieve this?
To achieve this you can:
Create a new Docker image with Chrome and Dart
Upload this image to the GitLab container registry
Use this image in a GitLab pipeline job
Here is the Dockerfile:
FROM google/dart:2.5.0
USER root
# Install deps + add Chrome Stable + purge all the things
RUN apt-get update && apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
unzip \
zip \
--no-install-recommends \
&& curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update && apt-get install -y \
google-chrome-stable \
--no-install-recommends \
&& apt-get purge --auto-remove -y curl gnupg \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /
RUN mkdir chromedriver && cd chromedriver \
&& wget https://chromedriver.storage.googleapis.com/2.35/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& rm chromedriver_linux64.zip \
&& ln -s /usr/bin/google-chrome-stable /usr/bin/chrome
ENV CHROME_DRIVER_PATH=/chromedriver/chromedriver
And here is job:
build_web:
stage: client_build
image: registry.gitlab.com/your_org/your_proj/image_name
script:
- pub get
- pub run build_runner test --fail-on-severe --define "build_web_compilers|entrypoint=compiler=dart2js" --delete-conflicting-outputs -- -p chrome
- pub run build_runner build --define "build_web_compilers|entrypoint=compiler=dart2js" --delete-conflicting-outputs --output web:build
only:
- master
I'm trying to set up a container to test with Robot Framework on Chrome.
But when I run my container I keep getting a WebDriverException.
I've searched but couldn't find any fix that actually works for me.
This is my Dockerfile
FROM python:3
RUN apt-get update -y
# Dependencies
RUN apt-get install -y \
apt-utils \
build-essential \
fonts-liberation \
gconf-service \
libappindicator1 \
libasound2 \
libcurl3 \
libffi-dev \
libgconf-2-4 \
libindicator7 \
libnspr4 \
libnss3 \
libpango1.0-0 \
libssl-dev \
libxss1 \
python-dev \
python-pip \
python-pyasn1 \
python-pyasn1-modules \
unzip \
wget \
xdg-utils \
xvfb \
libappindicator3-1 \
libatk-bridge2.0-0 \
libgtk-3-0 \
lsb-release
# Install Chrome for Selenium
RUN curl https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb -o /chrome.deb
RUN dpkg -i /chrome.deb || apt-get install -yf
RUN rm /chrome.deb
# Install chromedriver for Selenium
RUN curl https://chromedriver.storage.googleapis.com/2.42/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /usr/local/bin
RUN chmod +x /usr/local/bin/chromedriver
WORKDIR /home
COPY . .
RUN pip install -e .
CMD [ "pybot","./tests/test.robot" ]
This is the error I keep getting
WebDriverException: Message: unknown error: Chrome failed to start:
exited abnormally (unknown error: DevToolsActivePort file doesn't
exist) (The process started from chrome location
/usr/bin/google-chrome is no longer running, so ChromeDriver is
assuming that Chrome has crashed.) (Driver info:
chromedriver=2.42.591071
(0b695ff80972cc1a65a5cd643186d2ae582cd4ac),platform=Linux
4.15.0-34-generic x86_64)
My test.robot:
*** Settings ***
Library Selenium2Library
*** Variables ***
*** Test Cases ***
Connect
Open Browser https://google.es Chrome
I think I am missing something, but I just don't know what.
In my setup.py:
install_requires=[
'robotframework',
'robotframework-selenium2library',
'selenium'
]
I ran into this issue recently, using a Docker container on Amazon Linux to run Robot tests. I found that even though I added the required arguments within the Robot Framework test, as in the example below, Chrome was crashing without even starting, with the same message you received. I resolved the issue by updating the Python settings in options.py within the container.
I updated my Docker container with the command below to set the options in the Python Selenium Chrome WebDriver options.py file. In my case I'm using Python 3.7, so make sure the path you use matches your version.
RUN sed -i "s/self._arguments\ =\ \[\]/self._arguments\ =\ \['--no-sandbox',\ '--disable-dev-shm-usage'\]/" /usr/local/lib/python3.7/site-packages/selenium/webdriver/chrome/options.py
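To see what that sed expression does, you can feed it a sample of the target line in isolation (the sample line below approximates options.py; the real file's indentation may differ):

```shell
# Run the same sed substitution against a sample of the target line,
# replacing the empty default argument list with the two Chrome flags.
line='        self._arguments = []'
echo "$line" | sed "s/self._arguments\ =\ \[\]/self._arguments\ =\ \['--no-sandbox',\ '--disable-dev-shm-usage'\]/"
```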
Example Robot file: this is what I tried within Robot Framework that didn't fix the problem.
${chrome_options} = Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument headless
Call Method ${chrome_options} add_argument disable-gpu
Call Method ${chrome_options} add_argument no-sandbox
Call Method ${chrome_options} add_argument disable-dev-shm-usage
${options}= Call Method ${chrome_options} to_capabilities
${options}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
open browser about:blank ${BROWSER} desired_capabilities=${options}
I'm not sure if this will address your issue. You could try updating the file manually, before updating your container, to see if it helps. I spent a lot of time troubleshooting this; it would be great if the error were a bit more descriptive.
Good luck.
Please change the file permissions; then it will work.
from
RUN chmod +x /usr/local/bin/chromedriver
to
RUN chmod 777 /usr/local/bin/chromedriver
I had the same issue and the code below fixed it:
*** Settings ***
Library Selenium2Library
*** Variables ***
${URL} https://www.google.com
${CHROMEDRIVER_PATH} /usr/local/bin/chromedriver
*** Keywords ***
Open Website
${chrome_options}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument --no-sandbox
Call Method ${chrome_options} add_argument --headless
Open Browser ${URL} chrome options=${chrome_options} executable_path=${CHROMEDRIVER_PATH}
*** Settings ***
Suite Setup Open Website
This is my Dockerfile, which I have tested with GitLab CI:
FROM python:3.9.6-buster
ADD ./requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
RUN apt-get update && apt-get install -y xvfb wget unzip libnss3-tools
RUN echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN apt-get update && apt-get install -y google-chrome-stable
RUN wget -q https://chromedriver.storage.googleapis.com/91.0.4472.101/chromedriver_linux64.zip -O /tmp/chromedriver_linux64.zip \
&& unzip -qq /tmp/chromedriver_linux64.zip -d /usr/local/bin \
&& rm /tmp/chromedriver_linux64.zip \
&& chmod +x /usr/local/bin/chromedriver
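Note that since Chrome/chromedriver 73 the two major versions are expected to match, and a mismatch (e.g. chromedriver 2.42 with a current Chrome) is a common cause of these startup crashes. A quick sanity-check sketch, where the version strings are hypothetical sample outputs of `google-chrome-stable --version` and `chromedriver --version`:

```shell
# Compare the major version of Chrome and chromedriver.
chrome_ver="Google Chrome 91.0.4472.114"
driver_ver="ChromeDriver 91.0.4472.101"
chrome_major=$(echo "$chrome_ver" | grep -oE '[0-9]+' | head -n1)
driver_major=$(echo "$driver_ver" | grep -oE '[0-9]+' | head -n1)
if [ "$chrome_major" = "$driver_major" ]; then
  echo "versions match ($chrome_major)"
else
  echo "mismatch: chrome $chrome_major vs chromedriver $driver_major"
fi
```

In an image like the one above, you would substitute the real command outputs for the sample strings.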
You can find more information on running Robot Framework with Docker here:
https://github.com/dylanops/docker-robot-framework
I am trying to run tests in Docker; they currently run on a Jenkins slave, i.e. on bare metal. To make the setup more portable I am trying to get Robot Framework running inside a container, but so far with no luck.
This is my Dockerfile to create the robot image:
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends \
dbus \
libgtk2.0-0 \
libgconf-2-4 \
libnss3 \
nginx \
python3 \
python3-pip \
xvfb
# google chrome requirements
RUN apt-get install -y --no-install-recommends \
fonts-liberation \
gconf-service \
libappindicator1 \
lsb-release \
wget \
libasound2 \
libatk-bridge2.0-0 \
libgtk-3-0 \
libxss1 \
libxtst6 \
xdg-utils
COPY google-chrome-stable_current_amd64.deb /tmp
RUN dpkg --install /tmp/google-chrome-stable_current_amd64.deb && \
apt-get install -f -y && \
rm /tmp/google-chrome-stable_current_amd64.deb
# update pip & install robot framework
RUN pip3 install --upgrade pip \
setuptools && \
pip3 install robotframework \
robotframework-selenium2library \
robotframework-xvfb
COPY chromedriver /usr/local/bin
RUN chmod +x /usr/local/bin/chromedriver
WORKDIR /usr/src/robot
CMD /etc/init.d/nginx start && python3 -m robot --include ready BasicTest.robot
Then I run the test image by calling:
$ docker run -ti --rm -v "$PWD/src/test/robot-framework":/usr/src/robot -v "$PWD/dist":/var/www/html:ro --add-host databasehost:10.10.10.10 robot
src/test/robot-framework contains the BasicTest.robot file, and dist is the build output folder of the Angular-based project I want to test.
The messages I get, after quite a long wait, look like this:
Correct Login :: *Description:* | FAIL |
Setup failed:
WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.35.528139 (47ead77cb35ad2a9a83248b292151462a66cd881),platform=Linux 4.9.60-linuxkit-aufs x86_64)
I start the virtual display in the BasicTest.robot by:
Start Virtual Display 1024 768
${chrome_options}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument headless
Call Method ${chrome_options} add_argument disable-gpu
Call Method ${chrome_options} add_argument no-sandbox
Open Browser #{url} #{browser} --auto-ssl-client-auth
I am not sure what the purpose of your project is, but here is my Dockerfile that uses Puppeteer, a Google library that wraps headless Chrome and makes it easier to use.
Here you can find an official example, and here is the official documentation of Puppeteer, which has great community support.
This is my Dockerfile; it runs an npm script after building a container with Puppeteer, Node and the Chromium dependencies.
# Most part taken from https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
FROM node:8-slim
# Manually install missing shared libs for Chromium.
RUN apt-get update && \
    apt-get install -yq gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 \
    libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 \
    libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 \
    libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 \
    ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget
COPY . /app/
WORKDIR /app
# Install deps for server.
RUN npm install -g
# Install puppeteer so it can be required by user code that gets run in
# server.js. Cache bust so we always get the latest version of puppeteer when
# building the image.
ARG CACHEBUST=1
RUN npm install puppeteer@0.13.0
# Add pptr user.
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /app
# Run user as non privileged.
USER pptruser
EXPOSE 9222
CMD ["npm", "start"]
Inside that npm start script you can run your tests, I guess.
The trick with using Puppeteer inside a Docker container is that Docker does not install the Chromium dependencies automatically, so it fails when you try to use it.
Whether or not you use Puppeteer, this container lets you run headless Chrome inside it, and in my opinion it is lighter than the other examples I found in the documentation.
To run it, just type:
docker run -d -i --rm --cap-add=SYS_ADMIN --network=${yourNetwork} --name ${container_name} ${image_name}
Hope that helps