Can't get the boot disk to 200GB [closed] - google-compute-engine

I tried creating the disk first via gcutil adddisk and then assigning it to the VM when running gcutil addinstance with the --disk flag. However, this method still results in a 10GB partition even though I set it to 200GB on adddisk.
Here is the disk itself:
INFO: Zone for db2-usc1a detected as us-central1-a.
+-----------------+--------------------------------------------------------+
| name | db2-usc1a |
| description | |
| creation-time | 2014-06-11T22:45:39.654-07:00 |
| zone | us-central1-a |
| status | READY |
| source-snapshot | |
| source-image | projects/centos-cloud/global/images/centos-6-v20140606 |
| source-image-id | 6290630306542078308 |
| size-gb | 200 |
+-----------------+--------------------------------------------------------+
But, as you can see, running df -h displays it as 9.9GB:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 4.3G 5.1G 46% /
tmpfs 7.3G 0 7.3G 0% /dev/shm
I have also tried to follow these instructions here: https://developers.google.com/compute/docs/disks#repartitionrootpd
However, on reboot, the VM becomes inaccessible so I can't even SSH back onto the machine.
Why is Google enforcing a 10GB image on boot? Why is it not being set to the size I have requested? More importantly, is there a way I can automate this process for our build scripts?

One option is to use Persistent Disk Snapshots:
resize the disk
create a snapshot of the disk
in your build scripts, create new PDs from your snapshot instead of the default image
The new disks will be 200GB. Snapshots only save blocks which have changed, so the snapshot itself should be fairly small.
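A rough sketch of that workflow using the gcutil commands of the era (gcutil is long deprecated, so treat the subcommand and flag names below as approximate and double-check them against gcutil help or the newer gcloud equivalents; the snapshot and disk names are placeholders):
# 1. Prepare the 200GB disk once (resize the filesystem as in the next answer),
#    then snapshot it.
gcutil addsnapshot db2-base-snap --source_disk=db2-usc1a --zone=us-central1-a
# 2. In the build scripts, create each new boot disk from the snapshot
#    instead of from the CentOS image, and boot an instance from it.
gcutil adddisk db2-node1 --source_snapshot=db2-base-snap --size_gb=200 --zone=us-central1-a
gcutil addinstance db2-node1 --disk=db2-node1,boot --zone=us-central1-a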

As the previous answer suggests, resize the disk's filesystem. For those who don't know how to do this:
sudo /sbin/resize2fs /dev/sda1
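On the CentOS 6 images of that era the root partition itself is still 10GB, so resize2fs alone only helps once the partition has been grown (which is what the repartitioning instructions linked in the question do). A hedged sketch of the full sequence, assuming the cloud-utils growpart tool is available (it may need to be installed separately on CentOS 6); the same commands can go into a startup script to automate this for build machines:
# Grow partition 1 of /dev/sda to fill the 200GB disk
sudo growpart /dev/sda 1
# On older kernels a reboot may be needed before the new partition table is seen,
# then grow the ext filesystem to fill the partition
sudo /sbin/resize2fs /dev/sda1
# Verify
df -h /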

Related

Alternative or better architecture of sass guidelines to build multipage site

At https://sass-guidelin.es/#architecture, the advised best practice for a site is to build an architecture like the one below, using partials and having only one CSS file named main.css (the only file that isn't a partial, and thus the single .css output by the preprocessor):
sass/
|
|– abstracts/
| |– _variables.scss # Sass Variables
| |– _functions.scss # Sass Functions
| |– _mixins.scss # Sass Mixins
| |– _placeholders.scss # Sass Placeholders
|
|– base/
| |– _reset.scss # Reset/normalize
| |– _typography.scss # Typography rules
| … # Etc.
|
|– components/
| |– _buttons.scss # Buttons
| |– _carousel.scss # Carousel
| |– _cover.scss # Cover
| |– _dropdown.scss # Dropdown
| … # Etc.
|
|– layout/
| |– _navigation.scss # Navigation
| |– _grid.scss # Grid system
| |– _header.scss # Header
| |– _footer.scss # Footer
| |– _sidebar.scss # Sidebar
| |– _forms.scss # Forms
| … # Etc.
|
|– pages/
| |– _home.scss # Home specific styles
| |– _contact.scss # Contact specific styles
| … # Etc.
|
|– themes/
| |– _theme.scss # Default theme
| |– _admin.scss # Admin theme
| … # Etc.
|
|– vendors/
| |– _bootstrap.scss # Bootstrap
| |– _jquery-ui.scss # jQuery UI
| … # Etc.
|
`– main.scss # Main Sass file
My strong doubt is that this way there will be only one big main.css file that also embraces the CSS of the other pages (so there is only one main.css to download, but it could be significantly big), and second, it will be very difficult to avoid conflicts (a simple example: the same id used on two different pages, home and contact, with different rules).
My thinking is that it would be much better to build home.scss and contact.scss (not partials) and so on, so that there are two different (in my simple example) .scss files that each import their specific partials, and each page of the site links a different CSS file.
Am I completely off track, or have I perhaps misunderstood the meaning of the guidelines, so that my design idea is actually fine?
I can think of three reasons why you may not want to use the approach of using one stylesheet per page:
1) Caching
One main.css vs n <page>.css. You're either loading one file for the entire site, or one file per page. The consideration here might be first load performance vs subsequent page load performance. In my experience CSS files are not massive, so I'd venture to say that the performance gains from splitting your main.css file are negligible.
2) Redundant CSS
If you'd like _grid.scss on every page of your site, then you're loading it separately each time. The same CSS could be loaded by the user n times, where n is the number of pages they view. This may add up to a greater size/time than loading one main.css.
3) Developer experience
Every time you create a page you must include its specific dependencies (i.e., @import). Creating and maintaining this process may introduce unwanted results/complexity in your CSS. (A sketch comparing the two entry-point strategies follows.)
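As a minimal sketch of the two strategies (the partial names are taken from the tree in the question; home.scss and contact.scss are the hypothetical per-page entry files the asker proposes):
// Guideline approach: one entry point, one main.css for the whole site
// main.scss
@import 'abstracts/variables';
@import 'base/typography';
@import 'components/buttons';
@import 'pages/home';
@import 'pages/contact';

// Per-page approach: one entry point per page, each compiled to its own CSS file
// home.scss
@import 'abstracts/variables';
@import 'base/typography';
@import 'components/buttons';
@import 'pages/home';
// contact.scss would repeat the shared imports and pull in 'pages/contact' instead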
First of all: best practices are formed to
build optimally running pages
optimize your workflow
save time in your work
use tested (= working) code solutions
avoid code conflicts
In most cases there are conflicts between these different goals ... and it is up to you to find the solution which is the best compromise for your work and your project.
A. One or multiple CSS files
Indeed, for the user experience, (one of) the most critical attributes of a page is its load time. For a page at Amazon/eBay/Google, hundredths of a second can decide about millions in revenue. That is no joke; there are very interesting tests they have done on that. So it is not surprising that page-optimization tools (e.g. the ones offered in the Chrome browser) look at page speed first.
For your CSS that means: the best-practice solution is the method that speeds up your page, measured in milliseconds. The most critical property for achieving that is the number of files your page needs to load: the time needed to connect the browser to the server (per file!) matters more than the download time itself.
So, simply optimized pages bundle code together and try to load only one CSS/JS file. Slightly more optimized pages not only compress the code but also remove CSS classes the project does not need. The best-optimized pages go one step further: they inline as much CSS/JS code as possible directly into the HTML file. Just have a look at the source code of Google's pages ... Google tried to push that technique to developers with its 'Accelerated Mobile Pages' (AMP) project for some time ... (Not mentioned yet is the next step of loading code only when it is needed, e.g. by using React.)
But that is not the best practice for every project, as other factors can weigh more heavily when finding the best-practice compromise for a project. The fastest load time is (really) important for e-commerce but not as much for, say, simple landing, product, or company pages (it depends on the project). In that case it's the other way around: if resources are restricted, the work saved in building the page matters more than page speed. And speed optimization costs a lot of resources.
In that case, best practice is to use tested code, ready-to-use modules, and layouts that work out of the box with a few small adaptations. That means best practice now really is to speed up building the page. You do that by using big frameworks like Bootstrap, which generate a huge number of unneeded classes that have to be loaded into your project (hopefully in only one file ...) and which cannot be inlined into the HTML because of their size. And you use CMS systems like WordPress and add modules. That way you add content faster and your customer is able to do it on their own. But that means nearly every module adds its own CSS/JS, mostly in separate files ... Of course you could optimize that ... but that means one more module that uses a cache, makes coding and debugging more complicated, and needs more resources ... and since your customer restricts the resources, the best-practice compromise in that case is a project with a lot of added CSS/JS files and a huge overhead of loaded code. But to be honest: in most cases that type of project works, and so in that case that way is indeed the best practice, because it is the best compromise given the basic conditions of that particular project.
So, what best practice is depends on your project and on which attributes you want/need to optimize. Using only one main CSS file is a compromise that I personally(!) would say works better in both cases. But that does not(!) have to apply to your work.
B. Multiple IDs in a project
But there is one thing which needs to be said: using the same ID for multiple different elements in your project is not good practice at all, and it should not be the reason to use different CSS files. IDs should be unique not only within a page but within the whole project. So best practice in that case is to form the habit of naming your elements (no matter whether class or ID) uniquely across the project, because of selector specificity.
That is the best-practice compromise here: it avoids (1) unneeded work (= resources) spent debugging selector conflicts and (2) the need to use multiple CSS files (= better page speed).
You can achieve that as follows:
Avoid IDs if they are not needed. In most cases classes work just as well!
Use a unique naming structure for elements and nest the CSS for those elements. That makes it easier to organize your code structure. (Sass helps to write such a structure; see the sketch below.)
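For example, a hypothetical component partial where a unique block name plus nesting keeps all inner rules scoped:
// _component-cards.scss (hypothetical) - one unique block name per component
.card-job {
  border: 1px solid #ddd;

  // nested rules stay scoped to the unique block name,
  // so a generic .title elsewhere cannot conflict with this one
  .title {
    font-weight: bold;
  }

  &:hover {
    border-color: #999;
  }
}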
C. File structure of project
Best practice for a project's file structure likewise aims to speed up your work and to organize your coding structure, which leads to better code. The structure shown is one of the recommended ones and works that way for a lot of coders. So the structure is tested and works.
But if another structure works better for you, don't hesitate to adapt it to your needs.
What I personally don't like about this commonly used structure is the huge number of directories. Switching between the directories drives me crazy. If I just want to change something in a basic partial file I need to (1) search for and open the folder, then (2) search for the file in that directory ...
More natural for me is to organize the most-needed partial files in one single directory. The trick: adding the name of the subdirectory to the name of the file keeps the structure.
That way I am able to do most of my work in one directory. I don't have to find a directory first and then scroll and search inside it ... I only need to scroll in a single directory. That speeds me up a lot and keeps my head clear when switching between files!
Maybe you'd like to have a look at this:
SASS
|
| -- abstracts
= general files which serve the methods for the project
= and which are not often used
= BUT NOT the variables
= Note: most often these files are parts of (own) modules
|
| - functions
| - mixins
| ...
|
| -- myModules
= just the own modules to include (not to change / work in code)
= like your abstracts
|
| -- module-color-manager
|
| -- _files ...
|
| -- module-overlays
|
| -- _files ...
|
| -- ...
|
| -- vendors
= vendor modules
|
| -- bootstrap
| ...
| ...
|
| -- partials
= writing the css code
= working files (most used files = that is my one working folder)
= organized by adding (your) subdir names to file names
= better/easier overview
|
| -- _base-reset
| -- _base-grid
| -- _base-typography
| -- _base-forms
| -- ...
| -- _menu-main
| -- _menu-footer
| -- ...
| -- _structure-header
| -- _structure-footer
| -- _structure-sidebar
| -- _structure-sections
| --...
| -- _component-cards
| -- _component-carousel
| -- ...
| -- _breakpoint-mobile-landscape
| -- _breakpoint-tablet
| -- _breakpoint-dtp-small
| ...
|
|
_configColors (= my variables, colors separated for better overview)
_configProject (= all variables including settings for ALL (own & vendor) modules)
main.scss
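For completeness, a sketch of how the matching main.scss could import these flattened partials (the names come from the tree above, only a subset is shown, and the import order is just one reasonable choice):
// main.scss
@import 'configColors';
@import 'configProject';
@import 'abstracts/functions';
@import 'abstracts/mixins';
@import 'vendors/bootstrap/bootstrap';
@import 'partials/base-reset';
@import 'partials/base-grid';
@import 'partials/base-typography';
@import 'partials/menu-main';
@import 'partials/structure-header';
@import 'partials/component-cards';
@import 'partials/breakpoint-tablet';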
But once more: the best practice for you is the structure which organizes your work so that you (1) work fast and (2) make the page load fast. So do not hesitate to try out and adapt ideas from several suggestions to find the one which works best for you and your particular project.

EMV Offline Data Authentication - CDA Mode 3

The EMV Spec 4.3 Vol 2 defines the different modes for CDA ("Combined Data Authentication") with a chart:
+----+-------------------+-----------------------------------+
|Mode|Request CDA on ARQC|Request CDA on 2nd GEN AC (TC) |
| | |after approved online authorisation|
+----+-------------------+-----------------------------------+
| 1 | Yes | Yes |
| 2 | Yes | No |
| 3 | No | No |
| 4 | No | Yes |
+----+-------------------+-----------------------------------+
My question:
If a PinPad is in CDA Mode 3, does it actually perform the data authentication step at all?
The PinPad I am using is in CDA Mode 3, and it appears to be performing data authentication at some point during the ARPC validation/TC generation step, as evidenced by Byte 1, Bit 8 of the TVR being set to zero at that time. However, the chart above would lead me to believe that it should not.
Unfortunately, I don't have a UL or Collis tool to get inside the PinPad to see the PinPad/chip flow.
The short answer to your question is YES - the acceptance device will perform card authentication. When it comes to ODA, it might also be SDA (already obsolete) or DDA that happens, regardless of the CDA mode.
CDA mode 3 only means that ODA will not be performed if another CAM (Card Authentication Method) is available. It will still happen for offline-accepted transactions.
To clarify, the Card Authentication Methods:
Offline CAM = PKI-based Offline Data Authentication, of which CDA is an example
Online CAM = symmetric-cryptography-based verification of cryptograms during online communication.
In the early days of EMV implementation, acceptance devices had quite limited processing power - they were mostly based on 8-bit microcontrollers, which meant it took ages to perform RSA with a larger modulus. That's why CDA mode 3 was introduced - to avoid performing the resource-heavy offline CAM when an online CAM is available, i.e. in online transactions. That was perceived as an optimization at the time and was recommended by the schemes and EMVCo.
These days, CDA mode 1 is widely adopted and I don't remember any recent Type Approvals with CDA mode 3. If you have a device with it, you might be dealing with an old device with an expired approval.
The ARPC verification (Issuer Authentication step) you mention is not reflected in TVR B1b8 - that bit is only an indication that ODA was not performed, which (apart from the CDA mode 3 situation) can also happen when the card and terminal do not support any common authentication method (some online-only terminals do not need to perform ODA; some non-expiring cards do not support ODA either). Issuer Authentication might be explicit (when the AIP in the card indicates it and you received an ARPC in the response), but it might also happen implicitly (when the AIP doesn't indicate it but the card requests the ARPC in CDOL2), and then you might not see it indicated in the TVR.

How can I improve socket hang up when connecting many devices?

I am testing to connect many devices to FIWARE in the following environment.
Each component is deployed in a container on a physical server.
+-------------------------------------------------+
|Comet - Cygnus - Orion - IoTAgentJSON - Mosquitto| - device*N
+-------------------------------------------------+
Under the condition that each device transmits data at 1 msg/sec, the following error occurs at the IoTAgent when the number of devices reaches 350 (that is, 350 msg/sec):
{"log":"time=2018-12-16T14:57:24.810Z | lvl=ERROR | corr=ec11c37f-5194-4cb3-8d79-e04a2d1e745c | trans=ec11c37f-5194-4cb3-8d79-e04a2d1e745c | op=IoTAgentNGSI.NGSIService | srv=n/a | subsrv=n/a | msg=Error found executing update action in Context Broker: Error: socket hang up | comp=IoTAgent\n","stream":"stdout","time":"2018-12-16T14:57:24.81037597Z"}
{"log":"time=2018-12-16T14:57:24.810Z | lvl=ERROR | corr=ec11c37f-5194-4cb3-8d79-e04a2d1e745c | trans=ec11c37f-5194-4cb3-8d79-e04a2d1e745c | op=IoTAgentNGSI.Alarms | srv=n/a | subsrv=n/a | msg=Raising [ORION-ALARM]: {\"code\":\"ECONNRESET\"} | comp=IoTAgent\n","stream":"stdout","time":"2018-12-16T14:57:24.810440213Z"}
{"log":"time=2018-12-16T14:57:24.810Z | lvl=ERROR | corr=ec11c37f-5194-4cb3-8d79-e04a2d1e745c | trans=ec11c37f-5194-4cb3-8d79-e04a2d1e745c | op=IoTAgentJSON.MQTTBinding | srv=n/a | subsrv=n/a | msg=MEASURES-002: Couldn't send the updated values to the Context Broker due to an error: Error: socket hang up | comp=IoTAgent\n","stream":"stdout","time":"2018-12-16T14:57:24.810526916Z"}
The result of the requested ps ax | grep contextBroker command is as follows.
ps ax | grep contextBroker
19766 ? Ssl 29:02 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -dbhost mongodb-orion-demo -statCounters -statSemWait -statTiming
Question 1: Where is the cause? IoTAgent? Or Orion? Or MongoDB? Or kernel parameter?
The IoTAgent reports Error found executing update action in Context Broker: Error: socket hang up, but no error log is displayed in Orion.
Question 2: How can I improve the processing performance of FIWARE?
Do I need to scale out the IoTAgent?
Do I need to tune Orion's parameters?
Do I need to consider values such as reqPoolSize and maxConnections, with reference to the following URL?
https://fiware-orion.readthedocs.io/en/master/admin/perf_tuning/#http-server-tuning
Do I need to scale out Orion?
How to scale Orion GE?
Question 3: Is there a batch operation on IoT Agent?
The following page suggests doing a batch operation instead of opening a new connection for each entity, but is there such a function in the IoTAgent?
ECONNRESET when opening a large number of connection in small time period
It is difficult to provide a definitive answer, as performance depends on many factors, especially in complicated setups involving several interacting components. However, I'll try to provide some thoughts and insights based on the information you provide and my previous experience.
With regard to Orion, I'd recommend you have a look at the documentation on performance tuning. Following the indications on that page you can increase the performance of the component.
However, having said that, I don't think Orion is the cause of the problem in your case, based on the following:
Even without performance optimization, Orion typically reaches a throughput on the order of ~1,000 tps. It should cope with updates at 350 tps without problems.
Orion is not showing error logs. The error logs you have are produced by the IOTAgent component, as far as I understand.
Thus, focusing on the IOTA, maybe it would be better to use IOTA-UL instead of IOTA-JSON. The UL encoding is more efficient than the JSON encoding, so you can gain in efficiency. In addition, IOTA-UL allows you to send multimeasures (using # as a separator), which I don't know whether it fits your case but can be seen as a limited form of batch update (see the UL documentation for more detail).
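For illustration, a hedged sketch of what a UL multimeasure publish over MQTT could look like (the broker host, API key, device id and attribute names are placeholders; check the IOTA-UL documentation for the exact topic and payload rules):
# Two measure groups in a single MQTT message, separated by '#'
mosquitto_pub -h mosquitto-host -t '/myapikey/device001/attrs' \
  -m 't|25.1|h|40.2#t|25.3|h|40.0'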
If that doesn't work, another possibility is to send data directly to Orion using its NGSIv2 API. That would have several advantages:
Simplified setup (two fewer pieces: the MQTT broker and the IOTAgent)
Under the same resource conditions, Orion's native performance is typically higher than the IOTAgent's (as I mentioned before, ~1,000 tps or even higher after applying performance optimizations)
The NGSIv2 API provides a batch update operation (look for POST /v2/op/update in the NGSIv2 specification document cited above); see the example below
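For example, a minimal batch update against Orion's NGSIv2 API (the host, service headers, entity ids, types and attributes are placeholders for your own setup):
curl -X POST 'http://orion-host:1026/v2/op/update' \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: myservice' \
  -H 'Fiware-ServicePath: /' \
  -d '{
    "actionType": "append",
    "entities": [
      { "id": "Device001", "type": "Device",
        "temperature": { "value": 25.1, "type": "Number" } },
      { "id": "Device002", "type": "Device",
        "temperature": { "value": 24.8, "type": "Number" } }
    ]
  }'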

Openshift Online issue: pod with persistent volume failed scheduling

I have a small webapp which used to run fine on OpenShift Online for 9 months, and which consists of a Python service and a PostgreSQL database (with, of course, a persistent volume).
All of a sudden, last Tuesday, the PostgreSQL pod stopped working, so I tried to redeploy the service. For almost 2 days now the pod scheduling has constantly failed. I have the following entry in the events log:
Failed Scheduling 0/110 nodes are available: 1 node(s) had disk pressure, 5 node(s) had taints that the pod didn't tolerate, 6 node(s) didn't match node selector, 98 node(s) exceed max volume count.
37 times in the last 13 minutes
So, it looks like a "disk full" issue at RH's datacenters, which should be easy to fix but I don't see any notification of it on the status page (https://status.starter.openshift.com/)
My problem looks a lot like the one described for start-us-west-1:
Investigating - Currently Openshift SRE team trying to resolve this incident. There are high chances that you will face difficulties having pods with attached volumes scheduled.
We're sorry for the inconvenience.
Yet I'm on starter-ca-central-1, which should not be affected. Since it's been such a long time, I'm wondering whether anyone at RH is aware of the issue, but I cannot find a way for users on a starter plan to contact them.
Has anybody faced the same issue on ca-central-1?
As mentioned by Graham in the comment, https://help.openshift.com/forms/community-contact.html is the way to go
A few hours (12, actually) after posting my issue via this link, I got feedback from someone at RH who said that my request had been taken into account.
This morning my app is finally up, and the trouble notice is on the status page:
Investigating - Currently Openshift SRE team trying to resolve this incident. There are high chances that you will face difficulties having pods with attached volumes scheduled.
We're sorry for the inconvenience.
Not sure what would have happened if I hadn't contacted them...
After at least 4 months of working normally, my app running on Starter US West 1 suddenly started to get the following error message during deployment:
0/106 nodes are available: 1 node(s) had disk pressure, 29 node(s)
exceed max volume count, 3 node(s) were unschedulable, 4 node(s) had
taints that the pod didn't tolerate, 6 node(s) didn't match node
selector, 63 Insufficient cpu.
Nothing had changed in the settings before the failure started. I've realized that the problem only occurs on deployments with a persistent volume, like PostgreSQL Persistent in my case.
I have just submitted this issue via the above-mentioned URL. When I get a response or a solution, I'll post it here.

How to quickly develop a new website (for a web dev amateur)

I have to make a recruiting website for a friend of a friend.
I have a programming background but I never did that much webdev, I know HTML, CSS, and Javascript but I don't have much experience with properly structuring websites using divs and the like.
The website requirements are
People can upload CVs, recruiters can download them
Jobseekers can search for jobs by category and location (recruiters can post jobs)
facebook integration - gonna have to get clarification on this but I think that will mean simply that you can login using your facebook account
recruiters have to pay to post jobs
Needs to be simple to use and look modern enough.
I was wondering what would be the easiest way to do this.
So I have two and a half questions:
Should I use templates? Should I use a CMS? Or should I just edit everything together from notepad++ from the ground up?
Thanks very much for any input.
People can upload CVs, recruiters can download them
This would be accomplished by providing an upload page to those who have the privilege, then providing a page which finds files in your CV upload directory and provides links to them. You'd probably want your upload form to submit more than just the CV file - you'd also enter a row into a database that records where the file is, what it is, when it was uploaded, etc. You'd then query those rows to determine how to retrieve the file, and in the process you'd make searching and ordering the files a whole lot easier.
Jobseekers can search for jobs by category and location (recruiters can post jobs)
For this, you could set up a basic database that would be queried using some easily obtained information. Easy as in... Your users will likely expect to give it up, so you won't lose traffic upon asking for it.
Your model could be as simple as something like this:
Region Data / Geolocation
- IDs would be based on a geolocation API for consistency.
- CITY would correspond to that id.
- REGION_ABBR would be the state/province abbreviation, usually
obtained from the geolocation API.
- REGION_FULL - This, if not provided by the API, is handy to have
ready for output on the frontend.
_______________________________________________________
|__id____city____country____region_abbr____region_full__|
| 4 | Butte | USA | ID | Idaho |
| 2 | Fresno| USA | CA | California |
| 9 | Atoka | USA | TX | Texas |
Job data
Based on the ID column from the region data, we can determine which
jobs are in a city by giving each job a city's id. The rest is fairly
self-explanatory - add the columns you will need to filter by: expiry times,
category (web, sales, carpentry, etc.), whatever you and the friend of
the friend can determine will be a useful metric for narrowing results.
_____________________________________________________________
|__id____city_id____title______type_______expires______etc____|
| 1 | 7 | xyz | freelance | timestamp | whatever |
| 2 | 7 | yxz | contract | timestamp | you |
| 3 | 38 | zyx | fulltime | timestamp | require |
facebook integration - gonna have to get clarification on this but I think that will mean simply that you can login using your facebook account
If this becomes necessary, the facebook documentation is pretty solid regarding this.
recruiters have to pay to post jobs
That is a tough call - I don't have experience doing service sales online so I can't really offer any advice.
Technology for the job
I'd personally create this using a php framework for the sake of fast, easy, somewhat scalable development with little effort that can be passed on to other developers. Symfony 1.4 (or 2 if you're willing to face a slight lack of documentation) is my choice, but there are tons of great choices. If you're a python fan, Django is an excellent choice.
I'd love to try building something like this with Rails. Ruby is a new favorite of mine. It really depends on what you know best though, and I have a feeling PHP is the easiest for newcomers. If you're very unfamiliar with scripting/programming... It might not be a great idea to saddle up with a framework. It could be more confusing than helpful. Really, just do what feels comfortable.
As for making this thing look pretty, try twitter's bootstrap. It comes with really easy to use styles for everything from layout to forms and buttons. It's pretty solid. Even better, it can be customized easily and has a LESS version already built (And built well, at that). LESS is a great asset for a large project!
Also possibly relevant; twitter's bootstrap has a few javascript components you can kind of pop into project (Also easily customizable) such as modals, tabs, tooltips, what have you. Well written stuff. I personally like it for prototyping rather than production ready stuff, but it would be fine for production if you made it suit your client's design plan.
Otherwise... It's tough to say. The project you've outlined is pretty clear, but when it comes down to it, your client would be able to clarify it a lot further and give you a good idea of the direction to take.
A good CMS will offer you all the requirements that you need, and be a good basis for you to learn from.
They have great templating structures and utilise "change in one place to update in loads of others", which will save you lots of legwork later on.
Personally I use MODX - I think it's great. I've used loads of others, and its templating system and the ability to customise whatever you want fit me and my clients perfectly!
If you want to chat about it in more detail, feel free to drop me an email at graeme#glcreations.co.uk - it's what I do for a living :)
If it's very simple I would recommend PHP/HTML.
If it's content-driven, go for a CMS. Otherwise you will spend a lot of time customising it.
If you have a DB-driven website, you could use PHP/MySQL, which is quite easy to implement, especially since you come from a programming background.
With PHP you can use simple templates which you can build yourself.
Additions:
I've just seen a bit more info in your question, and I certainly would not use a CMS. A database-driven website is what I would go for. This will give you maximum flexibility.
There are a lot of good tutorials for creating websites with React. If you want to add a simple backend and get it hosted for free (with basic features), add Firebase to the project. There are a bunch of YouTube tutorials and plenty of documentation online.