How can I send HTML emails using PHP through a different server to avoid shared-hosting delays - smtp

I am currently using SiteGround shared hosting for my website. Through SiteGround I send an HTML email using PHP to my customers when they register, to confirm their email address.
I am currently having issues with receiving email: messages are taking over 4 hours to appear in my inbox.
When I contacted SiteGround, they told me that SpamExperts, who run their MX records, were having some issues. I am still having these issues after 24 hours, despite SiteGround telling me it has been fixed.
I am unsure how to fix this; it may just need time. Either way, I was wondering if there is any way of having an alternative server to send my emails through, in case it happens again?
I have limited knowledge of how email sending works, but I know that it helps if the From domain is the same as the one I have set up through the shared hosting. I thought about just getting shared hosting with another company, but then my domain wouldn't match up.
I'd appreciate any insight into this.
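One common way to do this is to keep the site on SiteGround but relay outgoing mail through a separate SMTP service (a transactional provider such as SendGrid or Mailgun, or any other SMTP server you control) using a library like PHPMailer. The From address can stay on your own domain as long as you authorize the relay in your domain's SPF record. A minimal sketch, assuming hypothetical relay host and credentials:

    <?php
    // Sketch: send the confirmation email through an external SMTP relay
    // with PHPMailer (composer require phpmailer/phpmailer). Host, port
    // and credentials below are placeholders for your relay account.
    use PHPMailer\PHPMailer\PHPMailer;
    use PHPMailer\PHPMailer\Exception;

    require 'vendor/autoload.php';

    $mail = new PHPMailer(true);  // true = throw exceptions on failure
    try {
        $mail->isSMTP();
        $mail->Host       = 'smtp.relay-provider.example';  // hypothetical relay host
        $mail->SMTPAuth   = true;
        $mail->Username   = 'relay-username';
        $mail->Password   = 'relay-password';
        $mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;
        $mail->Port       = 587;

        // The From domain can stay your own; add the relay to your SPF
        // record so receiving servers don't flag the mail as spam.
        $mail->setFrom('noreply@yourdomain.example', 'Your Site');
        $mail->addAddress('customer@example.com');

        $mail->isHTML(true);
        $mail->Subject = 'Please confirm your email address';
        $mail->Body    = '<p>Click the link below to confirm your address.</p>';
        $mail->send();
    } catch (Exception $e) {
        error_log('Mailer error: ' . $mail->ErrorInfo);
    }

Because the relay is independent of SiteGround's mail infrastructure, an outage like the SpamExperts one would no longer delay your registration emails.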

Related

How to get the file contents of an old site on Drupal?

We had a site running on Drupal and migrated it to Squarespace. I have to retrieve some pages from the Drupal site, but I can no longer view the site. Do you know any way to get the old content of the Drupal website? Please note that we still have access to the Drupal box. Any suggestions would be a big help.
The easiest way would be to make the old Drupal site available again in the browser on a different domain like old.example.com, log into the admin panel, and start copying and pasting content.
If you know your way around your computer and know the IP address the old server is running on, you could temporarily change the hosts file on your machine to send requests for your site to the old server and access the site that way.
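On most systems that is one line per hostname; the IP address and domain below are placeholders for the old server's real values:

    # /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts
    # on Windows. Point the old site's names at the old server's IP.
    203.0.113.10    example.com
    203.0.113.10    www.example.com

Remember to remove the entries again afterwards, or your machine will keep sending requests for the site to the old server.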
Migrating content by automating the process is also an option, but it is not only time-consuming, it also requires in-depth knowledge of both platforms, so it is usually a very expensive solution if you are not able to do it yourself.
But reading your question, I think the first option is the easiest: get hold of the technical person or party responsible for the server the site is running on and have them make the site accessible on a different domain.

Preparing to switch to Google Compute Engine for web hosting

I'm currently in the process of switching to Google Compute Engine for my web hosting because my current provider's performance has been deteriorating over time, and Compute Engine gives me more flexibility to upgrade as I need to.
I've got my website set up and working on the VM, but the next steps need to go smoothly to ensure my customers don't experience any downtime.
I have a few things I need to work out:
- Does Google have a way of managing email addresses at your own domain, so I can send and receive from Gmail or another email client on my domain? Or do I have to set up an email server on my VM? If so, is there any way to set up cPanel-like management software on it?
- To my understanding, I should just have to call my current provider to ask for my SSL certificates, have them transfer my domain over to Google, and then point it at my VM. Or is there something I'm missing here? (See the DNS sketch after the edit below.)
Are there any simple ways to ensure my server stays secure when I'm managing it myself, other than just updating packages manually? Like a website I can use to track known security problems with the packages I have installed?
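One common low-effort option, if the VM runs Debian or Ubuntu, is to enable automatic security updates via unattended-upgrades. A minimal sketch of the stock configuration (file path and keys are the standard Debian ones; adjust to taste):

    // /etc/apt/apt.conf.d/20auto-upgrades
    // Refresh package lists and apply security upgrades automatically
    // once a day; which origins qualify is controlled in
    // /etc/apt/apt.conf.d/50unattended-upgrades.
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
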
Edit:
Please read Dan Cornilescu's comment on this question about setting up your own custom domain email. He said it can possibly be managed using Google Apps.
On the topic of SSL/domains, I called my current provider and they said they would help me switch over if that's what I decided. They also upgraded my hosting plan; things seem better now and are comparable to the performance I was getting on my Google VM, so I'll be trying that for now.
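For reference, if you do end up on Compute Engine with Google Apps handling mail, the DNS side of both points above boils down to a handful of records. A hedged sketch (the IP address is a placeholder, and the MX hosts should be verified against Google's current documentation):

    ; BIND-style zone snippet - example.com and 203.0.113.20 are placeholders
    example.com.      IN  A      203.0.113.20              ; static IP of the Compute Engine VM
    www.example.com.  IN  CNAME  example.com.
    example.com.      IN  MX 1   ASPMX.L.GOOGLE.COM.       ; mail handled by Google Apps
    example.com.      IN  MX 5   ALT1.ASPMX.L.GOOGLE.COM.
    example.com.      IN  MX 5   ALT2.ASPMX.L.GOOGLE.COM.

With mail delegated to Google via MX records, nothing mail-related needs to run on the VM itself.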

Login via a local HTML form - security?

Recently I have had problems with my email account (gmx.net): about 30 failed login attempts a day. But that is not the topic of this question (I have already changed my password).
It got me thinking: is this an automated attack? And if so, how is it done? I took a look at the HTML code of the page and found that it is pretty easy to just copy the source code of the form element and do a login attempt through a local HTML file (copy and paste, new HTML file, open in browser, enter your credentials, submit). That means it is an easy task to automate such things (write a little script that does a POST with various values - a brute-force attack). I was about to write an email to the mail host when I found out that the exact same process can be done on facebook.com...
I had the impression that since we have all these new fancy web frameworks like Rails, Django and so on, we have automatic protection against such attacks (for example the protect_from_forgery mechanism that Rails includes: http://ruby.about.com/od/security/a/forgeryprotect.htm).
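For context, that protection is usually a per-session anti-forgery token embedded in the form. A minimal PHP sketch of the idea (field and variable names are illustrative); note that it blocks blind POSTs from a copied form, but an attacker who first fetches the live form can still read the token, so it raises the bar rather than eliminating automation:

    <?php
    // Sketch of a per-session anti-forgery (CSRF) token in plain PHP.
    session_start();

    // When rendering the login form: embed a random token tied to the session.
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    }
    $tokenField = '<input type="hidden" name="csrf_token" value="'
                . htmlspecialchars($_SESSION['csrf_token']) . '">';

    // When handling the POST: reject requests whose token doesn't match,
    // which is what happens to a form copied into a local HTML file.
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        if (!hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
            http_response_code(403);
            exit('Invalid request origin.');
        }
        // ...proceed with the normal credential check...
    }
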
My question here is:
Is there any sane reason to allow a login attempt from another server?
Don't give me "API" as the reason; most APIs for web applications require a manual login process before authorization.
I know there are many more ways to brute-force any website login (use a framework that controls a browser, etc.) and there are many ways to protect against it (IP banning, etc.). But shouldn't disabling remote logins be one of the first security measures you would take?

Best way to switch between a secure and an insecure connection without bugging the user

The problem I am trying to tackle is simple. I have two pages: the first is a registration page where I take in a few fields from the user; once they submit, it takes them to another page that processes the data, stores it in a database and, if successful, shows a confirmation message.
Here is my issue: the data from the user is sensitive, so I'm using an HTTPS connection to prevent eavesdropping. After the data is sent to the database, I'd like the confirmation page to do some nifty things like Google Maps navigation (this is for a time-reservation application). The problem is that by using the Google Maps API, I'd be loading resources from an insecure source, which prompts the user with a nasty warning message. I've browsed around; Google has an alternative for enterprise clients, but it costs $10,000 a year.
What I am hoping for is a workaround: use a secure connection to take in the data, and after it is processed, bring the user to a page that isn't secure and lets me use the Google Maps API. If any of you have a Netflix account, you can see exactly what I would like to do: the sign-in page is secure, and it then takes you to your account/queue on an insecure page. Any suggestions? Thanks!
I generally advise never to skip security features, because they are there for a reason, but I found this for you to check out.
Perhaps it is time to consider retiring support for IE6?
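If you do go the Netflix-style route, the handoff is just a redirect once the sensitive work is done. A minimal PHP sketch (the URL and confirmation id are placeholders; the key point is that nothing sensitive goes into the plain-HTTP URL):

    <?php
    // process-registration.php - the form posts here over HTTPS.
    // Once the sensitive data has been validated and stored, redirect
    // to a plain-HTTP confirmation page, where the Google Maps script
    // can load without triggering mixed-content warnings.

    // ... validate $_POST and store it in the database here ...

    // Redirect with only an opaque id - no sensitive data in the URL.
    header('Location: http://www.example.com/confirmation.php?booking=12345');
    exit;
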

Enterprise Service Bus - is this the right solution?

C# 2008
I have developed an application that needs to connect to a web server in order to work. If the web server goes offline, the app has to be notified so that the user can know what happened.
This application will be downloaded from the internet from our client's web site, so hundreds or thousands of users could have it.
I was thinking about pinging the web server maybe every 5 seconds. However, hundreds or thousands of apps doing that would overload the web server.
Someone told me that an ESB would be right for this problem. The way I am thinking of using it (and I am not totally sure) is to have every app subscribe to the ESB; if the web server goes offline, the ESB sends a message to all the apps.
However, I understand that an ESB is very big and complex, and maybe this is overkill for my problem.
Am I understanding this correctly?
If an ESB is not the correct choice, is there another design pattern I could use?
Many thanks
It sounds inappropriately out of scope to spec an ESB for this simple purpose. Why not just have the client machines figure it out as they access the website? Instead of pinging the web server over and over, the apps will access the web server in the course of their normal activities anyway; if they get an error response, they can branch down the "web server is down" code path.
An ESB sounds like the wrong solution.
Two possibilities come to mind:
(1) If the user doesn't need to know they're offline in real time, defer detection to the usual error handling when you try to access the server (see the sketch after this list).
(2) If you must know in real time, use a small proxy at each client site so that only the proxies need to ping your server, not every desktop.
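The question is C#, but option (1) is the same pattern in any language: wrap the requests the app already makes and branch when one fails. A minimal sketch in PHP, with a placeholder URL:

    <?php
    // Sketch of option (1): no polling, no ESB - detect the outage only
    // when a request the app needed to make anyway fails.
    function fetchFromServer(string $url): ?string
    {
        $context = stream_context_create(['http' => ['timeout' => 5]]);
        $body = @file_get_contents($url, false, $context);
        return $body === false ? null : $body;
    }

    $response = fetchFromServer('https://app.example.com/api/data'); // placeholder URL
    if ($response === null) {
        // "Web server is down" code path: inform the user, queue the
        // request, retry with backoff, etc.
        echo "The server is currently unreachable. Please try again later.\n";
    } else {
        echo "Got data: " . $response . "\n";
    }
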