My database is used for ecommerce, holding data for creating and updating product listings on sites like Walmart and Amazon.
I've been working to make the daily work of maintaining listings easier and have added hyperlink buttons that go directly to the listing page on Amazon or Walmart. Works like a charm.
Recently I created links that go directly to the edit pages for our products, making changes quicker for the staff. It works like a charm for Walmart, but not so much for Amazon. I'm fairly certain login credentials are the issue, but I've no idea how to go about making this work.
The hyperlink:
https://vendorcentral.amazon.com/hz/vendor/members/products/images/manage?products=B073GPZQDS-CYCS4
It works perfectly when copied and pasted into Chrome. When FollowHyperlink is used on a button, it does not.
I realize you fine folks can't test the link, as it requires login info I can't give out. But can you tell me what might fire differently when FollowHyperlink is used as opposed to a copy/paste into the browser? Why would one work and the other not?
The code is simple:
Private Sub Command20_Click()
FollowHyperlink Me.StoreEditLink
End Sub
As an example, this Walmart link works fine on a button using FollowHyperlink:
https://supplier.walmart.com/editItem/0698238533928?idType=GTIN&readonly=false&isSetup=false&product_id=4R00WBYFZBVN
Thanks!
Did you try opening the page using the Shell function?
https://msdn.microsoft.com/en-us/vba/language-reference-vba/articles/shell-function
shell("C:\Users\**USERNAME**\AppData\Local\Google\Chrome\Application\Chrome.exe -url https://vendorcentral.amazon.com/hz/vendor/members/products/images/manage?products=B073GPZQDS-CYCS4")
My Telerik reports export just fine in Internet Explorer, but in Chrome I get an error page that says "This webpage is not available" and, below that, "ERR_CONNECTION_RESET".
When I hit F12 and look at the network activity, this is the request that is causing the problem:
https://ourwebsite.com/api/reportresolver/clients/112517-7243/instances/112518-d54c/documents/112531-33fe?response-content-disposition=attachment
The odd thing is that I can take the above link, copy/paste it into Internet Explorer, and it will open the PDF I just tried to export from Chrome.
Has anyone else run into this? I have no idea how to even proceed in troubleshooting this further :/
EDIT: It seems to have something to do with connecting to SQL Server. I found I can export reports that don't use a SqlDataSource just fine, but when I add one I get this error. I still have no idea how to debug beyond this.
EDIT 2: Using Fiddler I was able to find out that I'm getting a 504 error, but it makes no sense to me how that could possibly be happening.
When I export to PDF programmatically using the sample code provided by Telerik (http://www.telerik.com/support/kb/reporting/styling-and-formatting-reports/details/exporting-a-report-to-pdf-programmatically), I get similar results. However, if I remove Response.End() then it works great.
The odd thing is that it works properly from the toolbar in test projects, the way it's supposed to; it's only when it's integrated into the application I need to use it in that I have to export programmatically (without Response.End()) in order to get it to work in browsers other than IE.
I still wish I could figure out how to get it to work from the toolbar, but at this point I don't expect any answers, so this will have to do :/
EDIT: I later found that the amount of data being passed had something to do with it. If very little data was passed then it worked okay, but as soon as the amount of data increased a little, the above solution of removing Response.End() was required.
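For reference, here is roughly what the working hand-rolled export looks like. This is a sketch based on Telerik's sample with the Response.End() call removed; "MyReport" is a placeholder for your actual report class:

var reportProcessor = new Telerik.Reporting.Processing.ReportProcessor();
var reportSource = new Telerik.Reporting.InstanceReportSource();
reportSource.ReportDocument = new MyReport(); // placeholder report class
var result = reportProcessor.RenderReport("PDF", reportSource, null);

Response.Clear();
Response.ContentType = result.MimeType;
Response.AddHeader("Content-Disposition", "attachment; filename=\"report.pdf\"");
Response.BinaryWrite(result.DocumentBytes);
Response.Flush(); // note: no Response.End() here, per the workaround above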
I would like to use the FileMaker web viewer to build and style a database navigation menu. I have found a handful of samples and played with the code, but the problem I am having is that it launches in another window. (Note that I have several versions of FileMaker on my desktop, and it also tries to launch the popup in FileMaker 13 when I am building in FileMaker 12.)
The goal is to call the script inside the current database and current application so that it functions as a system navigation menu. In straight HTML in a site environment I would add target="_blank" or target="_parent" to the href, but I can't seem to get the syntax right to try it in the web viewer, and I'm not sure this would be the solution anyway. Can any angel from tech heaven assist or offer any advice? Here is the sample code I currently have, which calls a FileMaker script in a local system for a Google Maps interface. I'll be using the script differently, but the structure will be the same.
"data:text/html," &"
<html>
<body>
<a href='"&"FMP://" &
Case(
IsEmpty(Get(HostIPAddress)); Get(SystemIPAddress);
not IsEmpty(Get(HostIPAddress)); Get(HostIPAddress);
)
&"/"& Get ( FileName )& "?script=Open-Detail-Map¶m=" & Data::ID_Data&"'>View Map
Detail</a>
</body>
</html>"
This works for me, and it opens in the same window. I'd recommend either using FileMaker 13 for development or uninstalling it. It launches in 13 because the URL protocol handler (fmp://) is the same for both versions, so your OS uses the newest installed version of FileMaker to handle the URL call.
Note that triggering scripts via a URL will not work in standalone files in FileMaker Pro, only in hosted files or FileMaker Go.
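For a hosted file, the URL takes this general form (the host, file, script, and parameter names here are all placeholders):

fmp://hostNameOrIP/FileName?script=ScriptName&param=value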
It is possible to call a script in another file directly in FileMaker, rather than trying to do it from a web viewer. Can you clarify why you're trying to create your navigation menu in a web viewer?
If a web viewer is not compulsory, I would recommend:
creating an External Data Source that points at the other file
adding FileMaker buttons for your navigation
right-clicking the button that should trigger the script, choosing "Button Setup", then "Perform a Script", and specifying the script you want to run from the other file
Honestly, this makes no real sense to do. I get what you are trying, and it seems interesting, but build your navigation in FileMaker and display your banner ads in a web viewer. The other option, which is always available, is to build the solution out as a PHP site using the FileMaker PHP API.
I realize this is an answer to a rather old question, but I think it warrants pointing out what the solution is... at least in modern versions of FileMaker. I don't recall exactly when this was fixed (13.0v5 or v6?). The feature was present in earlier versions but only worked for hosted files, not locally opened files; now it works for both.
You need to use the "currently open file" reference in the FMP URL: "$". So your URL string should look like this:
"fmp://$/fileName?script=AScriptName&param=..."
In your code:
<a href='" & "fmp://" &
If ( IsEmpty ( Get ( HostIPAddress ) ) ; "$" ; Get ( HostIPAddress ) )
& "/" & Get ( FileName ) & "?script=Open-Detail-Map&param=" & Data::ID_Data & "'>View Map Detail</a>
I have an enterprise Box account, and I was tasked with creating a crawler that scans an account on Box and saves all the meta information (including a direct link for each file) in a local database. This works fine.
In PHP I have also built a function that downloads the documents (via the direct link obtained from the API) and extracts readable text from them. This was working perfectly a week ago; yesterday, however, it stopped working completely. I'm using the file_get_contents() function to download the file, and currently it only retrieves the document's file size rather than the document itself, which I find strange. I have tried cURL and I get the same result; it seems Box is responding to my direct file requests with the file size instead of the actual file.
The files are ALL open access, so anyone with a direct link can download them without logging in. I have also tried running this code on another server at a different hosting company, and I get the exact same result. I have tested my code by accessing files from other locations (not Box) and it works fine.
It's important to note that this was working fine just a week ago, but now it doesn't work at all. Nothing changed on my end in between (that I know of). Anyone have an idea?
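In case it helps, here is a minimal sketch of the kind of download request described above (the direct link is a made-up placeholder). One thing worth ruling out is whether the host has started answering with a redirect, since file_get_contents() and a bare cURL call can behave differently around redirects depending on configuration:

<?php
// Sketch only: the link below is a placeholder for one saved by the crawler.
$directLink = 'https://app.box.com/shared/static/example-file-id.pdf';

$ch = curl_init($directLink);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of echoing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any 30x redirect to the real file
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);
$data = curl_exec($ch);
curl_close($ch);

file_put_contents('downloaded.pdf', $data);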
I developed a new WordPress website on a testing domain on our server.
I had the site set up and ready to go live... but then I had to move the WordPress website onto another domain name that we also host on our server.
So what I did was copy all the root folder content from the testing domain and paste it into the new domain's root folder.
I then logged into WordPress and changed all the necessary settings, like the WordPress Address URL and Site Address URL, as well as the absolute image URLs on each and every page, to make sure I had the right URL for everything.
When I click on Media, I can see all the images as normal.
Great... then I go and check the live website and see that a lot of images are missing! They are all in the Media panel, but they do not show up on the website!
I then double-checked that all the images are pathed correctly... and they all are.
Now why do SOME images show up and others don't?
I've even tried adding a new photo and using it in place of one that isn't showing up, and the new photo doesn't show up either.
Where does my problem lie?
For example, one slideshow on my website that isn't showing images gives me an "image not found" error for an image:
Image not found: http://www.domain.com/wp-content/themes/natural/lib/timthumb.php?src=http://www.domain.com/wp-content/uploads/2012/08/breakfast-gallery-011.jpg&w=610&h=0&zc=1
OK, I am going to answer my own question with some advice.
It's clear that some of my images were 'hard coded', as Devin mentioned, since I could not find a logical reason for some not showing (even when I looked at the tabular data in MySQL). Because I'm not a database engineer/developer, I wasn't prepared to dive too deep into that with the possibility of causing further issues... so I decided to take down the entire WordPress site, create a new database, re-install WordPress, and import an exported XML file that I had (luckily) created and saved before 'migrating sites'.
Advice:
1) Whether you migrate a WordPress site or not, always back up your website regularly by creating an export of your WordPress structure. It may save you a lot of work in the future.
2) If you're an amateur or beginner at development and MySQL like myself, I'd suggest you create your WordPress site on the actual domain name you want it on. This will save you from the 'migration' headaches I've just experienced... and a lot of time. Learn from my mistakes. Although there is probably a solution to my question above, it's outside my expertise/knowledge and could be outside yours too... so make it easy for yourself :)
The issue is not only that the image URLs are hardcoded. The last portion of the URL is hardcoded, but you will most likely have "/wp-content/" embedded in the URL, which indicates that the image URL string is built dynamically. I looked in my wp_postmeta table and found all of the partial image URLs (like this: 2013/03/expanse2.jpg). So where are the beginning of the URL and the domain name? In my case, the domain name was the part actually missing from all of the image URLs.

I dug into the database a little deeper using phpMyAdmin (though I recommend Webmin if you can get it up and running) and ran into the "home" field in the "wp_options" table. I asked Google what a proper home URL would be for WordPress, which brought me to this page (http://codex.wordpress.org/Function_Reference/home_url), which notes that home_url() is located in wp-includes/link-template.php. I went to that file and found that it controls how URLs are built, but not uploaded image URLs specifically.

In the end I went into a page that had an image, looked at the advanced settings, and found the image URL was just missing the domain. I used the wonderful Search and Replace script to repair it. Done.
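For anyone who prefers to do the repair by hand instead of with the Search and Replace script, the updates for plain (non-serialized) columns look roughly like this. The domains are placeholders, and note that serialized values in wp_options can break under a raw REPLACE, which is exactly why the dedicated script is the safer route:

-- Sketch only: substitute your real old and new domains.
UPDATE wp_options
   SET option_value = REPLACE(option_value, 'http://old-domain.com', 'http://new-domain.com')
 WHERE option_name IN ('home', 'siteurl');

-- Fix image and link references embedded in post content.
UPDATE wp_posts
   SET post_content = REPLACE(post_content, 'http://old-domain.com', 'http://new-domain.com');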
A friend just asked me if I could help him automate something that is taking up a lot of his time. He grades essays online: basically, there is a website he logs into and sits on, refreshing the page until some data appears (new essays to be graded). Instead of doing this, he would simply like to be notified when there are new items that require his attention.
There is no API to work with, and a login is required to access the website. What, in your opinion, is the right tool for this job?
I'll post my idea as an answer, but I'm curious what everyone else suggests.
Edit: He is running Windows (Windows Vista). The browser doesn't really matter as long as the site runs in it.
My idea is to write a script for Firefox's Greasemonkey plug-in.
Basically, he would log into the page and turn on the script, which would constantly refresh the page and scrub it for new items. If any are found, it pops up a message and plays a noise (or something like that).
I've never worked with Greasemonkey before, but something like this seems like it should be pretty simple.
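Something along these lines should do it. This is just a sketch: the @include URL and the .essay-row selector are placeholders that would need to match the real site's address and markup:

// ==UserScript==
// @name        Essay queue watcher
// @include     https://grading-site.example.com/queue*
// ==/UserScript==

// Count the items we care about; '.essay-row' is a placeholder for
// whatever element marks a new essay on the real page.
var items = document.querySelectorAll('.essay-row');
if (items.length > 0) {
    alert(items.length + ' new essay(s) waiting to be graded!');
}

// Reload once a minute so the script runs again against a fresh page.
setTimeout(function () { window.location.reload(); }, 60 * 1000);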
You can write a small Ruby script using Watir, which drives an actual browser and lets you scrape data, or scrubyt.
Here is what scrubyt looks like. I'd recommend you do something like generate an email or IM message, but you can do whatever you like. Schedule it to run in cron somewhere; the beauty of this approach is that it doesn't matter whether his computer is on, his browser is open, etc.
# simple eBay example
require 'rubygems'
require 'scrubyt'

ebay_data = Scrubyt::Extractor.define do
  fetch 'http://www.ebay.com/'
  fill_textfield 'satitle', 'ipod'
  submit

  record "//table[@class='nol']" do
    name "//td[@class='details']/div/a"
  end
end

puts ebay_data.to_xml