Monday, June 6, 2011
[Updated 4-15-2013]
I've come up with a solution for ESRI JSAPI 3.4 and AMD.
As your JavaScript projects get more and more complex, loading all of those Dojo classes can really slow down your load time. All of those dojo.require calls add up in a hurry. The Dojo Build System can be a huge help in speeding up the load time and general performance of your apps. For example, a build that I ran on a recent project took the number of JavaScript requests on page load from 53 down to 5, and the CSS requests from 16 down to 4. This ended up cutting the load time in half! Other nice features include stripping out all of the console calls, minifying your JavaScript, and interning all of your widget templates.
The rest of this post assumes some familiarity with the Dojo Build System. If you haven't looked at it before, the documentation is worth reading. There's even a fancy new tutorial.
After reading all of the Dojo documentation, it's easy to get excited about the possibilities. However, you will quickly find that mixing the ESRI API into the equation makes a big mess of everything. For example, the Dojo Build System assumes that you are hosting everything yourself, but because ESRI has not released a source/unbuilt version of their API that we can download, we are stuck loading Dojo from their servers. The other problem is that when you load the ESRI API you are really loading their layer file, which can have a lot of overlap with your own layer file, adding a lot of duplicate code. Not to mention the problems the build system has when it sees dojo.require("esri...") and doesn't know where to get it. Over the last few months I've developed a solution that overcomes these problems and ends up with a lean and mean (for the most part) product.
Wednesday, May 18, 2011
Python Script To Update Current Stream Gauge Data In New AGRC Flood Map
We use a Python script to scrape data from the USGS and NWS websites to update our data in the SGID. It runs every two hours through Windows Scheduled Tasks. The script's workflow is as follows:
First it loops through all of the features in our stream gauges feature class (SGID93.WATER.StreamGaugesNHD).
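The post doesn't show the loop itself, but based on the old-style arcpy cursor calls in the later snippets (row.getValue / row.setValue), it would look roughly like this; the connection path below is a placeholder, not the actual one:
# rough sketch of the outer loop (assumed, not part of the original post);
# the .sde connection path is a placeholder
import arcpy

rows = arcpy.UpdateCursor(r'Database Connections\SGID.sde\SGID93.WATER.StreamGaugesNHD')
for row in rows:
    # USGS site id (SourceFeature_ID) drives the web service requests below
    id = row.getValue('SourceFeature_ID')

    # ...fetch and parse the USGS and NWS data shown in the snippets below,
    # then write the values back to the row
    rows.updateRow(row)

del row, rows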
For each feature, it uses a USGS id (SourceFeature_ID) to build a url to hit their Instantaneous Values web service.
# get json object
data = json.loads(urllib2.urlopen(r'http://waterservices.usgs.gov/nwis/iv?format=json&site=' + id).read())
This is an example of one of the URLs: http://waterservices.usgs.gov/nwis/iv?format=json&site=09413700. It then uses the json library to parse the data and get the values that we are interested in. These values are used to populate the appropriate fields in our feature class.
def getJsonValue(variableCode, data):
    for ts in data['value']['timeSeries']:
        if ts['variable']['valueType'] == variableCode:
            value = ts['values'][0]['value'][0]['value']
            return value
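A call to this helper might look something like the following; the variable code string and field name here are placeholders rather than values from the actual script:
# hypothetical usage of getJsonValue; '<variable code>' and 'GAGE_HEIGHT'
# are placeholders, not values from the original script
gaugeHeight = getJsonValue('<variable code>', data)
if gaugeHeight is not None:
    row.setValue('GAGE_HEIGHT', gaugeHeight)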
The NOAA data is served up via an rss feed which means xml. The minidom object from the xml.dom library came in handy here for parsing the xml data.
# get noaa data
gaugeID = row.getValue('GuageID')
if gaugeID:
    ndata = minidom.parse(urllib2.urlopen('http://water.weather.gov/ahps2/rss/fcst/' + gaugeID.lower() + '.rss'))
    descriptionText = ndata.getElementsByTagName('description')[2].firstChild.nodeValue
    descriptionList = descriptionText.split('<br />')
    row.setValue('HIGHEST_FORECAST', descriptionList[5].split()[2].strip())
    row.setValue('HIGHEST_FORECAST_DATE', getNOAADate(descriptionList[6].split('Time:')[1].strip()))
    row.setValue('LAST_FORECAST', descriptionList[8].split()[2].strip())
    row.setValue('LAST_FORECAST_DATE', getNOAADate(descriptionList[9].split('Time:')[1].strip()))

So in the end we have one feature class that combines real-time data from multiple sources. You can check out a copy of the script here.
Wednesday, March 16, 2011
ArcPy.Mapping Module Makes Complex PDF Creation Easy
Recently, I was presented with a problem that was a perfect opportunity for trying out ESRI's ArcPy Python site package. We have a series of map documents that are set up with Data Driven Pages to export various maps for each of the counties of Utah. Each mxd had a different theme. The goal was to develop a script that would export all of the Data Driven Pages for each mxd and then combine them by county. After a few hours of work I had 72 lines of code that did just that. Here's what I came up with:
I used only two modules for this script: arcpy.mapping and os (great for working with the file system).
# import modules
import arcpy.mapping, os

# variables
baseFolder = os.getcwd()  # current working directory
outputFolder = baseFolder + r'\PDFs'
The os module was great for deleting the old files and getting a list of the map documents.
# clear out old pdfs
print '\nDeleting old PDFs...'
oldPDFs = os.listdir(outputFolder)
for f in oldPDFs:
    os.remove(outputFolder + '\\' + f)

# get list of all files in the folder
print '\nGetting list of mxds...'
allItems = os.listdir(baseFolder)

# filter out just .mxd's
mxdFileNames = [x for x in allItems if x.endswith('.mxd')]
mxdFileNames.sort()
The DataDrivenPages class was the key class in the ArcPy.Mapping module for this script. It is obtained through the MapDocument class. Here I start to loop through the mxd's and get a reference to the DataDrivenPages object that I am interested in.
# loop through mxds
for name in mxdFileNames:
    print '\nProcessing: ' + name

    # get mxd
    mxd = arcpy.mapping.MapDocument(baseFolder + '\\' + name)

    # get datadrivenpages object
    ddp = mxd.dataDrivenPages
Once I've got the DataDrivenPages object, then I start to loop through all of the pages.
# loop through pages
pg = 1
while pg <= ddp.pageCount:
    # change current page
    ddp.currentPageID = pg

    # get name of current page
    name = ddp.pageRow.getValue('NAME')
    print name
Before I export the page, I check to see if there is already an existing pdf for that particular county. If there is, I export the page to a temp PDF file and then use the PDFDocument.appendPages() method to add it to the existing PDF. If not, then I just export it out to a new PDF.
# check to see if there is already a pdf file created for this county
pdfFile = outputFolder + '\\' + name + '.pdf'
if os.path.exists(pdfFile):
    print 'Existing pdf found. Appending...'

    # open PDF document
    pdf = arcpy.mapping.PDFDocumentOpen(pdfFile)

    # output to temporary file
    tempFile = outputFolder + '\\temp.pdf'
    ddp.exportToPDF(tempFile, 'CURRENT')

    # append to existing file
    pdf.appendPages(tempFile)

    # delete temp file
    os.remove(tempFile)

    # clean up variables
    del pdf
else:
    # file does not exist, export to new file
    print 'No existing pdf found. Exporting to new pdf.'
    ddp.exportToPDF(pdfFile, 'CURRENT')

# increment page number
pg = pg + 1
Then, all that's left is a little clean up.
# clean up variables
del mxd, ddp

raw_input('Done. Press any key to exit...')
And that's it! Here's the entire script and an example output pdf.