Boto Python Amazon EC2

Amazon Web Services API for Python: Boto 2.6.0, 2.9.5, and 2.9.6 on Python 2.7

First, a sample boto Python 2.7 script, tested on Dreamhost with their installed boto 2.6 as well as with boto 2.9.5 and 2.9.6 on Heroku. The Amazon keys are imported from our custom-made cred2 module, which you will create yourself to hold your keys.

#!/usr/bin/python

import boto.ec2
import cred2

conn = boto.ec2.connect_to_region("us-east-1",
    aws_access_key_id=cred2.awsaki(),
    aws_secret_access_key=cred2.awssak())

# conn.start_instances('i-59999x2x')

reservations = conn.get_all_instances()  # one Reservation per launch request
print reservations
for res in reservations:
  for isx in res.instances:
    print isx                            # each Instance in the reservation

Substitute your own keys for the values these Python 2.7 functions return.

Your Amazon keys can be found in your Amazon Web Services account.

#!/usr/bin/python

def awsaki():
  return 'AKXXXXXXXXXXXXXXXXXX'

def awssak():
  return 'vvv999ccc888xxx777zzz666lll555kkk444z/'
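
If you would rather not hard-code keys in a module, a minimal alternative sketch reads them from environment variables instead. The variable names below are the conventional AWS ones, and recent boto releases can also pick them up on their own, so treat this as optional:

#!/usr/bin/python

import os

def awsaki():
  return os.environ['AWS_ACCESS_KEY_ID']

def awssak():
  return os.environ['AWS_SECRET_ACCESS_KEY']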

We used awsaki, short for Amazon Web Services Access Key ID (A.W.S.A.K.I.), which is the best terminology to use in search engines to find information on digging this out of your account.

Google this

Amazon Web Services Access Key ID

We used awssak, short for Amazon Web Services Secret Access Key (A.W.S.S.A.K.), which is good terminology to use in search engines to find information on digging this out of your account.

Google this

Amazon Web Services Secret Access Key

Python 2.7 script to start an AWS instance that is stopped

#!/usr/bin/python

import boto.ec2
import cred2

conn = boto.ec2.connect_to_region("us-east-1",
    aws_access_key_id=cred2.awsaki(),
    aws_secret_access_key=cred2.awssak())

reservations = conn.get_all_instances()
rsx = reservations[0]        # the first (here, the only) reservation
rsx.instances[0].start()     # start its first instance

This works great if your AWS account holds exactly one instance, which is the one you want to start. We aren’t creating an instance here; we are just starting an instance that has been stopped.
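
If the account holds more than one instance, indexing reservations[0] is fragile. A safer sketch starts one specific instance by its ID; the ID below is a placeholder for your own:

#!/usr/bin/python

import boto.ec2
import cred2

conn = boto.ec2.connect_to_region("us-east-1",
    aws_access_key_id=cred2.awsaki(),
    aws_secret_access_key=cred2.awssak())

# 'i-xxxxxxxx' is a placeholder -- substitute your own instance ID
conn.start_instances(instance_ids=['i-xxxxxxxx'])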

Now, a minimal example in Flask that passes boto results to the Jinja template renderer:

@app.route('/aws')
def awshm():
  x = zzcode.botoclass()
  cvx = x.botofunc()
  return render_template('aws.html', navigation=navfunc(), contentVar=cvx)

Here is the class where we use boto:

class botoclass:
  def botofunc(self):
    import boto.ec2
    import credec2
    conn = boto.ec2.connect_to_region("us-east-1",
        aws_access_key_id=credec2.awsaki(),
        aws_secret_access_key=credec2.awssak())
    reservations = conn.get_all_instances()
    cvx = list()
    for rsx in reservations:
      for isx in rsx.instances:
        cvx.append({'reservation': str(isx),
                    'status': str(isx.state),
                    'dns': str(isx.public_dns_name),
                    'ip': str(isx.ip_address)})
    return cvx
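
A quick way to sanity-check the class outside Flask is to run its module directly; this sketch (assuming the class lives in a file you can execute) just prints the rows the template will receive:

if __name__ == '__main__':
  for row in botoclass().botofunc():
    print row['reservation'], row['status'], row['dns'], row['ip']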

In the middle of a template named aws.html in the templates directory is this table:

<TABLE>
<TR><TH>Reservation</TH><TH>Status</TH><TH>Domain</TH><TH>IP</TH></TR>
{% for item in contentVar %}
<TR>
 <TD>{{ item.reservation }}</TD>
 <TD>{{ item.status }}</TD>
 <TD>{{ item.dns }}</TD>
 <TD>{{ item.ip }}</TD>
</TR>
{% endfor %}
</TABLE>

Shortest git client-server setup

Stuff you aren’t going to read:

  • simple git setup server client bash
  • the server is connected to the internet, like your hosting server
  • probably a linux server on the net, like Hostgator or Bluehost or Dreamhost
  • ssh access to that server
  • check for git on Mac or Ubuntu: git --version
  • Winders needs Git Bash; get it from Google
  • Click here for a better beginner’s guide to git, or if you are using GitHub
  • Stay here if you have hosting and want to create a private repository

Four steps (one host & three client)

One step is performed on the host machine of the git repository, and three steps are performed on any satellite client machine for that repository:

  1. First, create a bare, unmanaged repository on the server, which is on the internet, addressable by URL, and reachable over ssh.
  2. Now we have an initialized directory with no code in it yet, and git struggles with empty projects. So clone it and add some programs.
  3. Also on the client machine, edit some files; the method you use to edit the files in your directory (“gitproj”) will vary.
  4. Update the repository, in this case initializing the original project version. Push the first files from your client to start a master branch on origin, and your git will work anywhere!

1. (HOST)

mkdir gitproj
cd gitproj
git init --bare

2. (CLIENT)

git clone ssh://user@host.com/~/gitproj
cd gitproj

3. (CLIENT) Optional

cat >hello.py
print 'hello world'
^D

4. (CLIENT) Update repository on server.

git add hello.py
git commit -m 'hello git'
git push origin master

Voilà – (The End)

We want to do more than this, but first let’s do some simple git client operations.

  • Example 1. Clone on another client and update.
  • Example 2. Edit files and update server repository

1. On a different client, or on the same client after renaming “gitproj” and creating a new, empty directory named “gitproj”:

git clone ssh://user@host.com/~/gitproj
cd gitproj
vi hello.py
git commit -am 'changes to hello world'
git push

This is not a group approach, but if you develop a little on your work desktop, a little on your home desktop, and once in a while on your netbook, like I do, this is a big help for script development.
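
One habit worth adding when you hop between machines like this: pull before you edit, so each clone starts from the server’s latest copy. A minimal sketch in the same style as the steps above (the -am flag stages tracked files as it commits):

cd gitproj
git pull
vi hello.py
git commit -am 'more changes'
git push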

110 Punchdown Trick

Many amateurs have trouble with the tooling to prepare 110 blocks for punchdown.

While the punchdown tool can be found for less than $40, the clip installation tool usually costs over $100, requires multiple blades for the C5, C8, and C10 clips, and is difficult to use.

This video provides a shortcut that allows home wiring enthusiasts implementing Cat6 for video and communications within their homes to install a tree backbone that will support internet, Ethernet, and voice.

110 punchdown

The Most Fascinating CompuServe SEO

Search Engine Measurement

Search Engine Optimization came about as soon as AOL and CompuServe offered search methods for finding websites on the internet.

Digital Equipment Corporation developed a search system called AltaVista, and Yahoo began to gain traction as a website portal. Originally, SEO was a method whereby website designers would influence the SERPs (search engine results pages) by reverse engineering the top pages in the SERPs for a given search term.

Theories about which elements of website design yield high search engine rankings were advanced and tested, factors were measured, and large companies with marketing budgets learned to scientifically measure the promotion of a website for a given word or phrase.

Web 2.0

In the early days, as more and more independent dial-up and then broadband providers began to supplement the AOL and CompuServe market, the prevalence of truly naked portals like Lycos and Yahoo grew.

Further growth of search engines like Infoseek, Go, and AltaVista sparked a new need for alternative means of ranking their indexes of pages. Exotic engines like Direct Hit would measure how long you stayed on the page you clicked and how long it took you to come back, if at all. Google Analytics today attempts to provide similar statistics, such as the average time on site.

Search Engine Optimization today can be completely ignored, as some players for certain keywords can skip the lengthy time to rank a website by using alternate means such as Facebook pages and Facebook advertising. Groupon and other coupon sites will market products and services relentlessly for the right price.

Estwing Hammer

But still, the ultimate measure of importance for established keywords requires that we consider a level of SEO to earn our entitled position in search engines. While not always number one, a search for the term ‘Hammer’ quickly leads us to the Estwing brand, an indestructible and well regarded brand of hammer for construction and general household use, known by every American ever remotely interested in tools.

Similarly, a search for ‘soft drinks’ gets us Coca-Cola® somewhere on the first page of search results, every time. But, how did you send an email to Coca-Cola® back in the day?

dial up networking

CompuServe started with email addresses that were all digits, such as 1234567890 or 09876543. Some early CompuServe email addresses were only 7 characters long. But as CompuServe grew, account numbers were chosen to be 8 digits long, and in case you paid for multiple email addresses, one or two more digits would be added for each email address assigned to your account. CompuServe thought they were covered with at least tens of millions of available account numbers.

CompuServe started using alphanumeric email addresses in the early 1990s, and they eventually phased out all-digit addresses altogether. Some famous people had old CompuServe email accounts that were all digits.

One famous person said to have had an old CompuServe email account is Bill Gates; his account number is reported as `70707070`. Gates was a member of CompuServe from the early days of the service, using it to stay connected with other computer users and to access information about the internet.

Steve Jobs is another famous person said to have had an old CompuServe account, reportedly `12345678`, which he likewise used to stay in touch with other computer users in the service’s early days.

Fundamental local marketing plan

Almost every web presence we look at lacks the fundamentals that connect customers with your business. This plan covers the repair and enhancement of the local attributes of your Google, Yahoo, etc. marketing, and in most cases it is required before we proceed to more enhanced techniques. Customers who purchase this option should have an established business and an implemented website. Skipping this plan is like building your marketing on a quicksand foundation. Don’t risk your attempts to gain search engine ranking!

The SEO basis localization package provides top visibility for your business in the local listings of Yahoo, Bing, Ask, AOL, and Google, as well as some mundane backlink promotion. You should have a URL or domain name, a main phone number, and a location address before purchasing this option. Google, with its extensive mapping of IP addresses to geographic locations, gives you the opportunity to pick the low-hanging fruit. Whether a dry cleaner or a gutter hanger, whether customers come to you or you go to them, the reputable business owner needs to put the first business marketing effort toward his neighbors.

Cost – $2,400, or a $600 down payment and 6 monthly payments of $300

How to get started – Fill out our contact form with your URL, business email, business phone, business fax and business address, and mail the check!

The objective is improved natural organic ranking, promoting traffic to your website and telephone. This program utilizes keyword analysis, standards compliance, and quality metatags. Bing has strict requirements for metatags; Google and Yahoo prefer W3C compliance. This program is available for both XHTML 1.0 and HTML 4.0 index.html home pages. It requires completion of either the analytics or basis package with MSinc. optimization products.

Cyber 180 mainframe computer

The Cyber 175 mainframe computer was built by Control Data Corporation (CDC) in the 1970s. CDC was a major player in the computer industry at that time, and the Cyber series of computers were some of their most successful products.

DECwriter
DECwriter II LA36

With its PPU architecture, the Cyber 175 et al. ushered in the use of DECwriters and CRT terminals in high-speed networks in the early to mid 1970s.

CDC did indeed continue to build bigger and newer models of the Cyber series of computers. The Cyber 180, 185, 190, and 200 were all released in the following years, with increasing processing power and other improvements over the Cyber 175. CDC also developed other lines of computers during this time, such as the CDC 6000 series and the CDC STAR. However, CDC eventually lost market share to other companies, and by the mid-1980s the company was near bankruptcy.

Cyber 840A CPU

The Control Data Corporation (CDC) 6000 series of computers was a line of supercomputers that were introduced in the early 1960s. The series consisted of several models, including:

  1. CDC 6600: The first model in the series, introduced in 1964, was the world’s fastest computer of its time and was known for its innovative design, which included multiple parallel functional units and peripheral processors.
  2. CDC 6400: Introduced in 1966, the CDC 6400 was a smaller, less expensive version of the CDC 6600. It was also less powerful but still provided high performance computing capabilities.
  3. CDC 6500: Introduced in 1967, the CDC 6500 was another variant of the CDC 6600 that was designed for scientific and engineering applications.
  4. CDC 6700: Introduced in 1970, the CDC 6700 was a more powerful version of the CDC 6600, with faster processors and more memory.
  5. CDC 6800: Planned in the late 1960s as a successor to the CDC 6600, the 6800 was never delivered; the design effort instead evolved into the CDC 7600.

Clam Shell CRT
Lear Siegler CRT

Overall, the CDC 6000 series of computers was very influential in the development of supercomputers, and many of the design concepts and technologies used in these machines are still in use today.

Universities purchased Control Data Corporation (CDC) Cyber 190 computers when they were released in the 1970s. The Cyber 190 was a mid-range mainframe computer that was designed for scientific and engineering applications, and it was popular among universities and research institutions.

Some of the universities that purchased Cyber 190 computers include:

  1. University of Wisconsin-Madison: purchased a Cyber 190 computer in the late 1970s for use in its computer science and engineering programs.
  2. University of Michigan: purchased a Cyber 190 in the late 1970s for use in its research programs.
  3. University of California, Berkeley: purchased a Cyber 190 in the 1970s.
  4. University of Illinois at Urbana-Champaign: purchased a Cyber 190 for use in its research programs.

These are just a few examples, and many other universities and research institutions also purchased Cyber 190 computers during this time.

Stanford University also purchased Control Data Corporation (CDC) Cyber series mainframe computers. Stanford was one of the many universities and research institutions that used CDC mainframes in the 1960s, 1970s, and 1980s.

In the early 1970s, Stanford purchased a CDC Cyber 74 mainframe computer, which was a mid-range system that was designed for scientific and engineering applications. The Cyber 74 was a popular model among universities and research institutions, and it was known for its reliable performance and ease of use.

Later on, in the late 1970s and early 1980s, Stanford upgraded to a CDC Cyber 180 mainframe computer, which was a more powerful system than the Cyber 74. The Cyber 180 was also used for scientific and engineering applications, and it was particularly well-suited for large-scale simulations and data analysis.

Overall, the CDC Cyber series mainframe computers were widely used in academia and research during this time, and they played a significant role in advancing scientific knowledge and technological innovation.

The Control Data Corporation (CDC) Cyber 180 mainframe computer could be equipped with up to 12 Peripheral Processor Units (PPUs).

The PPUs handled input/output and much of the operating system; each PPU was a small, independent processor with its own memory. The Cyber 180’s Peripheral Processor Unit (PPU) architecture allowed multiple PPUs to service devices and move data in parallel while the central processor concentrated on computation.

The system could be scaled by adding PPUs; each PPU added more I/O capacity, which allowed the Cyber 180 to handle larger and more complex configurations.

In addition to the PPUs, the Cyber 180 could also be equipped with a variety of peripherals, including disk drives, tape drives, and communication interfaces. These peripherals allowed the system to store and retrieve large amounts of data, as well as communicate with other computer systems and devices.

The Control Data Corporation (CDC) Cyber 180 mainframe computer originally shipped with the NOS/BE operating system. NOS/BE stands for Network Operating System/Batch Environment, and it was one of the most widely used operating systems on CDC mainframes in the 1970s and 1980s.

NOS/BE was a batch processing operating system, which means that users submitted jobs to the system to be processed in batches rather than interacting with the system in real-time. NOS/BE provided a variety of features and utilities to support batch processing, including job scheduling, spooling, and file management.

CDC also offered other operating systems for its mainframes, including NOS/VE (Virtual Environment), a multi-tasking, virtual-memory system designed to support interactive computing, and the earlier SCOPE (Supervisory Control Of Program Execution), a batch system that had served the 6000 series.

Overall, the Cyber 180 was a powerful and versatile mainframe computer that could support a variety of operating systems and software applications.

The Control Data Corporation (CDC) Cyber 180 mainframe computer had a maximum main memory capacity of 4 megabytes (MB).

Earlier CDC machines used magnetic core memory, a technology common in mainframes of the 1960s and 1970s that stored data on small magnetic rings and was relatively fast and reliable for its day; by the Cyber 180 era, CDC had moved to semiconductor memory.

The 4 MB maximum memory capacity of the Cyber 180 may seem small by today’s standards, but it was actually quite large for a mainframe computer of its time. Many other mainframes of the era had memory capacities that were measured in kilobytes (KB) rather than megabytes.

In addition to main memory, the Cyber 180 also supported a variety of secondary storage devices, including disk drives, tape drives, and card readers/punches. These devices allowed the system to store and retrieve large amounts of data, which was essential for scientific and engineering applications that required extensive data processing and analysis.

The Control Data Corporation (CDC) Cyber 180 mainframe computer used a 60-bit word in its Cyber 170 compatibility mode; its native 180 mode used 64-bit words.

The Cyber 180’s central processor carried the 60-bit word forward from the earlier 6000 and Cyber 170 series, a word size larger than the 36 bits used in many other mainframe computers of the time, while the separate Peripheral Processor Units handled input/output.

The 60-bit word size allowed the Cyber 180 to process larger amounts of data in a single operation, which was particularly useful for scientific and engineering applications that required complex calculations and data analysis. The larger word size also provided more precision for floating-point arithmetic, which was important for many scientific and engineering calculations.

The PPU architecture allowed multiple PPUs to work in parallel, which further increased the system’s throughput. Each PPU was a small processor with its own memory, and the PPUs exchanged data with the central processor through central memory. This let the system move large amounts of data in parallel, which was essential for many scientific and engineering applications.

ERP and Accounting s/w License Agreements

Free trials.  Both xTuple and OpenBravo are open source software, and both allow free downloads for Windows computers to evaluate the software.  Consultants with Linux and SQL knowledge can even implement multi-user solutions in the free and open source format.  Licensing for commercial use should be considered, to help defray development costs for the publishers.

Seats vs. users: most software for networks should be controlled by user; i.e., my use of the software allows me to do anything, including adding and deleting other users.  The new person answering the phones and entering vendor invoices, already coded and approved with G/L distribution, does not have permission to add more users or cut checks.

With QuickBooks, when you have three users, you have three seats.  You may not install the software on twenty different computers and simply use only three at once; that is a violation of their licensing terms.

With the packages we review, you may have the software installed on those twenty different computers, all linked to the database server.  But only the allowed and licensed number of users may run concurrently, i.e., be logged in and entering data or inquiring at precisely the same time.

Compared to QuickBooks, OpenBravo is quite expensive.  QuickBooks, for three seats, costs around $687 per year and includes 60 days of support with each annual upgrade; with discounts it can be even less, and support can be purchased separately.  However, for five concurrent users the price jumps to $2,000 according to their website, and I have found their technical people reluctant to tout this configuration.  They also offer an Enterprise version, which costs around one and a half times more but offers a more robust, SQL-styled database engine and supports much higher numbers of seats.

Five Seats – Good for Eight to 12 Users

                      OpenBravo    xTuple
  Year One            $5,000       $4,500
  Per Year            $5,000       $1,850
  Add Seats, Annual   $1,000       $162

Key Software Features

                OpenBravo              xTuple
  Servers       None, Windows, Linux   None, Windows, Linux
  Database      SQL Server, MySQL      PostgreSQL
  Concurrency   Unlimited users        Unlimited users
  Full Demo     Free, single seat      Free, single seat
  Clients       Windows, Linux         Windows, Mac

From xTuple: Software maintenance – access to all upgrades, major and minor – is included in the annual license, as is direct helpdesk support from xTuple if you have 15 or more licensed users.  The annual license is like a subscription: you are licensed to use the software as long as you are current on your payments (either annual or monthly), and if you’re more than 30 days late you could be in default of your license and support agreements.  The perpetual license is a one-time purchase of the then-current version of the software.  Software maintenance is priced separately – currently at 18% of the current per-user license cost – and is required for at least the first year.  Direct support from xTuple is another 7% on top of the maintenance (which is a prerequisite for support).
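
To make the perpetual-license math concrete with a hypothetical figure (the $400 per-user price here is our illustration, not xTuple’s published pricing): maintenance at 18% would run 0.18 × $400 = $72 per user per year, and direct support at another 7% would add 0.07 × $400 = $28, for roughly $100 per user per year on top of the one-time license cost.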

ERP and Material Requirements Planning

Enterprise Resource Planning systems and Customer Relationship Management systems, also known as ERP and CRM, are valuable tools for companies wishing to use computing efficiencies to streamline operations. ERP systems focus on accounting, inventory, and production; CRM is a tool to help customer service, sales, and prospecting.

While proprietary software systems such as QuickBooks® exist that cover part of the ERP spectrum, and additional modules have been made available, more and more companies are looking at open-source software systems based on Linux servers. While companies quite regularly surrender to the Windows environment for the desktop needs of employees, backroom server operations still allow IT departments to avoid single-vendor dependence by using powerful Linux-based servers and databases to form the backbone of their information processing operations.

While many larger companies need Active Directory (AD) functionality, the core Windows flavor of the AD server environment forces savvy IT managers to look for backroom solutions that are not susceptible to a Windows kernel worm or virus attack. Additionally, in the Unix community, the carrier-grade projects have produced up-time records that will never be met by proprietary server operating systems. The memory leaks of certain proprietary systems lead to gradual degradation per unit of up-time and still persist in that environment.

Open Source software solutions for ERP and CRM are available that base the back room operations on the reliability, scalability, and versatility of Linux servers while offering client computing on web browsers, Windows XP, Vista, and Windows 7, as well as Apple products. Packages such as xTuple, Compiere, and OpenBravo use a variety of powerful technologies, and their applications are growing in quality and dependability as quickly as Linux variants such as Ubuntu have grown on the desktop and laptop.

Smart and careful participants should keep watch for Open Source ERP and CRM for Atlanta companies.

Does WordPress require jQuery?

Yes and no, but mostly yes. WordPress ships with jQuery, a JavaScript library that simplifies common JavaScript tasks, in its core code, and many plugins and themes use it. If you’re developing a plugin or theme for WordPress, you can be pretty sure that jQuery is already available and you can use it in your code.

However, there are some cases where you may need to load a specific version of jQuery, or choose not to use it at all. In these cases, you can enqueue a custom version of jQuery or dequeue the default version provided by WordPress. This can be done by adding some code to your theme’s functions.php file or in a plugin. It’s important to understand the implications of modifying the default jQuery implementation in WordPress and to test your code thoroughly before deploying it to a live site.
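
As a hedged sketch of the second case, here is what swapping in a pinned jQuery might look like in a theme’s functions.php. The function name, CDN URL, and version are placeholders of ours, not anything WordPress dictates, and this uses the deregister/re-register variant of the dequeue idea:

<?php
// Hypothetical functions.php sketch: swap the bundled jQuery for a
// pinned CDN copy. The wp_enqueue_scripts hook targets the front end,
// so wp-admin keeps its stock copy. URL and version are placeholders.
function mytheme_swap_jquery() {
  wp_deregister_script( 'jquery' );
  wp_register_script( 'jquery',
    'https://code.jquery.com/jquery-3.7.1.min.js', array(), '3.7.1' );
  wp_enqueue_script( 'jquery' );
}
add_action( 'wp_enqueue_scripts', 'mytheme_swap_jquery' );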