Wednesday, June 14, 2017

Minecraft

I have heard about it, my granddaughter plays, and so do my friends...........
Resources on how to get started:
http://www.wikihow.com/Play-Minecraft
https://www.howtogeek.com/school/htg-guide-to-minecraft/lesson1/all/

Youtube videos:
https://www.youtube.com/watch?v=DhncITqVbqE
https://www.youtube.com/watch?v=BEH_fMgRNrc

What's the difference between buying and running the game, or playing on a server??

Living Computers Museum and Labs

http://www.livingcomputers.org/

2245 First Ave S
Seattle WA 98134
206-342-2020

I want to go!
Maybe this August!!

Living Computers Minecraft server
http://mc.livingcomputers.org/

Xerox Alto

Anyone interested in the history of personal computing will surely have heard of the Xerox Alto, but when’s the last time you got to play with one? It’s been a while even for Paul Allen — long enough that he decided to have a couple restored at the Living Computer Museum in Seattle.

https://techcrunch.com/2016/08/02/living-computer-museum-restores-xerox-alto-and-debuts-new-emulator/

SALTO, the Xerox Alto Simulator
http://toastytech.com/guis/salto.html

Tuesday, April 18, 2017

Classroom Doors

I had this idea back in 2010 that classrooms should have monitors embedded in them with IP addresses that could be accessed from any administration computer for classroom announcements, such as a canceled class or a professor caught in traffic.  A simple phone call to the administrative assistant and your message could be posted on the monitor on your classroom door!

These are just card readers........ but it's on the right track..... If a refrigerator can get on the Internet, a classroom door should be able to also.  And they have video doorbells now too.
IP Door Access Control Systems are easy to install and provide excellent flexibility, yet many people who are used to the older type systems have concerns about using them.  This article tries to take the mystery out of network attached door access control systems, and describes how very easy they are to install and use.  You will find that all the signals that were used before are encoded on the standard Ethernet network.
From:  http://portal.isonas.com/news-education/how-to-install-an-ip-door-access-control/

List of "something" as a service!

1) AaaS - Analytics as a Service
On demand analytics services. Web-based analytics applications.

2) BaaS - Business as a Service
Easily "virtualize" your business.

Business Intelligence As A Service

Backup as a Service
Backup as a Service offers companies the ability to do their backups to the cloud.

3) CaaS - Computing as a Service
Verizon offers computing as a service, addressing Security, Reliability, and Control.

Communications as a Service
Gartner defines CaaS as IP telephony that is located within a third-party data center and managed and owned by a third party.

4) DaaS - Data as a Service
DaaS is based on the concept that the product, data in this case, can be provided on demand.

Desktop as a Service
Outsourcing of a virtual desktop infrastructure (VDI) to a third party service provider.

5) EaaS - Everything as a Service
From simple office documents to extensive customer service management, EaaS vendors could provide the tools through the cloud (internet).

6) FaaS - Fraud as a Service
When an online fraudulent activity such as trojans is provided as a service.

7) GaaS - Gaming as a Service
Gamers who may wish to either complement their existing gaming console or replace it altogether with a Gaming-as-a-Service (GaaS) option.

8) HaaS - Hardware as a Service
A service provision model for hardware that is defined differently in managed services and grid computing contexts.

9) IaaS - Infrastructure as a Service
One of the earliest and most common forms of -aaS.
Provides hardware infrastructure as a service, hence in the early days Hardware-as-a-Service (HaaS) was used interchangeably.

10) JaaS - Jobs as a Service
Jobs as a Service for an effective job hunting and decision making

11) KaaS - Knowledge as a Service
An on-demand, bite-sized, where-needed approach to skill acquisition.

12) LaaS - Logging as a Service
Log management in the cloud

13) MaaS - Mashups as a Service
When SOA meets web-2.0.
-- Monitoring as a Service
-- Metal as a Service (servers)

14) NaaS - Network as a Service
Brings the SaaS (software as a service) web 2.0 model to mobile operators and the intelligence stored within their networks.

15) OaaS - Oracle as a Service
Database Consolidation

16) PaaS - Platform as a Service
Something that sits between software as a service and infrastructure as a service (SaaS and IaaS).
The source article's author provides the WSO2 Carbon middleware Platform as a Service over the cloud and had already discussed it several times on that blog, hence no further discussion now. :)

17) QaaS - Query as a Service

Quality as a Service

18) RaaS - Recovery as a Service
Allows you to keep and maintain ongoing backups and data recovery files.

Routing as a Service

Replication as a Service
Provides replication through the cloud infrastructure as a Service

19) SaaS - Software as a Service
Applications in the cloud. Along with infrastructure as a service, this was among the first of the as-a-service offerings.
-- Storage as a Service

20) TaaS - Testing as a Service
Many organizations including Wipro and Qutesys provide testing and quality assurance using the pay-as-you-go model.

21) UaaS - Utilities as a Service

22) VaaS - Virtualization as a Service
VaaS offers low cost virtual servers using enterprise class hardware in secured datacenters.

23) WaaS - Wireless as a Service
Cisco Virtual Wide Area Application Services: Cloud-Ready WAN Optimization Solution

24) XaaS - Anything as a Service. Mostly used as an umbrella term (XaaS) to refer to all of the as-a-service terminology; pronounced "zass".

25) YaaS - You-as-a-Service

26) ZaaS - Zebra as a Service 

From: https://dzone.com/articles/life-full-aas-aka-list-aas

Wednesday, December 7, 2016

Prime Numbers

You just never know when knowledge is going to rise up and smack you in the face!
I have been doing research on computers, and this popped up!
I have a headache from trying to wrap my head around this concept......

So, a prime number is a whole number greater than 1, whose only two whole-number factors are 1 and itself.  The first few prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, and 29.

As we proceed in the set of natural numbers N = {1, 2, 3, ...}, the primes become less and less frequent in general.  However, there is no largest prime number.  For every prime number p, there exists a prime number p' such that p' is greater than p.  This was demonstrated in ancient times by the Greek mathematician Euclid.

A computer can be used to test extremely large numbers to see if they are prime.  But, because there is no limit to how large a natural number can be, there is always a point where testing in this manner becomes too great a task even for the most powerful supercomputers.
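
Just to make that concrete, here is a tiny C sketch of my own (nothing like what the real searches use) that tests primality by trial division. It tries every odd divisor up to the square root, which is exactly why this brute-force approach falls apart once the numbers get huge:

#include <stdio.h>

/* Trial division: returns 1 if n is prime, 0 otherwise.
   Fine for small numbers, hopeless for numbers with millions of digits. */
static int is_prime(unsigned long long n)
{
    if (n < 2) return 0;
    if (n % 2 == 0) return n == 2;
    for (unsigned long long d = 3; d * d <= n; d += 2)
        if (n % d == 0) return 0;
    return 1;
}

int main(void)
{
    for (unsigned long long n = 2; n <= 30; n++)
        if (is_prime(n))
            printf("%llu ", n);   /* prints 2 3 5 7 11 13 17 19 23 29 */
    printf("\n");
    return 0;
}

For a number with millions of digits, that loop would never finish in any useful amount of time, which is why specialized algorithms exist.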

Various algorithms have been formulated in an attempt to generate ever-larger prime numbers.  These schemes all have limitations.
A Mersenne prime must be reducible to the form 2^n - 1, where n is a prime number. The first few known values of n that produce Mersenne primes are n = 2, 3, 5, 7, 13, 17, 19, 31, 61, and 89.
A Fermat prime is a Fermat number that is also a prime number. The nth Fermat number F(n) is of the form 2^m + 1, where m is the nth power of 2 (that is, m = 2^n, where n is an integer).

Does this make ANY sense????
Not to me.....

A Mersenne (also spelled Marsenne) prime is a specific type of prime number. It must be reducible to the form 2^n - 1, where n is a prime number. The term comes from the surname of a French monk who first defined it. The first few known values of n that produce Mersenne primes are n = 2, 3, 5, 7, 13, 17, 19, 31, 61, and 89.
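
There is a classic shortcut for testing Mersenne numbers, the Lucas-Lehmer test, which is far faster than general-purpose primality testing. Here is a small C sketch of my own (not anyone's production code) that runs it on the small exponents above; I keep p at 31 or below so the arithmetic fits in 64 bits, whereas the real searches work with numbers millions of digits long:

#include <stdio.h>
#include <stdint.h>

/* Lucas-Lehmer test for M(p) = 2^p - 1, where p is an odd prime.
   Limited to p <= 31 so the squaring step fits in 64-bit arithmetic. */
static int lucas_lehmer(unsigned p)
{
    uint64_t m = (1ULL << p) - 1;        /* the Mersenne number 2^p - 1 */
    uint64_t s = 4;
    for (unsigned i = 0; i < p - 2; i++)
        s = (s * s + m - 2) % m;         /* s -> s^2 - 2 (mod m), without underflow */
    return s == 0;                       /* M(p) is prime iff s ends at 0 */
}

int main(void)
{
    unsigned exps[] = {3, 5, 7, 11, 13, 17, 19, 23, 29, 31};
    for (unsigned i = 0; i < sizeof exps / sizeof exps[0]; i++)
        printf("2^%u - 1 is %s\n", exps[i],
               lucas_lehmer(exps[i]) ? "a Mersenne prime" : "composite");
    return 0;
}

Note that 11, 23, and 29 are prime exponents that do not give Mersenne primes, which is why they are missing from the list above.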

With the advent of computers to perform number-crunching tasks formerly done by humans, ever-larger Mersenne primes (and primes in general) have been found. The quest to find prime numbers is akin to other numerical searches done by computers. Examples are the decimal expansions of irrational numbers such as pi (the circumference-to-diameter ratio of a circle) or e (the natural logarithm base). But the 'next' prime is more difficult to find than the 'next' digit in the expansion of an irrational number.

It takes the most powerful computer a long time to check a large number to determine if it is prime, and an even longer time to determine if it is a Mersenne prime. For this reason, Mersenne primes are of particular interest to developers of strong encryption methods.

In August 2008, Edson Smith, a system administrator at UCLA, found the largest prime number known to that date. Smith had installed software for the Great Internet Mersenne Prime Search (Gimps), a volunteer-based distributed computing project.  The number (which is a Mersenne prime) is 12,978,189 digits long. It would take nearly two-and-a-half months to write out and, if printed, would stretch out for 30 miles.

Using computers, mathematicians have not yet found any Fermat primes for n greater than 4. So far, Fermat's original hypothesis seems to have been wrong. The search continues for Fermat numbers F n that are prime when n is greater than 4.
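
To see why the original hypothesis fell apart, here is a short C sketch of my own that prints the first few Fermat numbers and checks Euler's famous counterexample: F(5) = 4,294,967,297 = 641 x 6,700,417, so it is not prime.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The first few Fermat numbers F(n) = 2^(2^n) + 1. */
    for (unsigned n = 0; n <= 5; n++) {
        uint64_t f = (1ULL << (1U << n)) + 1;
        printf("F(%u) = %llu\n", n, (unsigned long long)f);
    }

    /* Euler's factorization of F(5): 641 * 6700417 = 4294967297. */
    uint64_t f5 = (1ULL << 32) + 1;
    printf("F(5) %% 641 = %llu\n", (unsigned long long)(f5 % 641));   /* prints 0 */
    return 0;
}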

Computing History

Just some links of interest
Bytes
https://en.wikipedia.org/wiki/Byte

Wikipedia "History of Computing"
https://en.wikipedia.org/wiki/History_of_computing

History of Computing from George Mason University Technology
http://mason.gmu.edu/~montecin/computer-hist-web.htm

Computer History
http://www.computerhistory.org/timeline/computers/

Computer History from Computer Hope
http://www.computerhope.com/history/


ENIAC 1946 -- the first fully functional digital computer

Bytes were not always 8 bits

There were machines, once upon a time, using other word sizes, but today for non-eight-bitness you must look to museum pieces, specialized chips for embedded applications, and DSPs. How did the byte evolve out of the chaos and creativity of the early days of computer design?
I can imagine that fewer bits would be ineffective for handling enough data to make computing feasible, while too many would have led to expensive hardware. Were other influences in play? Why did these forces balance out to eight bits?
(BTW, if I could time travel, I'd go back to when the "byte" was declared to be 8 bits, and convince everyone to make it 12 bits, bribing them with some early 21st Century trinkets.)
http://softwareengineering.stackexchange.com/questions/120126/what-is-the-history-of-why-bytes-are-eight-bits

Historically, bytes haven't always been 8-bit in size (for that matter, computers don't have to be binary either, but non-binary computing has seen much less action in practice). It is for this reason that IETF and ISO standards often use the term octet - they don't use byte because they don't want to assume it means 8-bits when it doesn't.

Indeed, when byte was coined it was defined as a 1-6 bit unit. Byte-sizes in use throughout history include 7, 9, 36 and machines with variable-sized bytes.


8 was a mixture of commercial success, it being a convenient enough number for the people thinking about it (which would have fed into each other) and no doubt other reasons I'm completely ignorant of.

The ASCII standard you mention assumes a 7-bit byte, and was based on earlier 6-bit communication standards.

Edit: It may be worth adding to this, as some are insisting that those saying bytes are always octets, are confusing bytes with words.

An octet is a name given to a unit of 8 bits (from the Latin for eight). If you are using a computer (or at a higher abstraction level, a programming language) where bytes are 8-bit, then this is easy to do, otherwise you need some conversion code (or coversion in hardware). The concept of octet comes up more in networking standards than in local computing, because in being architecture-neutral it allows for the creation of standards that can be used in communicating between machines with different byte sizes, hence its use in IETF and ISO standards (incidentally, ISO/IEC 10646 uses octet where the Unicode Standard uses byte for what is essentially - with some minor extra restrictions on the latter part - the same standard, though the Unicode Standard does detail that they mean octet by byte even though bytes may be different sizes on different machines). The concept of octet exists precisely because 8-bit bytes are common (hence the choice of using them as the basis of such standards) but not universal (hence the need for another word to avoid ambiguity).

Historically, a byte was the size used to store a character, a matter which in turn builds on practices, standards and de-facto standards which pre-date computers used for telex and other communication methods, starting perhaps with Baudot in 1870 (I don't know of any earlier, but am open to corrections).

This is reflected by the fact that in C and C++ the unit for storing a byte is called char whose size in bits is defined by CHAR_BIT in the standard limits.h header. Different machines would use 5,6,7,8,9 or more bits to define a character. These days of course we define characters as 21-bit and use different encodings to store them in 8-, 16- or 32-bit units, (and non-Unicode authorised ways like UTF-7 for other sizes) but historically that was the way it was.
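
(A quick aside of my own, not part of the quoted answer: a tiny C program shows what limits.h says about the byte size on whatever machine you compile it on.)

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a char (C's "byte");
       on today's mainstream hardware this prints 8. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("sizeof(char) = %zu (always 1 by definition)\n", sizeof(char));
    return 0;
}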

In languages which aim to be more consistent across machines, rather than reflecting the machine architecture, byte tends to be fixed in the language, and these days this generally means it is defined in the language as 8-bit. Given the point in history when they were made, and that most machines now have 8-bit bytes, the distinction is largely moot, though it's not impossible to implement a compiler, run-time, etc. for such languages on machines with different sized bytes, just not as easy.

A word is the "natural" size for a given computer. This is less clearly defined, because it affects a few overlapping concerns that would generally coïncide, but might not. Most registers on a machine will be this size, but some might not. The largest address size would typically be a word, though this may not be the case (the Z80 had an 8-bit byte and a 1-byte word, but allowed some doubling of registers to give some 16-bit support including 16-bit addressing).

Again we see here a difference between C and C++, where int is defined in terms of word-size and long is defined to take advantage of a processor which has a "long word" concept should such exist, though possibly being identical in a given case to int. The minimum and maximum values are again in the limits.h header. (Indeed, as time has gone on, int may be defined as smaller than the natural word-size, as a combination of consistency with what is common elsewhere, reduction in memory usage for an array of ints, and probably other concerns I don't know of.)
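
(Another aside of my own: the same limits.h header shows how int and long actually come out wherever you compile. On a typical 64-bit Linux build int is 4 bytes and long is 8, while on 64-bit Windows long is only 4, which is exactly the kind of machine-to-machine difference described above.)

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("sizeof(int)  = %zu bytes, INT_MAX  = %d\n",  sizeof(int),  INT_MAX);
    printf("sizeof(long) = %zu bytes, LONG_MAX = %ld\n", sizeof(long), LONG_MAX);
    return 0;
}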

Java and .NET languages take the approach of defining int and long as fixed across all architectures, and making dealing with the differences an issue for the runtime (particularly the JITter) to deal with. Notably though, even in .NET the size of a pointer (in unsafe code) will vary depending on architecture to be the underlying word size, rather than a language-imposed word size.

Hence, octet, byte and word are all very independent of each other, despite the relationship of octet == byte and word being a whole number of bytes (and a whole binary-round number like 2, 4, 8 etc.) being common today.

A lot of really early work was done with 5-bit Baudot codes, but those quickly became quite limiting (only 32 possible characters, so basically only upper-case letters and a few punctuation marks, but not enough "space" for digits).

From there, quite a few machines went to 6-bit characters. This was still pretty inadequate though -- if you wanted upper- and lower-case (English) letters and digits, that left only two more characters for punctuation, so most still had only one case of letters in a character set.

ASCII defined a 7-bit character set. That was "good enough" for a lot of uses for a long time, and has formed the basis of most newer character sets as well (ISO 646, ISO 8859, Unicode, ISO 10646, etc.)

Binary computers motivate designers to make sizes powers of two. Since the "standard" character set required 7 bits anyway, it wasn't much of a stretch to add one more bit to get a power of 2 (and by then, storage was becoming cheap enough that "wasting" a bit for most characters was more acceptable as well).

Since then, character sets have moved to 16 and 32 bits, but most mainstream computers are largely based on the original IBM PC (a design that'll be 30 years old within the next few months). Then again, enough of the market is sufficiently satisfied with 8-bit characters that even if the PC hadn't come to its current level of dominance, I'm not sure everybody would do everything with larger characters anyway.

I should also add that the market has changed quite a bit. In the current market, the character size is defined less by the hardware than the software. Windows, Java, etc., moved to 16-bit characters long ago.

Now, the hindrance in supporting 16- or 32-bit characters is only minimally from the difficulties inherent in 16- or 32-bit characters themselves, and largely from the difficulty of supporting i18n in general. In ASCII (for example) detecting whether a letter is upper or lower case, or converting between the two, is incredibly trivial. In full Unicode/ISO 10646, it's basically indescribably complex (to the point that the standards don't even try -- they give tables, not descriptions). Then you add in the fact that for some languages/character sets, even the basic idea of upper/lower case doesn't apply. Then you add in the fact that even displaying characters in some of those is much more complex still.

That's all sufficiently complex that the vast majority of software doesn't even try. The situation is slowly improving, but slowly is the operative word.

Take a look at the Wikipedia page on 8-bit architecture. Although character sets could have been 5-, 6-, then 7-bit, the underlying CPU/memory bus architecture always used powers of 2. The very first microprocessor (around the early 1970s) had a 4-bit bus, which means one instruction could move 4 bits of data between external memory and the CPU.
Then with the release of the 8080 processor, 8-bit architecture became popular, and that's what gave the beginnings of the x86 assembly instruction set which is used even to this day. If I had to guess, byte came from these early processors where the mainstream public began accepting and playing with PCs, and 8 bits was considered the standard size of a single unit of data.
Since then bus size has been doubling, but it always remained a power of 2 (i.e. 16-, 32- and now 64-bits). Actually, I'm sure the internals of today's bus are much more complicated than simply 64 parallel wires, but current mainstream CPU architecture is 64-bits.
I would assume that by always doubling (instead of growing 50%) it was easier to make new hardware that coexists with existing applications and other legacy components. So for example when they went from 8 bits to 16, each instruction could now move 2 bytes instead of 1, so you save yourself one clock cycle but the end result is the same. However, if you went from 8 to a 12-bit architecture, you'd end up breaking the original data into halves, and managing that could become annoying. These are just guesses, I'm not really a hardware expert.

What is Splunk?

 What is Splunk and How Does it Work?
By Helge Klein on September 3, 2014 in Splunk, uberAgent

https://helgeklein.com/blog/2014/09/splunk-work/

You have probably heard of Splunk, but can you describe what it does to a colleague in a few sentences? That is not easy. Splunk does not belong in any traditional category but stands apart from the crowd. That makes it interesting, but it also makes explaining it harder. Here is my attempt.
Google for Logfiles

What do you do when you need information about the state of a machine or software? You look at its logfiles. They tell you the state it is in and what happened recently. Great.

What do you do when you need information about the state of all devices in your data center? Looking at all logfiles would be the right answer if it was possible in any practical amount of time. This is where Splunk comes in.

Splunk started out as a kind of “Google for Logfiles”. It does a lot more today but log processing is still at the product’s core. It stores all your logs and provides very fast search capabilities roughly in the same way Google does for the internet.
Search Processing Language

Although you can just use simple search terms, e.g. a username, and see how often that turns up in a given time period, Splunk's Search Processing Language (SPL) offers a lot more. SPL is an extremely powerful tool for sifting through vast amounts of data and performing statistical operations on what is relevant in a specific context. Think SQL on steroids. And then some.

For example you might want to know which applications are the slowest to start up, making the end user wait the longest. The following search answers that. First the relevant data is selected by specifying a so-called sourcetype (“ProcessStartup”). The result of this sub-command is piped (“|”) to another command that groups the data by application (“by Name”), calculates the average for each group (“avg(StartupTimeMs)”) and charts the results’ distribution over time (“timechart”):

index=uberagent sourcetype=uberAgent:Process:ProcessStartup | timechart avg(StartupTimeMs) by Name

Reading the above you might wonder how Splunk knows about the duration of application starts. And you are right: by itself it does not know anything. But it can receive data from a variety of sources: all kinds of log files, Windows event logs, Syslog, SNMP, to name a few. If the data you need cannot be found in any log you can write a script and direct Splunk to digest its output. If that still is not enough you should check Splunk’s App Directory for an add-on that collects the necessary data. In the example above the data was generated by uberAgent, our Windows monitoring agent. uberAgent runs on the monitored endpoints independently of Splunk and sends the data it collects to Splunk for storage and further processing.

Splunk apps can be data inputs, but they can also contain dashboards that visualize what has been indexed by Splunk. In case of uberAgent both types are used: the actual agent acts as a data input while the dashboard app presents the collected data to the user. The former runs on the monitored Windows machines, the latter on your Splunk server(s).
Index, (no) Schema, Events

When first hearing about Splunk some think “database”. But that is a misconception. Where a database requires you to define tables and fields before you can store data Splunk accepts almost anything immediately after installation. In other words, Splunk does not have a fixed schema. Instead, it performs field extraction at search time. Many log formats are recognized automatically, everything else can be specified in configuration files or right in the search expression.

This approach allows for great flexibility. Just as Google crawls any web page without knowing anything about a site’s layout, Splunk indexes any kind of machine data that can be represented as text.

During the indexing phase, when Splunk processes incoming data and prepares it for storage, the indexer makes one significant modification: it chops up the stream of characters into individual events. Events typically correspond to lines in the log file being processed. Each event gets a timestamp, typically parsed directly from the input line, and a few other default properties like the originating machine. Then event keywords are added to an index file to speed up later searches and the event text is stored in a compressed file sitting right in the file system.
Scalability, (no) Backend

That brings us to the next point: there is no backend to manage, no database to set up, nothing. Splunk stores data directly in the file system. This is great for a number of reasons:

Installation is superfast. Splunk is available for more platforms than I can name here, but on Windows you run the installer, click next a few times and you are done in less than five minutes.

Scalability is easy. If a single Splunk server is not enough you just add another one. Incoming data is automatically distributed evenly and searches are directed to all Splunk instances so that speed increases with the number of machines holding data. Optionally redundancy can be enabled, so that each event is stored on two or more Splunk servers.

No single point of failure. I have seen too many environments where an overloaded database server slowed down half the applications in the data center without anyone finding the root cause. While this is a great use case for uberAgent my point is that this will not happen with Splunk.

Infinite retention without losing granularity. Some monitoring products only allow you to keep so many months, weeks or even days worth of data. Others reduce the granularity of older events, compressing many data points into one because of capacity limits. The same is not true for Splunk. It can literally index hundreds of terabytes per day and keep practically unlimited amounts of data. If you want to or need to compare the speed of last year’s user logons with today’s: go ahead!
Licensing, Download, Getting Started

If you would like to try out Splunk or uberAgent but do not really know where to start: our installation guide walks you through it.

Licensing in a nutshell: Splunk limits the amount of new data that can be indexed per day. A free version is available that is capped at 500 MB / day. When buying Splunk Enterprise licenses you buy daily indexed data volume, in other words gigabytes that can be added to Splunk per day. The number of Splunk servers the data is being stored on, how long you keep the data or over which periods of time you search is entirely up to you. Once the data is indexed, it is yours.

Happy splunking!

Wednesday, July 6, 2016

The Archive Bit

I never knew how backups actually work! Now I do!!

The archive bit is a file attribute that is set whenever a file is modified. For backups that use archive bits, the bit is turned off after the backup completes, indicating to the system that the file has been backed up. If the file is changed again before the next backup, the operating system turns the archive bit (also called the modified bit) back on, and the Backup Manager will back the file up again. By default, unless you specifically select the archive bit, the Backup Manager uses the last-modified date and time stamp to determine whether a file has been backed up.

Using the archive bit to determine changed files can cause confusion, however, if the user is not careful and the data selections for more than one backup job overlap. Consider this scenario: Jack has two backup jobs scheduled to run consecutively, named Documents and Work. The folder Monthly Reports was selected to be backed up by both jobs. Come backup time, the Documents job will back up the folder and then turn off the archive bit. When it's time for the Work job to run, it will find that the folder has already been backed up and skip it.

When the archive-bit method is used with a full, incremental, or mirror backup, the Backup Manager turns off the archive bit after each run. When used with differential backups, however, it only resets the bit during the first full backup, not in the subsequent differential runs; that way it always keeps backing up files that have changed since the first full backup.

Some people think the archive bit is evil........
http://www.computerworld.com/article/2598081/data-center/the-windows-archive-bit-is-evil-and-must-be-stopped.html
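
Out of curiosity, here is a small C sketch of my own showing roughly how a Windows program reads and clears the archive bit (the file path is made up, and a real backup tool obviously does far more than this):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    const char *path = "C:\\Temp\\report.docx";   /* hypothetical file */

    DWORD attrs = GetFileAttributesA(path);
    if (attrs == INVALID_FILE_ATTRIBUTES) {
        printf("Could not read attributes for %s\n", path);
        return 1;
    }

    if (attrs & FILE_ATTRIBUTE_ARCHIVE) {
        printf("Archive bit is ON -- the file changed since the last backup.\n");
        /* ...back the file up here, then clear the bit the way a backup tool would: */
        SetFileAttributesA(path, attrs & ~FILE_ATTRIBUTE_ARCHIVE);
    } else {
        printf("Archive bit is OFF -- nothing new to back up.\n");
    }
    return 0;
}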

Wednesday, May 18, 2016

NEC computers

Here's a blast from the past.........I own two of these NEC PowerMate 2000s and still play games on them with Windows 2000.  I wish I had a couple of the NEC PowerMate ecos.
http://www.manualslib.com/manual/109666/Nec-2000-Series.html

NEC ProSpeed 386 Microcomputer Vintage 386 Laptop Computer