Exam Name: Sun Certified Systems Installer for Sun Cluster 3.X
Questions and Answers: 136 Q&A
Updated On: February 21, 2018
It provides verification of the cluster install independent of Explorer.
It provides verification of cluster installation prerequisites prior to the install.
Which two activities must be accomplished before you leave the customer location after
any cluster installation? (Choose two.)
Run explorer and collect the output.
Create an Oracle database instance.
Create users to administer the cluster.
Set the terminal concentrator password.
Reboot the nodes and verify that all the nodes come up properly.
How does the start method of a scalable Apache instance know which Shared Address resource will be performing load balancing services on its behalf?
It does not know which Shared Address resource will be performing load balancing services.
It uses the SharedAddress resource in the same resource group as the Apache resource.
It uses any SharedAddress resource in the resource group on which the scalable Apache
It uses the Network_resources_used property of the Apache resource, which you are
required to provide.
Under what circumstance can you modify the Apache configuration file so that Apache listens only on the application-specific IP address that you will incorporate into your resource groups?
if you will be running in failover or scalable mode
only if you will be running Apache in failover mode
only if you will be running Apache in scalable mode
Which two statements are valid reasons for placing the httpd.conf file for a cluster in a directory other than /etc/apache? (Choose two.)
Placing the configuration file on local storage is unsupported.
The files that you find in /etc/apache after the standard Solaris install are inappropriate
for the cluster.
Placing the configuration file in the shared storage rather than in /etc/apache makes
managing the file easier since you only need one copy.
If the file httpd.conf is not in /etc/apache, then the standard unedited Solaris boot
scripts will not start Apache, and the cluster will work properly even if those boot scripts are present.
Which statement is true about a Veritas disk group in which you want to store your HA- oracle data?
The disk group should contain only disks physically attached to a single node.
You do not need to register the disk group with the cluster if you will be using raw devices to store the data.
The disks may be attached to exactly two nodes even if you want the Oracle data service to fail over to a third node too.
You must have a separate disk group dedicated for Oracle. It is impossible to combine data from two data services in the same disk group.
Which two actions can resolve performance degradation for I/O-intensive HA-Oracle? (Choose two.)
Use raw devices to store Oracle data rather than a global file system.
Do not add any HAStorage resources to your data service resource group.
Do not add any HAStoragePlus resources to your data service resource group.
Set AffinityOn to true in the HAStoragePlus resource type containing the Oracle global file system(s).
Set AffinityOn to false in the HAStoragePlus resource type containing the Oracle global file system(s).
Which two resource extension properties are required while configuring the SUNW.oracle_listener resource type? (Choose two.)
Which three resources must be registered for HA-Oracle in Sun Cluster 3.x? (Choose three.)
In a two node cluster, you have created a VxVM volume that will contain Oracle data, and you want to configure this volume as a local failover file system. What are three subsequent steps you must perform? (Choose three.)
Reboot both cluster nodes.
Put an entry in the vfstab on both cluster nodes.
Use scrgadm to create an HAStoragePlus resource.
Run scsetup to synchronize your new volume information to the global device namespace.
Use the vxedit command to set the owner and group of the new VxVM volume to oracle and dba, respectively.
When using WebSphere® Studio Application Developer (Application Developer), users often create complex, interrelated projects, such as when they are developing a large J2EE application. These interrelated projects can cause build performance problems, which only increase as more projects are added. This article discusses several alternative ways to resolve this problem. While the article focuses on enterprise-scale J2EE applications, the concepts and approaches apply to the creation of any large application with many interrelated component projects.
Set-up: optimized Java JVM configuration
Application Developer is written in Java™, which poses some configuration questions. The IBM® and Sun Java JVMs have default configurations suitable for a wide range of Java applications. Their initial heap size is 4 MB, with a maximum equal to half of the physical memory on your machine. Because Java applications cannot grow past the maximum heap size, they do not automatically consume a lot of memory, and can increase in size without consuming all of your memory.
If you use Application Developer to develop and test a large number of complex programs, you may need to adjust your maximum heap size. To estimate an appropriate heap size, check your machine's total physical memory and subtract the memory used by the operating system and your other running programs. The total memory remaining will help you determine the appropriate maximum heap size; in our example, it is 640 - 310 = 330 MB.
Now the tradeoffs. If you make your maximum heap too small, you'll run out of memory when building large applications. If you make it too large (or run too many concurrent applications), then parts of Application Developer will continuously swap in and out, causing an apparent system "hang" when swapping back in. If you make your initial heap too small, you'll slow startup as memory is continuously grabbed in pieces. If you make the initial heap too large, you'll prevent unused memory from being available to other system tasks.

Also, remember that the actual running Java program (Application Developer) requires memory in addition to its data heap. So consider a maximum heap of 2/3 of your available memory as calculated above, and an initial heap of about 2/3 of that maximum heap. (The maximum heap is the critical piece; the initial heap is just a startup optimization.) For major applications, you might even choose to set your initial heap equal to the maximum heap (as is often recommended for server applications). In our example with 330 MB of available memory, you could make your maximum heap 250 MB and your initial heap 150 MB.

When Java objects are released, the memory is not freed until "garbage collection" takes place. When the heap is initially used up, garbage collection walks through all old objects and finally frees the memory for reuse, causing an apparent "hang" during this processing. It may be better to be aggressive about regular garbage collection rather than letting garbage build up and result in one long collection. The default minimum free space to start collection is 30% and the default maximum free space to stop collection is 60%, but we really don't have enough experience yet to recommend changing the defaults.
You can customize the Application Developer configuration using parameters with the startup program (wsappdev.exe). For example:
wsappdev.exe -data MyWorkspace -vmargs -Xmx250m -Xms150m -Xminf0.40 -Xmaxf0.60
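After restarting with options like these, you can confirm what limits the JVM actually picked up. The small program below is only an illustrative utility (not part of Application Developer); it reports the running JVM's heap figures via the standard `Runtime` API:

```java
// Reports the heap limits the running JVM ended up with, so you can
// verify that -Xmx/-Xms took effect. Run it with the same -vmargs you
// plan to give Application Developer.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.println("max heap (MB):       " + rt.maxMemory() / mb);
        System.out.println("committed heap (MB): " + rt.totalMemory() / mb);
        System.out.println("free heap (MB):      " + rt.freeMemory() / mb);
    }
}
```

Launched with `java -Xmx250m -Xms150m HeapInfo`, the reported maximum should be close to 250 MB (the JVM may reserve a small amount for internal use).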
Combined Application Developer plus Application Server swap-file thrashing
However, if you are also debugging server applications using the embedded WebSphere Test Environment (WTE) inside Application Developer, then that WTE also tries to allocate 1/2 of physical memory for its JVM, and the total of the two heaps exceeds available memory and results in excessive memory swapping! Therefore, you need to explicitly divide the available memory between the WTE JVM (typically 100 MB) and the Application Developer JVM (the rest). So, in our example above with 640 MB of memory and 310 MB of operating system and other programs (leaving 330 MB available, perhaps allocating 250 MB for heaps), you might allocate 80 MB to WTE and the rest (250 - 80 = 170 MB) to Application Developer, rather than the default 640/2 = 320 MB that each would otherwise use:
wsappdev.exe -data MyWorkspace -vmargs -Xmx170m -Xms150m
Note for Application Developer V5.1: Application Developer V5.1 uses the new J9 JVM, which seems to require either that ms equal mx (for optimum performance but fixed memory consumption) or that mx exceed ms by at least 20% (mx >= ms * 1.20). So, the previous example needs either mx=180 or ms=140:
wsappdev.exe -data MyWorkspace -vmargs -Xmx170m -Xms140m
To specify the WTE heap within Application Developer, go to the Server perspective Server Configuration view, and open your server instance Editor Window. Select the Environment tab and enter -Xmx100M in the Java VM Arguments window. Then save and close that server editor window. Now, our combination of Application Developer plus WTE will effectively use 170 + 80 = 250 MB of heap, rather than allocating a combined 320 + 320 = 640 MB and thrashing with excessive swapping. A similar change is required if you run a stand-alone WebSphere Application Server outside of Application Developer (instead of using the embedded WTE).
Of course, your system should always be set up with good swap files (typically at least the size of your physical memory, with the initial size set equal to the maximum size on a previously defragmented disk), so that when swapping is needed it is reasonably efficient. It is also a good idea to defragment the drive containing your workspace (and your other drives, for that matter). With these steps, your system is all set.
"Load when used" memory usage
If you start Application Developer, then make a trivial change to a JSP and save it, the save will take quite a while, and the memory usage reported by Windows Task Manager will go up by 40 MB or more. Even closing that JSP does not cause the reported memory usage to decrease. This is not a memory leak. Application Developer (and its base Eclipse Workbench) loads functions and features only when they are first used. The trivial JSP change causes the JSP compilation and validation classes to be loaded, but AutoBuild also triggers an incremental check/build/validation of all projects, which causes most of the other builders and validators, and all project incremental build-state information, to be loaded. Subsequent changes to code will typically be much faster and require very little extra memory (although, as noted above, the heap is not garbage collected right away).
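This load-on-first-use behavior can be sketched in plain Java: a class's static initializer does not run until the class is first actively used, which is the same mechanism Eclipse plug-ins exploit. The `Feature` class below is a made-up stand-in for something like the JSP validators:

```java
// Minimal sketch of load-on-first-use. "Feature" stands in for a
// lazily loaded component (e.g. a validator); its static initializer
// runs only on first use, not at program startup.
public class LazyLoadDemo {
    static class Feature {
        static { System.out.println("Feature loaded and initialized"); }
        static int validate() { return 0; }  // pretend validation, 0 errors
    }

    public static void main(String[] args) {
        System.out.println("Workbench started");      // Feature not loaded yet
        System.out.println("Errors: " + Feature.validate()); // triggers the load
        System.out.println("Errors: " + Feature.validate()); // already loaded, fast
    }
}
```

The first call pays the one-time loading cost; the second is cheap, mirroring why only the first JSP save is slow.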
Background: AutoBuilds and Validation of dependent projects
In many projects, one or more Java files depend on Java files in another project, because they use those other Java classes or extend those other Java classes. Even worse, if one file in project A uses a file in project B, and a different file in project B uses a file in project C, and so on, then you can have many complex project dependencies. This often leads to circular dependencies between projects (even though there might not actually be any circular dependencies at the file level).
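Such project-level cycles can be found mechanically with a depth-first search over the dependency graph. The sketch below is illustrative (the project names are invented), not part of Application Developer:

```java
import java.util.*;

// Detects a cycle in a project dependency graph via depth-first search.
public class CycleCheck {
    // deps maps each project to the projects it depends on.
    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> finished = new HashSet<>();
        Set<String> onPath = new HashSet<>();
        for (String project : deps.keySet())
            if (dfs(project, deps, finished, onPath)) return true;
        return false;
    }

    private static boolean dfs(String p, Map<String, List<String>> deps,
                               Set<String> finished, Set<String> onPath) {
        if (onPath.contains(p)) return true;     // back edge: cycle found
        if (finished.contains(p)) return false;  // already fully explored
        onPath.add(p);
        for (String dep : deps.getOrDefault(p, List.of()))
            if (dfs(dep, deps, finished, onPath)) return true;
        onPath.remove(p);
        finished.add(p);
        return false;
    }

    public static void main(String[] args) {
        // projectA -> projectB -> projectC -> projectA is circular
        Map<String, List<String>> deps = Map.of(
            "projectA", List.of("projectB"),
            "projectB", List.of("projectC"),
            "projectC", List.of("projectA"));
        System.out.println("circular: " + hasCycle(deps)); // prints true
    }
}
```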
Thus, if one file changes, and if the Application Developer "AutoBuild" preference (Window => Preferences => Workbench => Perform build automatically on resource modification) is enabled, then every project may end up doing an incremental rebuild (including incremental validations). This can be quite time consuming. As well, if the "Validation" preference (projectName => Properties => Validation => Run validation when you save changes) is enabled, then various validators (HTML, EJB, MAP, SCHEMA, JSP, XML, etc.) run, and this can be even more time consuming.
Most applications have sets of projects containing commonly used library functions, and sets of projects containing widely used base classes. A change to them typically causes massive rebuilds even though their interfaces probably have not changed and the dependent files really don't need to be recompiled (but the compilers and builders don't know this and therefore all dependent files end up being rebuilt).
Solution 1: disable AutoBuilds (and Validation)
You can disable the AutoBuild preference (which also disables Validation), so that dependent files are not rebuilt. Unfortunately, neither are the files you change. You must click on the project and select build (for an incremental build), rebuild (for a full build), or run validation. Sorry, you cannot build just one file. If you forget to build after saving a file, you won't know about any compilation or validation errors, which can be annoying when you discover them later. Also, if you later re-enable the AutoBuild preference, an incremental build (and validation) is immediately done for every project in your workspace.
Solution 2: disable Validation
You can leave the system-wide AutoBuild preference enabled, but still disable Validation. Unfortunately there is no system-wide preference for it, and you need to right-click each project (you cannot multi-select) and select Properties => Validation => Run validation automatically on resource modification and then uncheck it. (This is disabled if AutoBuild is disabled, and is disabled for project types without validation such as Java projects.)
If you have many projects, it can take a long time to disable (or re-enable) validation for every project. A ValidationOnOff plug-in that turns Validation on and off for all projects will soon be available (isn't the Eclipse facility for dynamic plug-ins great!) as part of an article on WebSphere Developer Domain (see the WebSphere Studio Transition page).
Solution 3: close inactive projects
If you have 30 (or even 100) projects in your application, you are usually only working on a few at a time. To avoid ongoing system-wide builds and validations, you can close your inactive projects. This is fine if they just depend on your active projects, but if your active projects depend on your inactive ones, you will get compile errors. And, in both cases, you will not be able to debug your application, because closed projects will make pieces of it unavailable. Basically, this approach sounds nice on the surface but is not really a practical solution.
Solution 4: binary projects
As noted above, you are usually working on only a few of the projects in your application at a time. You want to avoid rebuilding the stable parts of your application, but still have their class files available when you rebuild or debug your active projects.
If the binary (compiled) part of each inactive project is stored in a JAR file, then the source in these projects can be removed. Actually, the process is to create the binary JAR, zip your source into a source JAR, delete the project contents, and then import the binary JAR contents back in (and probably the source JAR also, but leaving it as a JAR). The binary class files are extracted and available to other dependent projects, but there is no visible source, and therefore no rebuilds will occur within these projects. Even system-wide rebuilds will only end up rebuilding the active source projects.
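As a sketch of the packaging step, the binary JAR can also be produced programmatically with the standard `java.util.jar` API (the `jar` command-line tool works equally well); the directory and JAR names here are only examples:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.jar.*;
import java.util.stream.Stream;

// Packs every file under a build-output directory into a JAR,
// preserving relative paths -- the "binary JAR" step described above.
public class JarProject {
    static void pack(Path binDir, Path outJar) throws IOException {
        try (JarOutputStream jar = new JarOutputStream(Files.newOutputStream(outJar));
             Stream<Path> walk = Files.walk(binDir)) {
            for (Path p : (Iterable<Path>) walk.filter(Files::isRegularFile)::iterator) {
                // JAR entry names always use forward slashes
                String name = binDir.relativize(p).toString().replace('\\', '/');
                jar.putNextEntry(new JarEntry(name));
                Files.copy(p, jar);
                jar.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // e.g.: java JarProject bin projectA.jar
        pack(Paths.get(args[0]), Paths.get(args[1]));
    }
}
```

The source JAR would be produced the same way over the `src` tree before deleting the project contents.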
Of course, if one or more of these binary projects depend on your active code and if you change an interface, then you need to reload the source back in from wherever you stored it, rebuild that project, recreate the binary JAR and source JAR, and revert back to a binary project again. This is onerous, but since library interfaces don't change very often, this approach may be workable, especially with several independent development teams where each team uses binary projects for the other teams' projects and normal source projects for their own projects.
A disadvantage is that the project no longer has visible source for use by the Debugger. However, if you zipped the source into a source JAR, then you can tell the Debugger to use it by right-clicking on the project and selecting Properties => Java build path => select the JAR => Attach source => Archive. This approach can optimize development for applications involving many projects, but an analogous approach is much simpler and more automated: dependent project JARs.
Solution 5: dependent project JARs
In the dependent project JARs approach, the binary contents of each project are packaged into a JAR (as with binary projects), but other projects depend on that JAR instead of on the project, so the project source does not need to be removed. This approach works because, in Application Developer, a change in a dependent JAR does not cause an automatic rebuild of the files and projects that use it. (One can argue that it should rebuild in these cases, but the current behavior is convenient because it lets you break the project build dependencies.)
The requirement to re-create project JARs anytime the project is rebuilt would be time consuming, but the WebSphere Developer Domain article Developing J2EE utility JARs in WebSphere Studio Application Developer explains how to make Java projects automatically JAR their contents anytime the project is rebuilt. The article includes a downloadable plug-in (did I already mention that the Eclipse facility for dynamic plug-ins is great?).
In summary, to change from normal project dependencies to dependent project JARs, you want your J2EE Enterprise Application (EAR) project to contain the run-time modules, and the utility JAR plug-in to create the build-time JARs. Here is the procedure:
Special note: If your project was created by importing an EJB JAR, an EAR, or a WAR, then you likely have an imported_classes.jar file in your project. If your original JAR contained source for all the contained classes, delete it. If your original JAR contained only classes, then it should not have been imported and expanded; rather, it should be imported unexpanded as a file-system "library JAR" rather than as a project. If it contains source but also contains some classes without source, then this is a mess: you cannot delete it if you need any of the classes, but the other re-creatable classes in the JAR will also remain in the imported JAR, which is on your run-time classpath and will likely conflict with changes and rebuilds that you perform. Therefore, the best approach is to import JARs with re-buildable source for all contained classes, and put any binary-only classes in an external JAR library. Keep things clean and simple.
Then, when you change a source file in the project, it will be rebuilt (assuming you have the AutoBuild preference set) and the project JAR will be re-created, but other projects will not be rebuilt, since dependent JARs do not trigger rebuilds.
All the normal projects (and their project source) are still available for debugging, and their compiled code and the non-code artifacts (J2EE deployment descriptors, and so on) are still in the project and still packaged in the EAR. So the entire application still works and is still debuggable, just the same as always. The only thing that has changed is that build-time dependencies use the project JARs to break the rebuild project dependencies.
Keep in mind that if your main library interfaces change, then you must manually initiate a rebuild-all to ensure that all the dependent projects are rebuilt (you may want to use the ValidationOnOff plug-in to enable validation for all projects, then disable it when the rebuild is done). Otherwise, everything works automatically and seamlessly. Also, adjust the project build order by selecting Window => Preferences => Build Order => click project => Up/Down to ensure that your common library projects are rebuilt first anytime you do a rebuild-all.
Even with the advantages of the dependent project JARs approach, a large application may have so many complex projects that you may also wish to globally disable validation using the ValidationOnOff plug-in and then re-enable it only for a few active projects. Or you might even choose to disable both AutoBuild and Validation, though most users find AutoBuild convenient for the current active projects.
Many customers are now using Application Developer with a large number of complex interrelated projects, particularly when developing large J2EE applications. These interrelated projects can cause build performance problems, which increase as more projects are added. This article has described several ways to minimize these problems, the main one being the use of dependent project binary JARs instead of project build dependencies.
Article title: Optimizing Multi-Project Builds Using Dependent Project JARs in WebSphere Studio Application Developer
Board the boat through the integral swim platform with underwater lights, which has a foldaway aft-facing seat mounted on the transom wall, making this a perfect area for small-game fishing or sun tanning. Entry to the cockpit is via the acrylic transom door located on the port side of the boat. Fwd of the transom door is the extra-large cockpit wet bar with solid-surface countertop and built-in Kenyon grill. There are opposing bench seats in the cockpit (aft and fwd) with built-in fiberglass storage bases, a removable solid teak cockpit table with base, and sun pad filler cushions. Next to the fwd cockpit bench is the wide bench-style helm seat with two flip-up thigh-rise bolsters and an innovative fiberglass bridge arch top with aft sunshade.
The helm is located fwd to stbd and is equipped with state-of-the-art SmartCraft diagnostics, a Raymarine C-80 radar/chartplotter/fish finder, a Ritchie compass, and controls for the engines, DTS electronic shifts, bow and stern thrusters (Side-Power), SeaStar power-assist steering, Quick windlass, and trim tabs.
Entry to the lower deck cabin is through the lockable sliding glass door, just port of the helm. Inside the open cabin you will find gorgeous Cherry wood interior, Teak & Holly cabin steps, elegant wood flooring in the salon & mid-berth, and beautiful LED lighting throughout the cabin.
Behind the cabin steps is the mid-berth that serves as a conversation pit as well as additional sleeping accommodations. It is furnished with hideaway privacy curtain, mirrored bulkhead, ultraleather HP seating that converts to a double berth with slide-out base and dedicated filler cushion storage, drawer storage, upper & lower storage.
Fwd of the mid-berth to port is the boat's fully enclosed day head fitted with simulated teak flooring, a shower area with curtain, a VacuFlush head, and fixtures such as a full-length mirror on the head's door, medicine cabinet, upper shelf storage with stainless steel rails, vanity with storage below, solid surface countertop, and sink with pull-out faucet/sprayer.
Fwd of the mid-berth to stbd is the salon/dinette, which features an ultraleather HP sofa that converts to a bed, gunwale cabinets, a wall-mounted TV, a Nesa DVD player, a Sony stereo system with CD player, and a removable solid wood dinette table with dedicated storage.
Opposite the salon/dinette is the gourmet galley, which offers amenities such as a solid surface countertop with stainless steel sink & faucet, LG Cafe Combo microwave oven with coffeemaker, Norcold refrigerator, recessed 2-burner Kenyon stovetop with solid surface cover & dedicated storage, storage drawers with an insert for cutlery, upper & lower storage cabinets, and a trash receptacle.
Full fwd is the cozy V-berth furnished with a double bed with storage below and an elastic foam mattress. It is furnished with hideaway privacy curtain, mirrored rope locker bulkhead, stbd storage cabinet with shelves, gunwale storage, and hanging locker (port).
The Company offers the details of this vessel in good faith but cannot guarantee or warrant the accuracy of this information nor warrant the condition of the vessel. A buyer should instruct his agents, or his surveyors, to investigate such details as the buyer desires validated. This vessel is offered subject to prior sale, price change, or withdrawal without notice.
Michael Felci, The Desert Sun 9:34 a.m. PDT October 2, 2014
Tuesday: Tom Petty and the Heartbreakers in Anaheim
Over the course of his four decades in the music business, Tom Petty has notched 15 top-40 hits, sold more than 80 million albums, been a member of a genuine super group (the Traveling Wilburys), inspired a tribute festival (Petty Fest) and been inducted into the Rock and Roll Hall of Fame.
With a resume like that, you wouldn’t blame him for kicking back a little and milking the nostalgia circuit for all it’s worth. But Petty, 63, has stayed both relevant and humble — even when bands blatantly rip off his songs. (The Strokes and Red Hot Chili Peppers, I’m talkin’ to you guys.)
Petty’s latest album with long-time backing band the Heartbreakers — 2014’s “Hypnotic Eye” — was the group’s first to debut at No. 1 on the charts and received glowing reviews from both Rolling Stone and SPIN. Next week they’ll perform a pair of shows in So Cal with another classic rock artist who has aged gracefully, Steve Winwood.
• Tom Petty and the Heartbreakers with special guest Steve Winwood, 7:30 p.m. Tuesday, Honda Center, 2695 E. Katella Avenue, Anaheim. $39.50-$129.50. Information: (714) 704-2400,
Other highlights this week include:
All you need is (Mike) Love
Mike Love gets a bad rap sometimes.
Maybe it’s because he appeared less than gracious when he gave a rambling, confrontational speech when the Beach Boys were inducted into the Rock and Roll Hall of Fame nearly 25 years ago. Or maybe it’s because of the way he ended the legendary group’s recent 50th anniversary tour — abruptly, leaving Brian Wilson wondering what happened. (To be fair, it doesn’t take much to confuse him.)
But according to Love, he’s not an angry, domineering guy at all. And he has Transcendental Meditation to thank for it.
“It’s kind of like a secret weapon for me,” the 73-year-old frontman told The Desert Sun recently. “When you practice TM, you find a level of thought that transcends and goes to the source of thought.”
Far out, man.
On Friday, Love brings the latest Beach Boys lineup — including long-time members Bruce Johnston and David Marks — to The Show.
• The Beach Boys in concert, 9 p.m. Friday, The Show, Agua Caliente Casino Resort Spa, Rancho Mirage. (888) 999-1995.
Welcome back to the Hotel California
Despite being led by a Michigander (Glenn Frey) and a Texan (Don Henley), no group epitomized the laid-back So Cal sound of the 1970s more than the Eagles. Inspired by the layered harmonies of Crosby, Stills & Nash and the country twang of Gram Parsons’ Flying Burrito Brothers, the group personified the liberated spirit of the era with songs like “Take It Easy” and “Peaceful Easy Feeling,” while imagining themselves as western outlaws on albums like “Desperado” and “On the Border.”
Then Joe Walsh showed up.
The ex-James Gang guitarist brought a hard-partying reputation and a harder rock sound to the equation, which he immediately showcased on “Life In the Fast Lane” — a cautionary tale of excess and decadence. According to legend, Walsh’s blistering guitar riff started off as a warm-up exercise, but when Frey and Henley heard it, they made him play it again and crafted a song around it.
In the 2013 documentary,“History Of The Eagles,” Frey recalled the lyrical inspiration for “Life In the Fast Lane” this way: “I was riding shotgun in a Corvette with a drug dealer on the way to a poker game. The next thing I know we’re doing 90. Holding! Big-Time! I say ‘Hey man!’ He grins and goes ‘Life in the fast lane!’ I thought, ‘Now there’s a song title.’ ”
This weekend, the Eagles return to the fast lanes of So Cal for shows in Anaheim and San Diego.
• The Eagles in concert, 8 p.m. Friday, Honda Center, Anaheim. 8 p.m. Saturday, Viejas Arena at San Diego State University, San Diego. Information:
An amazing journey
It took more than two decades for The Who’s “Tommy” to be adapted into a stage production — the kind that doesn’t climax with the players destroying their instruments.
Dubbed a “rock opera” by creator Pete Townshend (tongue planted firmly in cheek), the 1969 album resonated with spiritual seekers of the era, even if the story about a deaf, dumb and blind boy’s transition into a messiah-type figure is disjointed at best. But great songs like “Amazing Journey,” “The Acid Queen,” “I’m Free” and “Pinball Wizard” hold the concept together.
• “The Who’s Tommy,” 8 p.m. Friday-Saturday, 2 p.m. Sunday, through Oct. 12. Palm Canyon Theatre, 538 N. Palm Canyon Drive, Palm Springs. $36. Information: (760) 323-5123,
Brandon Belt is resuming baseball activities. Throw open those curtains and let the sun in. This is good news.
Of course, that doesn't mean that he's guaranteed to make it back before the season ends. Belt's been cleared to resume baseball activities before, but the symptoms of the concussion kept reappearing. Still, the slow march toward activation is better than the myriad of alternatives. From Alex Pavlovic:
First baseman Brandon Belt is "doing well," manager Bruce Bochy said, and Belt will begin baseball work sometime this week. The hope all along was that he would be cleared to resume baseball activities next week, and he remains on track after three weeks of rehab for vision problems related to a concussion.
As noted several times before, with both Belt and Hector Sanchez, the most important thing is for the human beings to resume their normal, pre-concussion lives. This trumps whatever game-winning hit either player might have in a September Dodgers series. Get healthy first, worry about baseball second. That should always be the modus operandi if you don't want to be a ghoul.
However, with the positive news on Belt, I don't think it's ghoulish at all to think about him in a lineup and sigh, deeply and yearningly. Since Belt's injury, a few things happened:
When Belt left, Blanco was an everyday center fielder who was Candlestick cold. Morse was a hard-swinging derelict who couldn't field. Dan Uggla had been an ex-Giant for exactly one day, and there wasn't an obvious replacement yet. Everything was in shambles. Now look at this potential lineup, knowing what you know now (or what you think you know now, at least).
That's a ... why, that's a mighty decent lineup. Assuming that Panik isn't really a .250/.310/.330 hitter in Scutaro's clothing, that Belt is actually healthy and ready to contribute, and that Morse is healthy. Those are a lot of assumptions, but if they're all valid, look at that normal lineup. Looooook. Before we realized that Scutaro was irreparably damaged, this is the lineup we were hoping for in the offseason. There's a guy with power hitting seventh. Seventh! On July 3, Tyler Colvin was hitting fifth, with Adam Duvall behind him.
If Belt needs more time, well, that's bad news on a personal level, and the dream of a normal, healthy lineup should be suspended in favor of a dream of a normal, healthy person. But if he can come back and help, look at that normal, healthy lineup.
It seems obvious to write "good player makes team better", and I'm a little self-conscious about just how obvious it is. For whatever reason, though, when players disappear from the roster, I almost forget about them entirely. This wasn't an issue in 2011, of course, and there's something about the Angel Pagan magic talisman of oxymoronity and wins that's hard to ignore. Usually, though, I look at things like Tyler Colvin hitting fifth and Joaquin Arias playing first and grumble about them that very second, without remembering exactly why those players are in the lineup in the first place. I rarely step back and remember what the lineup is supposed to look like, and how spiffy it would be to have it back.
If you're like me, you're all excited at this reminder that Belt might play again this year, and you're excited about your temporary suspension of tunnel vision.
If you're not like me, you're at the penultimate paragraph and wondering what you just read. Sorry about that. You deep thinkers really aren't my target audience, you know.
Regardless, Belt is resuming baseball activities and could be back before the end of the season. That's reason for optimism on a couple levels, but don't forget about the part where he makes the team better. Don't forget that underrated part.